query_id (string, 32 chars) | query (string, 5-5.38k chars) | positive_passages (list, 1-23 items) | negative_passages (list, 4-100 items) | subset (7 classes) |
---|---|---|---|---|
c94a0b4c7af3c43f027b3e3e6d2fe35c
|
When things matter: A survey on data-centric internet of things
|
[
{
"docid": "91e93ebb9503a83f20d349d87d8f74dd",
"text": "Data stream mining is an active research area that has recently emerged to discover knowledge from large amounts of continuously generated data. In this context, several data stream clustering algorithms have been proposed to perform unsupervised learning. Nevertheless, data stream clustering imposes several challenges to be addressed, such as dealing with nonstationary, unbounded data that arrive in an online fashion. The intrinsic nature of stream data requires the development of algorithms capable of performing fast and incremental processing of data objects, suitably addressing time and memory limitations. In this article, we present a survey of data stream clustering algorithms, providing a thorough discussion of the main design components of state-of-the-art algorithms. In addition, this work addresses the temporal aspects involved in data stream clustering, and presents an overview of the usually employed experimental methodologies. A number of references are provided that describe applications of data stream clustering in different domains, such as network intrusion detection, sensor networks, and stock market analysis. Information regarding software packages and data repositories are also available for helping researchers and practitioners. Finally, some important issues and open questions that can be subject of future research are discussed.",
"title": ""
}
] |
[
{
"docid": "153d23d5f736b9a9e0f3cb88e61dc400",
"text": "Context\nTrichostasis spinulosa (TS) is a common but underdiagnosed follicular disorder involving retention of successive telogen hair in the hair follicle. Laser hair removal is a newer treatment modality for TS with promising results.\n\n\nAims\nThis study aims to evaluate the efficacy of 800 nm diode laser to treat TS in Asian patients.\n\n\nSubjects and Methods\nWe treated 50 Indian subjects (Fitzpatrick skin phototype IV-V) with untreated trichostasis spinulosa on the nose with 800 nm diode laser at fluence ranging from 22 to 30 J/cm2 and pulse width of 30 ms. The patients were given two sittings at 8 week intervals. The evaluation was done by blinded assessment of photographs by independent dermatologists.\n\n\nResults\nTotally 45 (90%) patients had complete clearance of the lesions at the end of treatment. Five (10%) subjects needed one-third sitting for complete clearance. 45 patients had complete resolution and no recurrence even at 2 years follow-up visit. 5 patients had partial recurrence after 8-9 months and needed an extra laser session.\n\n\nConclusions\nLaser hair reduction in patients with TS targets and removes the hair follicles which are responsible for the plugged appearance. Due to permanent ablation of the hair bulb and bulge, the recurrence which is often seen with other modalities of treatment for TS is not observed here.",
"title": ""
},
{
"docid": "acb3aaaf79ebc3fc65724e92e4d076aa",
"text": "Lay dispositionism refers to lay people's tendency to use traits as the basic unit of analysis in social perception (L. Ross & R. E. Nisbett, 1991). Five studies explored the relation between the practices indicative of lay dispositionism and people's implicit theories about the nature of personal attributes. As predicted, compared with those who believed that personal attributes are malleable (incremental theorists), those who believed in fixed traits (entity theorists) used traits or trait-relevant information to make stronger future behavioral predictions (Studies 1 and 2) and made stronger trait inferences from behavior (Study 3). Moreover, the relation between implicit theories and lay dispositionism was found in both the United States (a more individualistic culture) and Hong Kong (a more collectivistic culture), suggesting this relation to be generalizable across cultures (Study 4). Finally, an experiment in which implicit theories were manipulated provided preliminary evidence for the possible causal role of implicit theories in lay dispositionism (Study 5).",
"title": ""
},
{
"docid": "e7a13f146c77d52b72a691ebb6671240",
"text": "The recent diversification of telephony infrastructure allows users to communicate through landlines, mobile phones and VoIP phones. However, call metadata such as Caller-ID is either not transferred or transferred without verification across these networks, allowing attackers to maliciously alter it. In this paper, we develop PinDr0p, a mechanism to assist users in determining call provenance - the source and the path taken by a call. Our techniques detect and measure single-ended audio features to identify all of the applied voice codecs, calculate packet loss and noise profiles, while remaining agnostic to characteristics of the speaker's voice (as this may legitimately change when interacting with a large organization). In the absence of verifiable call metadata, these features in combination with machine learning allow us to determine the traversal of a call through as many as three different providers (e.g., cellular, then VoIP, then PSTN and all combinations and subsets thereof) with 91.6% accuracy. Moreover, we show that once we identify and characterize the networks traversed, we can create detailed fingerprints for a call source. Using these fingerprints we show that we are able to distinguish between calls made using specific PSTN, cellular, Vonage, Skype and other hard and soft phones from locations across the world with over 90% accuracy. In so doing, we provide a first step in accurately determining the provenance of a call.",
"title": ""
},
{
"docid": "1b9778fd4238c4d562b01b875d2f72de",
"text": "In this paper a stain sensor to measure large strain (80%) in textiles is presented. It consists of a mixture of 50wt-% thermoplastic elastomer (TPE) and 50wt-% carbon black particles and is fiber-shaped with a diameter of 0.315mm. The attachment of the sensor to the textile is realized using a silicone film. This sensor configuration was characterized using a strain tester and measuring the resistance (extension-retraction cycles): It showed a linear resistance response to strain, a small hysteresis, no ageing effects and a small dependance on the strain velocity. The total mean error caused by all these effects was +/-5.5% in strain. Washing several times in a conventional washing machine did not influence the sensor properties. The paper finishes by showing an example application where 21 strain sensors were integrated into a catsuit. With this garment, 27 upper body postures could be recognized with an accuracy of 97%.",
"title": ""
},
{
"docid": "5f3dfd97498034d0a104bf41149651f2",
"text": "BACKGROUND\nResearch questionnaires are not always translated appropriately before they are used in new temporal, cultural or linguistic settings. The results based on such instruments may therefore not accurately reflect what they are supposed to measure. This paper aims to illustrate the process and required steps involved in the cross-cultural adaptation of a research instrument using the adaptation process of an attitudinal instrument as an example.\n\n\nMETHODS\nA questionnaire was needed for the implementation of a study in Norway 2007. There was no appropriate instruments available in Norwegian, thus an Australian-English instrument was cross-culturally adapted.\n\n\nRESULTS\nThe adaptation process included investigation of conceptual and item equivalence. Two forward and two back-translations were synthesized and compared by an expert committee. Thereafter the instrument was pretested and adjusted accordingly. The final questionnaire was administered to opioid maintenance treatment staff (n=140) and harm reduction staff (n=180). The overall response rate was 84%. The original instrument failed confirmatory analysis. Instead a new two-factor scale was identified and found valid in the new setting.\n\n\nCONCLUSIONS\nThe failure of the original scale highlights the importance of adapting instruments to current research settings. It also emphasizes the importance of ensuring that concepts within an instrument are equal between the original and target language, time and context. If the described stages in the cross-cultural adaptation process had been omitted, the findings would have been misleading, even if presented with apparent precision. Thus, it is important to consider possible barriers when making a direct comparison between different nations, cultures and times.",
"title": ""
},
{
"docid": "23677c0107696de3cc630f424484284a",
"text": "With the development of expressway, the vehicle path recognition based on RFID is designed and an Electronic Toll Collection system of expressway will be implemented. It uses a passive RFID tag as carrier to identify Actual vehicle path in loop road. The ETC system will toll collection without parking, also census traffic flow and audit road maintenance fees. It is necessary to improve expressway management.",
"title": ""
},
{
"docid": "02a6e024c1d318862ad4c17b9a56ca36",
"text": "Artificial food colors (AFCs) have not been established as the main cause of attention-deficit hyperactivity disorder (ADHD), but accumulated evidence suggests that a subgroup shows significant symptom improvement when consuming an AFC-free diet and reacts with ADHD-type symptoms on challenge with AFCs. Of children with suspected sensitivities, 65% to 89% reacted when challenged with at least 100 mg of AFC. Oligoantigenic diet studies suggested that some children in addition to being sensitive to AFCs are also sensitive to common nonsalicylate foods (milk, chocolate, soy, eggs, wheat, corn, legumes) as well as salicylate-containing grapes, tomatoes, and orange. Some studies found \"cosensitivity\" to be more the rule than the exception. Recently, 2 large studies demonstrated behavioral sensitivity to AFCs and benzoate in children both with and without ADHD. A trial elimination diet is appropriate for children who have not responded satisfactorily to conventional treatment or whose parents wish to pursue a dietary investigation.",
"title": ""
},
{
"docid": "8f21eee8a4320baebe0fe40364f6580e",
"text": "The dup system related subjects others recvfrom and user access methods. The minimal facilities they make up. A product before tackling 'the design, decisions they probably should definitely. Multiplexer'' interprocess communication in earlier addison wesley has the important features a tutorial. Since some operating system unstructured devices a process init see. At berkeley software in earlier authoritative technical information on write operations. The lowest unused multiprocessor support for, use this determination. No name dot spelled with the system. Later it a file several, reasons often single user interfacesis excluded except.",
"title": ""
},
{
"docid": "b637196c4627fd463ca54d0efeb87370",
"text": "Vision-based lane detection is a critical component of modern automotive active safety systems. Although a number of robust and accurate lane estimation (LE) algorithms have been proposed, computationally efficient systems that can be realized on embedded platforms have been less explored and addressed. This paper presents a framework that incorporates contextual cues for LE to further enhance the performance in terms of both computational efficiency and accuracy. The proposed context-aware LE framework considers the state of the ego vehicle, its surroundings, and the system-level requirements to adapt and scale the LE process resulting in substantial computational savings. This is accomplished by synergistically fusing data from multiple sensors along with the visual data to define the context around the ego vehicle. The context is then incorporated as an input to the LE process to scale it depending on the contextual requirements. A detailed evaluation of the proposed framework on real-world driving conditions shows that the dynamic and static configuration of the lane detection process results in computation savings as high as 90%, without compromising on the accuracy of LE.",
"title": ""
},
{
"docid": "2e57cf33adf048552c4a06f6a2f1c132",
"text": "Efficient fastest path computation in the presence of varying speed conditions on a large scale road network is an essential problem in modern navigation systems. Factors affecting road speed, such as weather, time of day, and vehicle type, need to be considered in order to select fast routes that match current driving conditions. Most existing systems compute fastest paths based on road Euclidean distance and a small set of predefined road speeds. However, “History is often the best teacher”. Historical traffic data or driving patterns are often more useful than the simple Euclidean distance-based computation because people must have good reasons to choose these routes, e.g., they may want to avoid those that pass through high crime areas at night or that likely encounter accidents, road construction, or traffic jams. In this paper, we present an adaptive fastest path algorithm capable of efficiently accounting for important driving and speed patterns mined from a large set of traffic data. The algorithm is based on the following observations: (1) The hierarchy of roads can be used to partition the road network into areas, and different path pre-computation strategies can be used at the area level, (2) we can limit our route search strategy to edges and path segments that are actually frequently traveled in the data, and (3) drivers usually traverse the road network through the largest roads available given the distance of the trip, except if there are small roads with a significant speed advantage over the large ones. Through an extensive experimental evaluation on real road networks we show that our algorithm provides desirable (short and well-supported) routes, and that it is significantly faster than competing methods.",
"title": ""
},
{
"docid": "4fa1b8c7396e636216d0c1af0d1adf15",
"text": "Modern smartphone platforms have millions of apps, many of which request permissions to access private data and resources, like user accounts or location. While these smartphone platforms provide varying degrees of control over these permissions, the sheer number of decisions that users are expected to manage has been shown to be unrealistically high. Prior research has shown that users are often unaware of, if not uncomfortable with, many of their permission settings. Prior work also suggests that it is theoretically possible to predict many of the privacy settings a user would want by asking the user a small number of questions. However, this approach has neither been operationalized nor evaluated with actual users before. We report on a field study (n=72) in which we implemented and evaluated a Personalized Privacy Assistant (PPA) with participants using their own Android devices. The results of our study are encouraging. We find that 78.7% of the recommendations made by the PPA were adopted by users. Following initial recommendations on permission settings, participants were motivated to further review and modify their settings with daily “privacy nudges.” Despite showing substantial engagement with these nudges, participants only changed 5.1% of the settings previously adopted based on the PPA’s recommendations. The PPA and its recommendations were perceived as useful and usable. We discuss the implications of our results for mobile permission management and the design of personalized privacy assistant solutions.",
"title": ""
},
{
"docid": "9c17325056a96f5086d324936c9f06ce",
"text": "Fingertip suction is investigated using a compliant, underactuated, tendon-driven hand designed for underwater mobile manipulation. Tendon routing and joint stiffnesses are designed to provide ease of closure while maintaining finger rigidity, allowing the hand to pinch small objects, as well as secure large objects, without diminishing strength. While the hand is designed to grasp a range of objects, the addition of light suction flow to the fingertips is especially effective for small, low-friction (slippery) objects. Numerical simulations confirm that changing suction parameters can increase the object acquisition region, providing guidelines for future versions of the hand.",
"title": ""
},
{
"docid": "10947ff2f981ddf28934df8ac640208d",
"text": "The future of tropical forest biodiversity depends more than ever on the effective management of human-modified landscapes, presenting a daunting challenge to conservation practitioners and land use managers. We provide a critical synthesis of the scientific insights that guide our understanding of patterns and processes underpinning forest biodiversity in the human-modified tropics, and present a conceptual framework that integrates a broad range of social and ecological factors that define and contextualize the possible future of tropical forest species. A growing body of research demonstrates that spatial and temporal patterns of biodiversity are the dynamic product of interacting historical and contemporary human and ecological processes. These processes vary radically in their relative importance within and among regions, and have effects that may take years to become fully manifest. Interpreting biodiversity research findings is frequently made difficult by constrained study designs, low congruence in species responses to disturbance, shifting baselines and an over-dependence on comparative inferences from a small number of well studied localities. Spatial and temporal heterogeneity in the potential prospects for biodiversity conservation can be explained by regional differences in biotic vulnerability and anthropogenic legacies, an ever-tighter coupling of human-ecological systems and the influence of global environmental change. These differences provide both challenges and opportunities for biodiversity conservation. Building upon our synthesis we outline a simple adaptive-landscape planning framework that can help guide a new research agenda to enhance biodiversity conservation prospects in the human-modified tropics.",
"title": ""
},
{
"docid": "aabed671a466730e273225d8ee572f73",
"text": "It is essential to base instruction on a foundation of understanding of children’s thinking, but it is equally important to adopt the longer-term view that is needed to stretch these early competencies into forms of thinking that are complex, multifaceted, and subject to development over years, rather than weeks or months. We pursue this topic through our studies of model-based reasoning. We have identified four forms of models and related modeling practices that show promise for developing model-based reasoning. Models have the fortuitous feature of making forms of student reasoning public and inspectable—not only among the community of modelers, but also to teachers. Modeling provides feedback about student thinking that can guide teaching decisions, an important dividend for improving professional practice.",
"title": ""
},
{
"docid": "983ce31c421bcf16bd44633a4b290d41",
"text": "Continuous development of appropriate software packages makes simulation of power engineering problems more and more effective. However, these analysis tools differ from each other considerably from the point of view of the applicability to a special problem. The authors compare two widespread environments: MATLAB-SIMULINK, which can be used to simulate a wide spectrum of dynamic systems, and ATP-EMTP, which is specific software to simulate power system transient problems. In the first part of the paper the components (function-blocks) that can be used to build a circuit, are listed. Then three examples are presented which demonstrate the capabilities and underline the advantages and drawbacks of both programs.",
"title": ""
},
{
"docid": "13ae30bc5bcb0714fe752fbe9c7e5de8",
"text": "The increasing interest in integrating intermittent renewable energy sources into microgrids presents major challenges from the viewpoints of reliable operation and control. In this paper, the major issues and challenges in microgrid control are discussed, and a review of state-of-the-art control strategies and trends is presented; a general overview of the main control principles (e.g., droop control, model predictive control, multi-agent systems) is also included. The paper classifies microgrid control strategies into three levels: primary, secondary, and tertiary, where primary and secondary levels are associated with the operation of the microgrid itself, and tertiary level pertains to the coordinated operation of the microgrid and the host grid. Each control level is discussed in detail in view of the relevant existing technical literature.",
"title": ""
},
{
"docid": "6ef663a3ed45bfce199d3b8f0c3143a8",
"text": "In this paper we present a quaternion-based Extended Kalman Filter (EKF) for estimating the three-dimensional orientation of a rigid body. The EKF exploits the measurements from an Inertial Measurement Unit (IMU) that is integrated with a tri-axial magnetic sensor. Magnetic disturbances and gyro bias errors are modeled and compensated by including them in the filter state vector. We employ the observability rank criterion based on Lie derivatives to verify the conditions under which the nonlinear system that describes the process of motion tracking by the IMU is observable, namely it may provide sufficient information for performing the estimation task with bounded estimation errors. The observability conditions are that the magnetic field, perturbed by first-order Gauss-Markov magnetic variations, and the gravity vector are not collinear and that the IMU is subject to some angular motions. Computer simulations and experimental testing are presented to evaluate the algorithm performance, including when the observability conditions are critical.",
"title": ""
},
{
"docid": "c1632ead357d08c3e019bb12ff75e756",
"text": "Learning the representations of nodes in a network can benefit various analysis tasks such as node classification, link prediction, clustering, and anomaly detection. Such a representation learning problem is referred to as network embedding, and it has attracted significant attention in recent years. In this article, we briefly review the existing network embedding methods by two taxonomies. The technical taxonomy focuses on the specific techniques used and divides the existing network embedding methods into two stages, i.e., context construction and objective design. The non-technical taxonomy focuses on the problem setting aspect and categorizes existing work based on whether to preserve special network properties, to consider special network types, or to incorporate additional inputs. Finally, we summarize the main findings based on the two taxonomies, analyze their usefulness, and discuss future directions in this area.",
"title": ""
},
{
"docid": "8bd93bf2043a356ff40531acb372992d",
"text": "Liver lesion segmentation is an important step for liver cancer diagnosis, treatment planning and treatment evaluation. LiTS (Liver Tumor Segmentation Challenge) provides a common testbed for comparing different automatic liver lesion segmentation methods. We participate in this challenge by developing a deep convolutional neural network (DCNN) method. The particular DCNN model works in 2.5D in that it takes a stack of adjacent slices as input and produces the segmentation map corresponding to the center slice. The model has 32 layers in total and makes use of both long range concatenation connections of U-Net [1] and short-range residual connections from ResNet [2]. The model was trained using the 130 LiTS training datasets and achieved an average Dice score of 0.67 when evaluated on the 70 test CT scans, which ranked first for the LiTS challenge at the time of the ISBI 2017 conference.",
"title": ""
},
{
"docid": "d72092cd909d88e18598925024dc6b97",
"text": "This paper focuses on the robust dissipative fault-tolerant control problem for one kind of Takagi-Sugeno (T-S) fuzzy descriptor system with actuator failures. The solvable conditions of the robust dissipative fault-tolerant controller are given by using of the Lyapunov theory, Lagrange interpolation polynomial theory, etc. These solvable conditions not only make the closed loop system dissipative, but also integral for the actuator failure situation. The dissipative fault-tolerant controller design methods are given by the aid of the linear matrix inequality toolbox, the function of randomly generated matrix, loop statement, and numerical solution, etc. Thus, simulation process is fully intelligent and efficient. At the same time, the design methods are also obtained for the passive and H∞ fault-tolerant controllers. This explains the fact that the dissipative control unifies H∞ control and passive control. Finally, we give example that illustrates our results.",
"title": ""
}
] |
scidocsrr
|
077776e8dace52304bcc82d7a34de588
|
Learning Disabilities in Arithmetic and Mathematics Theoretical and Empirical Perspectives
|
[
{
"docid": "1a5c79a9f2c22681dc558876d5b358e5",
"text": "An evolution-based framework for understanding biological and cultural influences on children's cognitive and academic development is presented. The utility of this framework is illustrated within the mathematical domain and serves as a foundation for examining current approaches to educational reform in the United States. Within this framework, there are two general classes of cognitive ability, biologically primary and biologically secondary. Biologically primary cognitive abilities appear to have evolved largely by means of natural or sexual selection. Biologically secondary cognitive abilities reflect the co-optation of primary abilities for purposes other than the original evolution-based function and appear to develop only in specific cultural contexts. A distinction between these classes of ability has important implications for understanding children's cognitive development and achievement.",
"title": ""
}
] |
[
{
"docid": "8b22382f560edbd6776e080acd07fd7e",
"text": "Emerging evidence suggests that people do not have difficulty judging covariation per se but rather have difficulty decoding standard displays such as scatterplots. Using the data analysis software Tinkerplots, I demonstrate various alternative representations that students appear to be able to use quite effectively to make judgments about covariation. More generally, I argue that data analysis instruction in K-12 should be structured according to how statistical reasoning develops in young students and should, for the time begin, not target specific graphical representations as objectives of instruction. TINKERPLOTS: SOFTWARE FOR THE MIDDLE SCHOOL The computer's potential to improve the teaching of data analysis is now a well-known It includes its power to illuminate key concepts through simulations and multiple-linked representations. It also includes its ability to free students up, at the appropriate time, from time-intensive tasks—from what National Council of Teachers of Mathematics (1989) Standards referred to as the \" narrow aspects of statistics \" (p. 113). This potentially allows instruction to focus more attention on the processes of data analysis—exploring substantive questions of interest, searching for and interpreting patterns and trends in data, and communicating findings. However, as Biehler (1995) has suggested, the younger the student, the more difficult it is to design an appropriate tool for learning statistics. Most of the existing tools for young students have been developed from the \" top down. \" They provide a subset of conventional plots and thus are simpler than professional tools only in that they have fewer options. These \" simplified professional tools \" are ill-suited to younger students who \" need a tool that is designed from their bottom-up perspective of statistical novices and can develop in various ways into a full professional tool (not vice versa) \" (p.3). Tinkerplots is a data analysis tool for the middle school that we are designing \" from the bottom up \" (Konold & Miller, 2001). When a data set is first opened in Tinkerplots, a plot window appears showing a haphazard arrangement of data icons on the screen. As in Tabletop (see Hancock, Kaput & Goldsmith, 1992), each icon represents an individual case. But in Tinkerplots, rather than choosing from a menu of existing plot types (e.g., bar graph, pie chart, scatterplot), students progressively organize the data using a small set of intuitive operators including \" stack, \" \" order, \" and \" separate \". By using these operators in different combinations, …",
"title": ""
},
{
"docid": "18df6df67ced4564b3873d487a25f2d9",
"text": "The past few years have seen a dramatic increase in the performance of recognition systems thanks to the introduction of deep networks for representation learning. However, the mathematical reasons for this success remain elusive. A key issue is that the neural network training problem is nonconvex, hence optimization algorithms may not return a global minima. This paper provides sufficient conditions to guarantee that local minima are globally optimal and that a local descent strategy can reach a global minima from any initialization. Our conditions require both the network output and the regularization to be positively homogeneous functions of the network parameters, with the regularization being designed to control the network size. Our results apply to networks with one hidden layer, where size is measured by the number of neurons in the hidden layer, and multiple deep subnetworks connected in parallel, where size is measured by the number of subnetworks.",
"title": ""
},
{
"docid": "605201e9b3401149da7e0e22fdbc908b",
"text": "Roadway traffic safety is a major concern for transportation governing agencies as well as ordinary citizens. In order to give safe driving suggestions, careful analysis of roadway traffic data is critical to find out variables that are closely related to fatal accidents. In this paper we apply statistics analysis and data mining algorithms on the FARS Fatal Accident dataset as an attempt to address this problem. The relationship between fatal rate and other attributes including collision manner, weather, surface condition, light condition, and drunk driver were investigated. Association rules were discovered by Apriori algorithm, classification model was built by Naive Bayes classifier, and clusters were formed by simple K-means clustering algorithm. Certain safety driving suggestions were made based on statistics, association rules, classification model, and clusters obtained.",
"title": ""
},
{
"docid": "4fcc69e9f3ce30372fe7ef2d5f21a0c3",
"text": "F ar more than any other type of illness, mental disorders are subject to negative judgements and stigmatization. Many patients not only have to cope with the often devastating effects of their illness, but also suffer from social exclusion and prejudices. Stigmatization of the mentally ill has a long tradition, and the word “stigmatization” itself indicates the negative connotations: in ancient Greece, a “stigma” was a brand to mark slaves or criminals. For millennia, society did not treat persons suffering from depression, autism, schizophrenia and other mental illnesses much better than slaves or criminals: they were imprisoned, tortured or killed. During the Middle Ages, mental illness was regarded as a punishment from God: sufferers were thought to be possessed by the devil and were burned at the stake, or thrown in penitentiaries and madhouses where they were chained to the walls or their beds. During the Enlightenment, the mentally ill were finally freed from their chains and institutions were established to help sufferers of mental illness. However, stigmatization and discrimination reached an unfortunate peak during the Nazi reign in Germany when hundreds of thousands of mentally ill people were murdered or sterilized. ...................................................... “Structural discrimination of the mentally ill is still pervasive, whether in legislation or in rehabilitation efforts.” ......................................................",
"title": ""
},
{
"docid": "92b8206a1a5db0be7df28ed2e645aafc",
"text": "Depthwise separable convolutions reduce the number of parameters and computation used in convolutional operations while increasing representational efficiency. They have been shown to be successful in image classification models, both in obtaining better models than previously possible for a given parameter count (the Xception architecture) and considerably reducing the number of parameters required to perform at a given level (the MobileNets family of architectures). Recently, convolutional sequence-to-sequence networks have been applied to machine translation tasks with good results. In this work, we study how depthwise separable convolutions can be applied to neural machine translation. We introduce a new architecture inspired by Xception and ByteNet, called SliceNet, which enables a significant reduction of the parameter count and amount of computation needed to obtain results like ByteNet, and, with a similar parameter count, achieves better results. In addition to showing that depthwise separable convolutions perform well for machine translation, we investigate the architectural changes that they enable: we observe that thanks to depthwise separability, we can increase the length of convolution windows, removing the need for filter dilation. We also introduce a new \"super-separable\" convolution operation that further reduces the number of parameters and computational cost of the models.",
"title": ""
},
{
"docid": "1269ec9f6c6e6184d34b0924f3ce08e7",
"text": "Model-Driven Development (MDD) emphasizes the use of models at a higher abstraction level in the software development process and argues in favor of automation via model execution, transformation, and code generation. However, one current challenge is how to manage requirements during this process whilst simultaneously stressing the benefits of automation. This paper presents a systematic review of the current use of requirements engineering techniques in MDD processes and their actual automation level. 72 papers from the last decade have been reviewed from an initial set of 884 papers. The results show that although MDD techniques are used to a great extent in platform-independent models, platform-specific models, and at code level, at the requirements level most MDD approaches use only partially defined requirements models or even natural language. We additionally identify several research gaps such as a need for more efforts to explicitly deal with requirements traceability and the provision of better tool support.",
"title": ""
},
{
"docid": "cb0e20fc45e4fe7af42732768134ecf7",
"text": "Convolutional Neural Network (CNN) is a powerful technique widely used in computer vision area, which also demands much more computations and memory resources than traditional solutions. The emerging metal-oxide resistive random-access memory (RRAM) and RRAM crossbar have shown great potential on neuromorphic applications with high energy efficiency. However, the interfaces between analog RRAM crossbars and digital peripheral functions, namely Analog-to-Digital Converters (ADCs) and Digital-to-Analog Converters (DACs), consume most of the area and energy of RRAM-based CNN design due to the large amount of intermediate data in CNN. In this paper, we propose an energy efficient structure for RRAM-based CNN. Based on the analysis of data distribution, a quantization method is proposed to transfer the intermediate data into 1 bit and eliminate DACs. An energy efficient structure using input data as selection signals is proposed to reduce the ADC cost for merging results of multiple crossbars. The experimental results show that the proposed method and structure can save 80% area and more than 95% energy while maintaining the same or comparable classification accuracy of CNN on MNIST.",
"title": ""
},
{
"docid": "f5002cfd1b7b7b547e210d62b8655074",
"text": "In this work, various layout options for ESD diodes' PN junction geometry and metal routing are investigated. The current compression point (ICP) is introduced to define the maximum current handling capability of ESD protection devices. The figures-of-merit ICP/C and RON*C are used to compare the performance of the structures investigated herein.",
"title": ""
},
{
"docid": "8cbbf630ac46c54b9d5369fa24a50d91",
"text": "We propose a computational method for verifying a state-space safety constraint of a network of interconnected dynamical systems satisfying a dissipativity property. We construct an invariant set as the sublevel set of a Lyapunov function comprised of local storage functions for each subsystem. This approach requires only knowledge of a local dissipativity property for each subsystem and the static interconnection matrix for the network, and we pose the safety verification as a sum-of-squares feasibility problem. In addition to reducing the computational burden of system design, we allow the safety constraint and initial conditions to depend on an unknown equilibrium, thus offering increased flexibility over existing techniques.",
"title": ""
},
{
"docid": "05366166b02ebd29abeb2dcf67710981",
"text": "Wireless access to the Internet via PDAs (personal digital assistants) provides Web type services in the mobile world. What we are lacking are design guidelines for such PDA services. For Web publishing, however, there are many resources to look for guidelines. The guidelines can be classified according to which aspect of the Web media they are related: software/hardware, content and its organization, or aesthetics and layout. In order to be applicable to PDA services, these guidelines have to be modified. In this paper we analyze the main characteristics of PDAs and their influence to the guidelines.",
"title": ""
},
{
"docid": "d612aeb7f7572345bab8609571f4030d",
"text": "In conventional supervised training, a model is trained to fit all the training examples. However, having a monolithic model may not always be the best strategy, as examples could vary widely. In this work, we explore a different learning protocol that treats each example as a unique pseudo-task, by reducing the original learning problem to a few-shot meta-learning scenario with the help of a domain-dependent relevance function.1 When evaluated on the WikiSQL dataset, our approach leads to faster convergence and achieves 1.1%–5.4% absolute accuracy gains over the non-meta-learning counterparts.",
"title": ""
},
{
"docid": "bc7d0895bcbb47c8bf79d0ba7078b209",
"text": "The processing of a visual stimulus can be subdivided into a number of stages. Upon stimulus presentation there is an early phase of feedforward processing where the visual information is propagated from lower to higher visual areas for the extraction of basic and complex stimulus features. This is followed by a later phase where horizontal connections within areas and feedback connections from higher areas back to lower areas come into play. In this later phase, image elements that are behaviorally relevant are grouped by Gestalt grouping rules and are labeled in the cortex with enhanced neuronal activity (object-based attention in psychology). Recent neurophysiological studies revealed that reward-based learning influences these recurrent grouping processes, but it is not well understood how rewards train recurrent circuits for perceptual organization. This paper examines the mechanisms for reward-based learning of new grouping rules. We derive a learning rule that can explain how rewards influence the information flow through feedforward, horizontal and feedback connections. We illustrate the efficiency with two tasks that have been used to study the neuronal correlates of perceptual organization in early visual cortex. The first task is called contour-integration and demands the integration of collinear contour elements into an elongated curve. We show how reward-based learning causes an enhancement of the representation of the to-be-grouped elements at early levels of a recurrent neural network, just as is observed in the visual cortex of monkeys. The second task is curve-tracing where the aim is to determine the endpoint of an elongated curve composed of connected image elements. If trained with the new learning rule, neural networks learn to propagate enhanced activity over the curve, in accordance with neurophysiological data. We close the paper with a number of model predictions that can be tested in future neurophysiological and computational studies.",
"title": ""
},
{
"docid": "77d0845463db0f4e61864b37ec1259b7",
"text": "A new form of the variational autoencoder (VAE) is proposed, based on the symmetric KullbackLeibler divergence. It is demonstrated that learning of the resulting symmetric VAE (sVAE) has close connections to previously developed adversarial-learning methods. This relationship helps unify the previously distinct techniques of VAE and adversarially learning, and provides insights that allow us to ameliorate shortcomings with some previously developed adversarial methods. In addition to an analysis that motivates and explains the sVAE, an extensive set of experiments validate the utility of the approach.",
"title": ""
},
{
"docid": "aecaa8c028c4d1098d44d755344ad2fc",
"text": "It is known that training deep neural networks, in particular, deep convolutional networks, with aggressively reduced numerical precision is challenging. The stochastic gradient descent algorithm becomes unstable in the presence of noisy gradient updates resulting from arithmetic with limited numeric precision. One of the wellaccepted solutions facilitating the training of low precision fixed point networks is stochastic rounding. However, to the best of our knowledge, the source of the instability in training neural networks with noisy gradient updates has not been well investigated. This work is an attempt to draw a theoretical connection between low numerical precision and training algorithm stability. In doing so, we will also propose and verify through experiments methods that are able to improve the training performance of deep convolutional networks in fixed point.",
"title": ""
},
{
"docid": "f7912f79a3a5764496bfff65c91c98b0",
"text": "This paper proposes a real-time embedded fall detection system using a DVS(Dynamic Vision Sensor)(Berner et al. [2014]) that has never been used for traditional fall detection, a dataset for fall detection using that, and a DVSTN(DVS-Temporal Network). The first contribution is building a DVS Falls Dataset, which made our network to recognize a much greater variety of falls than the existing datasets that existed before and solved privacy issues using the DVS. Secondly, we introduce the DVS-TN : optimized deep learning network to detect falls using DVS. Finally, we implemented a fall detection system which can run on low-computing H/W with real-time, and tested on DVS Falls Dataset that takes into account various falls situations. Our approach achieved 95.5% on the F1-score and operates at 31.25 FPS on NVIDIA Jetson TX1 board.",
"title": ""
},
{
"docid": "8de1acc08d32f8840de8375078f2369a",
"text": "Widespread acceptance of virtual reality has been partially handicapped by the inability of current systems to accommodate multiple viewpoints, thereby limiting their appeal for collaborative applications. We are exploring the ability to utilize passive, untracked participants in a powerwall environment. These participants see the same image as the active, immersive participant. This does present the passive user with a varying viewpoint that does not correspond to their current position. We demonstrate the impact this will have on the perceived image and show that human psychology is actually well adapted to compensating for what, on the surface, would seem to be a very drastic distortion. We present some initial guidelines for system design that minimize the negative impact of passive participation, allowing two or more collaborative participants. We then outline future experimentation to measure user compensation for these distorted viewpoints.",
"title": ""
},
{
"docid": "98729fc6a6b95222e6a6a12aa9a7ded7",
"text": "What good is self-control? We incorporated a new measure of individual differences in self-control into two large investigations of a broad spectrum of behaviors. The new scale showed good internal consistency and retest reliability. Higher scores on self-control correlated with a higher grade point average, better adjustment (fewer reports of psychopathology, higher self-esteem), less binge eating and alcohol abuse, better relationships and interpersonal skills, secure attachment, and more optimal emotional responses. Tests for curvilinearity failed to indicate any drawbacks of so-called overcontrol, and the positive effects remained after controlling for social desirability. Low self-control is thus a significant risk factor for a broad range of personal and interpersonal problems.",
"title": ""
},
{
"docid": "94866e444039836c17c96434a9163517",
"text": "Query suggestion refers to the process of suggesting related queries to search engine users. Most existing researches have focused on improving the relevance of suggested queries. In this paper, we introduce the concept of diversifying the content of the search results from suggested queries while keeping the suggestion relevant. Our framework first retrieves a set of query candidates from search engine logs using random walk and other techniques. We then re-rank the suggested queries by ranking them in the order which maximizes the diversification function that measures the difference between the original search results and the results from suggested queries. The diversification function we proposed includes features like ODP category, URL and domain similarity and so on. One important outcome from our research which contradicts with most existing researches is that, with the increase of suggestion relevance, the similarity between the queries actually decreases. Experiments are conducted on a large set of human-labeled data, which is randomly sampled from a commercial search engine's log. Results indicate that the post-ranking framework significantly improves the relevance of suggested queries by comparing to existing models.",
"title": ""
},
{
"docid": "0cd077bec6516b3cdb86a8ccd185df78",
"text": "In this paper, a general purpose multi-agent classifier system based on the blackboard architecture using reinforcement Learning techniques is proposed for tackling complex data classification problems. A trust metric for evaluating agent’s performance and expertise based on Q-learning and employing different voting processes is formulated. Specifically, multiple heterogeneous machine learning agents, are devised to form the expertise group for the proposed Coordinated Heterogeneous Intelligent Multi-Agent Classifier System (CHIMACS). To evaluate the effectiveness of CHIMACS, a variety of benchmark problems are used, including small and high dimensional datasets with and without noise. The results from CHIMACS are compared with those of individual ML models and ensemble methods. The results indicate that CHIMACS is effective in identifying classifier agent expertise and can combine their knowledge to improve the overall prediction performance.",
"title": ""
},
{
"docid": "6bb85ab1afb9b5349111d8fef120eb98",
"text": "This paper presents prediction-based dynamic resource allocation algorithms to scale video transcoding service on a given Infrastructure as a Service cloud. The proposed algorithms provide mechanisms for allocation and deallocation of virtual machines (VMs) to a cluster of video transcoding servers in a horizontal fashion. We use a two-step load prediction method, which allows proactive resource allocation with high prediction accuracy under real-time constraints. For cost-efficiency, our work supports transcoding of multiple on-demand video streams concurrently on a single VM, resulting in a reduced number of required VMs. We use video segmentation at group of pictures level, which splits video streams into smaller segments that can be transcoded independently of one another. The approach is demonstrated in a discrete-event simulation and an experimental evaluation involving two different load patterns.",
"title": ""
}
] |
scidocsrr
|
050d4cf7884a3ef3111199bf89a631b3
|
Learning And-Or Model to Represent Context and Occlusion for Car Detection and Viewpoint Estimation
|
[
{
"docid": "dd35821bef25be7591a74b9eb854455f",
"text": "Compositional models provide an elegant formalism for representing the visual appearance of highly variable objects. While such models are appealing from a theoretical point of view, it has been difficult to demonstrate that they lead to performance advantages on challenging datasets. Here we develop a grammar model for person detection and show that it outperforms previous high-performance systems on the PASCAL benchmark. Our model represents people using a hierarchy of deformable parts, variable structure and an explicit model of occlusion for partially visible objects. To train the model, we introduce a new discriminative framework for learning structured prediction models from weakly-labeled data.",
"title": ""
},
{
"docid": "89297a4aef0d3251e8d947ccc2acacc7",
"text": "We propose a novel probabilistic framework for learning visual models of 3D object categories by combining appearance information and geometric constraints. Objects are represented as a coherent ensemble of parts that are consistent under 3D viewpoint transformations. Each part is a collection of salient image features. A generative framework is used for learning a model that captures the relative position of parts within each of the discretized viewpoints. Contrary to most of the existing mixture of viewpoints models, our model establishes explicit correspondences of parts across different viewpoints of the object class. Given a new image, detection and classification are achieved by determining the position and viewpoint of the model that maximize recognition scores of the candidate objects. Our approach is among the first to propose a generative probabilistic framework for 3D object categorization. We test our algorithm on the detection task and the viewpoint classification task by using “car” category from both the Savarese et al. 2007 and PASCAL VOC 2006 datasets. We show promising results in both the detection and viewpoint classification tasks on these two challenging datasets.",
"title": ""
}
] |
[
{
"docid": "0d13b8a8f7a4584bc7c1402137e79a2c",
"text": "Different methods are proposed to learn phrase embedding, which can be mainly divided into two strands. The first strand is based on the distributional hypothesis to treat a phrase as one non-divisible unit and to learn phrase embedding based on its external context similar to learn word embedding. However, distributional methods cannot make use of the information embedded in component words and they also face data spareness problem. The second strand is based on the principle of compositionality to infer phrase embedding based on the embedding of its component words. Compositional methods would give erroneous result if a phrase is non-compositional. In this paper, we propose a hybrid method by a linear combination of the distributional component and the compositional component with an individualized phrase compositionality constraint. The phrase compositionality is automatically computed based on the distributional embedding of the phrase and its component words. Evaluation on five phrase level semantic tasks and experiments show that our proposed method has overall best performance. Most importantly, our method is more robust as it is less sensitive to datasets.",
"title": ""
},
{
"docid": "6b17bc7d1c6e19c4a4a5395f0b6b9ec9",
"text": "The development of accurate models for cyber-physical systems (CPSs) is hampered by the complexity of these systems, fundamental differences in the operation of cyber and physical components, and significant interdependencies among these components. Agent-based modeling shows promise in overcoming these challenges, due to the flexibility of software agents as autonomous and intelligent decision-making components. Semantic agent systems are even more capable, as the structure they provide facilitates the extraction of meaningful content from the data provided to the software agents. In this paper, we present a multi-agent model for a CPS, where the semantic capabilities are underpinned by sensor networks that provide information about the physical operation to the cyber infrastructure. This model is used to represent the static structure and dynamic behavior of an intelligent water distribution network as a CPS case study.",
"title": ""
},
{
"docid": "ecce348941aeda57bd66dbd7836923e6",
"text": "Moana (2016) continues a tradition of Disney princess movies that perpetuate gender stereotypes. The movie contains the usual Electral undercurrent, with Moana seeking to prove her independence to her overprotective father. Moana’s partner in her adventures, Maui, is overtly hypermasculine, a trait epitomized by a phallic fishhook that is critical to his identity. Maui’s struggles with shapeshifting also reflect male anxieties about performing masculinity. Maui violates the Mother Island, first by entering her cave and then by using his fishhook to rob her of her fertility. The repercussions of this act are the basis of the plot: the Mother Island abandons her form as a nurturing, youthful female (Te Fiti) focused on creation to become a vengeful lava monster (Te Kā). At the end, Moana successfully urges Te Kā to get in touch with her true self, a brave but simple act that is sufficient to bring back Te Fiti, a passive, smiling green goddess. The association of youthful, fertile females with good and witch-like infertile females with evil implies that women’s worth and well-being are dependent upon their procreative function. Stereotypical gender tropes that also include female abuse of power and a narrow conception of masculinity merit analysis in order to further progress in recognizing and addressing patterns of gender hegemony in popular Disney films.",
"title": ""
},
{
"docid": "d999bb4717dd07b2560a85c7c775eb0e",
"text": "We present a new algorithm for removing motion blur from a single image. Our method computes a deblurred image using a unified probabilistic model of both blur kernel estimation and unblurred image restoration. We present an analysis of the causes of common artifacts found in current deblurring methods, and then introduce several novel terms within this probabilistic model that are inspired by our analysis. These terms include a model of the spatial randomness of noise in the blurred image, as well a new local smoothness prior that reduces ringing artifacts by constraining contrast in the unblurred image wherever the blurred image exhibits low contrast. Finally, we describe an effficient optimization scheme that alternates between blur kernel estimation and unblurred image restoration until convergence. As a result of these steps, we are able to produce high quality deblurred results in low computation time. We are even able to produce results of comparable quality to techniques that require additional input images beyond a single blurry photograph, and to methods that require additional hardware.",
"title": ""
},
{
"docid": "b3e1bdd7cfca17782bde698297e191ab",
"text": "Synthetic aperture radar (SAR) raw signal simulation is a powerful tool for designing new sensors, testing processing algorithms, planning missions, and devising inversion algorithms. In this paper, a spotlight SAR raw signal simulator for distributed targets is presented. The proposed procedure is based on a Fourier domain analysis: a proper analytical reformulation of the spotlight SAR raw signal expression is presented. It is shown that this reformulation allows us to design a very efficient simulation scheme that employs fast Fourier transform codes. Accordingly, the computational load is dramatically reduced with respect to a time-domain simulation and this, for the first time, makes spotlight simulation of extended scenes feasible.",
"title": ""
},
{
"docid": "17055f15eeed79cb1364c4f8130e0b46",
"text": "Natural brains effectively integrate multiple sensory modalities and act upon the world through multiple effector types. As researchers strive to evolve more sophisticated neural controllers, confronting the challenge of multimodality is becoming increasingly important. As a solution, this paper presents a principled new approach to exploiting indirect encoding to incorporate multimodality based on the HyperNEAT generative neuroevolution algorithm called the multi-spatial substrate (MSS). The main idea is to place each input and output modality on its own independent plane. That way, the spatial separation of such groupings provides HyperNEAT an a priori hint on which neurons are associated with which that can be exploited from the start of evolution. To validate this approach, the MSS is compared with more conventional approaches to HyperNEAT substrate design in a multiagent domain featuring three input and two output modalities. The new approach both significantly outperforms conventional approaches and reduces the creative burden on the user to design the layout of the substrate, thereby opening formerly prohibitive multimodal problems to neuroevolution.",
"title": ""
},
{
"docid": "98b27823b392cc75dc9d74ce06ad1b5c",
"text": "Studies have shown that some musical pieces may preferentially activate reward centers in the brain. Less is known, however, about the structural aspects of music that are associated with this activation. Based on the music cognition literature, we propose two hypotheses for why some musical pieces are preferred over others. The first, the Absolute-Surprise Hypothesis, states that unexpected events in music directly lead to pleasure. The second, the Contrastive-Surprise Hypothesis, proposes that the juxtaposition of unexpected events and subsequent expected events leads to an overall rewarding response. We tested these hypotheses within the framework of information theory, using the measure of \"surprise.\" This information-theoretic variable mathematically describes how improbable an event is given a known distribution. We performed a statistical investigation of surprise in the harmonic structure of songs within a representative corpus of Western popular music, namely, the McGill Billboard Project corpus. We found that chords of songs in the top quartile of the Billboard chart showed greater average surprise than those in the bottom quartile. We also found that the different sections within top-quartile songs varied more in their average surprise than the sections within bottom-quartile songs. The results of this study are consistent with both the Absolute- and Contrastive-Surprise Hypotheses. Although these hypotheses seem contradictory to one another, we cannot yet discard the possibility that both absolute and contrastive types of surprise play roles in the enjoyment of popular music. We call this possibility the Hybrid-Surprise Hypothesis. The results of this statistical investigation have implications for both music cognition and the human neural mechanisms of esthetic judgments.",
"title": ""
},
{
"docid": "0c850cee404c406421de03cfd950c294",
"text": "Linguistically diverse datasets are critical for training and evaluating robust machine learning systems, but data collection is a costly process that often requires experts. Crowdsourcing the process of paraphrase generation is an effective means of expanding natural language datasets, but there has been limited analysis of the trade-offs that arise when designing tasks. In this paper, we present the first systematic study of the key factors in crowdsourcing paraphrase collection. We consider variations in instructions, incentives, data domains, and workflows. We manually analyzed paraphrases for correctness, grammaticality, and linguistic diversity. Our observations provide new insight into the trade-offs between accuracy and diversity in crowd responses that arise as a result of task design, providing guidance for future paraphrase generation procedures.",
"title": ""
},
{
"docid": "9858386550b0193c079f1d7fe2b5b8b3",
"text": "Objective This study examined the associations between household food security (access to sufficient, safe, and nutritious food) during infancy and attachment and mental proficiency in toddlerhood. Methods Data from a longitudinal nationally representative sample of infants and toddlers (n = 8944) from the Early Childhood Longitudinal Study—9-month (2001–2002) and 24-month (2003–2004) surveys were used. Structural equation modeling was used to examine the direct and indirect associations between food insecurity at 9 months, and attachment and mental proficiency at 24 months. Results Food insecurity worked indirectly through depression and parenting practices to influence security of attachment and mental proficiency in toddlerhood. Conclusions Social policies that address the adequacy and predictability of food supplies in families with infants have the potential to affect parental depression and parenting behavior, and thereby attachment and cognitive development at very early ages.",
"title": ""
},
{
"docid": "865c1ee7044cbb23d858706aa1af1a63",
"text": "Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to protect PV modules from damages and to eliminate the risks of safety hazards. This paper examines two types of unique faults found in photovoltaic (PV) array installations that have not been studied in the literature. One is a fault that occurs under low irradiance conditions. In some circumstances, fault current protection devices are unable to detect certain types of faults so that the fault may remain hidden in the PV system, even after irradiance increases. The other type of fault occurs when a string of PV modules is reversely connected, caused by inappropriate installation. This fault type brings new challenges for overcurrent protection devices because of the high rating voltage requirement. In both cases, these unique PV faults may subsequently lead to unexpected safety hazards, reduced system efficiency and reduced reliability.",
"title": ""
},
{
"docid": "512673431b96b78d7dedae39078976ac",
"text": "In order to use science to manage human–nature interactions, we need much more nuanced, and when possible, quantitative, analyses of the interplay among ecosystem services (ES), human well-being (HWB), and drivers of both ecosystem structure and function, as well as HWB. Despite a growing interest and extensive efforts in ES research in the past decade, systematic and quantitative work on the linkages between ES and HWB is rare in existing literature, largely due to the lack of use of quantitative indicators and integrated models. Here, we integrated indicators of human dependence on ES, of HWB, and of direct and indirect drivers of both using data from household surveys carried out at Wolong Nature Reserve, China. We examined how human dependence on ES and HWB might be affected by direct drivers, such as a natural disaster, and how human dependence on ES and direct and indirect drivers might affect HWB. Our results show that the direct driver (i.e., Wenchuan Earthquake) significantly affected both households’ dependence on ES and their well-being. Such impacts differed across various dimensions of ES and well-being as indicated by subindices. Those disadvantaged households with lower access to multiple forms of capital, more property damages, or larger revenue reductions also experienced greater losses in HWB. Diversifying human dependence on ES helps to mitigate disaster impacts on HWB. Our findings offer strong empirical evidence that the construction of quantitative indicators for ES and HWB, especially integrated models using them, is a viable approach for advancing the understanding of linkages between ES and HWB.",
"title": ""
},
{
"docid": "4958f0fbdf29085cabef3591a1c05c51",
"text": "Network Function Virtualization (NFV) is a new networking paradigm where network functions are executed on commodity servers located in small cloud nodes distributed across the network, and where software defined mechanisms are used to control the network flows. This paradigm is a major turning point in the evolution of networking, as it introduces high expectations for enhanced economical network services, as well as major technical challenges. In this paper, we address one of the main technical challenges in this domain: the actual placement of the virtual functions within the physical network. This placement has a critical impact on the performance of the network, as well as on its reliability and operation cost. We perform a thorough study of the NFV location problem, show that it introduces a new type of optimization problems, and provide near optimal approximation algorithms guaranteeing a placement with theoretically proven performance. The performance of the solution is evaluated with respect to two measures: the distance cost between the clients and the virtual functions by which they are served, as well as the setup costs of these functions. We provide bi-criteria solutions reaching constant approximation factors with respect to the overall performance, and adhering to the capacity constraints of the networking infrastructure by a constant factor as well. Finally, using extensive simulations, we show that the proposed algorithms perform well in many realistic scenarios.",
"title": ""
},
{
"docid": "2839c318c9c2644edbd3e175bf9027b9",
"text": "Multiple human tracking (MHT) is a fundamental task in many computer vision applications. Appearance-based approaches, primarily formulated on RGB data, are constrained and affected by problems arising from occlusions and/or illumination variations. In recent years, the arrival of cheap RGB-Depth (RGB-D) devices has led to many new approaches to MHT, and many of these integrate color and depth cues to improve each and every stage of the process. In this survey, we present the common processing pipeline of these methods and review their methodology based (a) on how they implement this pipeline and (b) on what role depth plays within each stage of it. We identify and introduce existing, publicly available, benchmark datasets and software resources that fuse color and depth data for MHT. Finally, we present a brief comparative evaluation of the performance of those works that have applied their methods to these datasets.",
"title": ""
},
{
"docid": "c3e46c3317d81b2d8b8c53f7e5cd37b9",
"text": "A novel rainfall prediction method has been proposed. In the present work rainfall prediction in Southern part of West Bengal (India) has been conducted. A two-step method has been employed. Greedy forward selection algorithm is used to reduce the feature set and to find the most promising features for rainfall prediction. First, in the training phase the data is clustered by applying k-means algorithm, then for each cluster a separate Neural Network (NN) is trained. The proposed two step prediction model (Hybrid Neural Network or HNN) has been compared with MLP-FFN classifier in terms of several statistical performance measuring metrics. The data for experimental purpose is collected by Dumdum meteorological station (West Bengal, India) over the period from 1989 to 1995. The experimental results have suggested a reasonable improvement over traditional methods in predicting rainfall. The proposed HNN model outperformed the compared models by achieving 84.26% accuracy without feature selection and 89.54% accuracy with feature selection.",
"title": ""
},
{
"docid": "18b5e00a3a49a2db85b3d8d0a68aac51",
"text": "Music classification is an interesting problem with many applications, from Drinkify (a program that generates cocktails to match the music) to Pandora to dynamically generating images that complement the music. However, music genre classification has been a challenging task in the field of music information retrieval (MIR). Music genres are hard to systematically and consistently describe due to their inherent subjective nature.",
"title": ""
},
{
"docid": "797ab17a7621f4eaa870a8eb24f8b94d",
"text": "A single-photon avalanche diode (SPAD) with enhanced near-infrared (NIR) sensitivity has been developed, based on 0.18 μm CMOS technology, for use in future automotive light detection and ranging (LIDAR) systems. The newly proposed SPAD operating in Geiger mode achieves a high NIR photon detection efficiency (PDE) without compromising the fill factor (FF) and a low breakdown voltage of approximately 20.5 V. These properties are obtained by employing two custom layers that are designed to provide a full-depletion layer with a high electric field profile. Experimental evaluation of the proposed SPAD reveals an FF of 33.1% and a PDE of 19.4% at 870 nm, which is the laser wavelength of our LIDAR system. The dark count rate (DCR) measurements shows that DCR levels of the proposed SPAD have a small effect on the ranging performance, even if the worst DCR (12.7 kcps) SPAD among the test samples is used. Furthermore, with an eye toward vehicle installations, the DCR is measured over a wide temperature range of 25-132 °C. The ranging experiment demonstrates that target distances are successfully measured in the distance range of 50-180 cm.",
"title": ""
},
{
"docid": "b16e703ba92d5df1177a60c23ae7bc9e",
"text": "In this paper we investigate the role of the dependency tree in a named entity recognizer upon using a set of Graph Convolutional Networks (GCNs). We perform a comparison among different Named Entity Recognition (NER) architectures and show that the grammar of a sentence positively influences the results. Experiments on the OntoNotes 5.0 dataset demonstrate consistent performance improvements, without requiring heavy feature engineering nor additional language-specific knowledge.",
"title": ""
},
{
"docid": "13800973a4bc37f26319c0bb76fce731",
"text": "Light fields are a powerful concept in computational imaging and a mainstay in image-based rendering; however, so far their acquisition required either carefully designed and calibrated optical systems (micro-lens arrays), or multi-camera/multi-shot settings. Here, we show that fully calibrated light field data can be obtained from a single ordinary photograph taken through a partially wetted window. Each drop of water produces a distorted view on the scene, and the challenge of recovering the unknown mapping from pixel coordinates to refracted rays in space is a severely underconstrained problem. The key idea behind our solution is to combine ray tracing and low-level image analysis techniques (extraction of 2D drop contours and locations of scene features seen through drops) with state-of-the-art drop shape simulation and an iterative refinement scheme to enforce photo-consistency across features that are seen in multiple views. This novel approach not only recovers a dense pixel-to-ray mapping, but also the refractive geometry through which the scene is observed, to high accuracy. We therefore anticipate that our inherently self-calibrating scheme might also find applications in other fields, for instance in materials science where the wetting properties of liquids on surfaces are investigated.",
"title": ""
},
{
"docid": "c04b7bed0a27742349b9754006cba196",
"text": "27 28 29 30 31 32 33 34 35 36 37 38 Article history: Received 4 April 2014 Received in revised form 17 December 2014 Accepted 17 January 2015 Available online xxxx",
"title": ""
},
{
"docid": "087ca9ca531f14e8546c9f03d9e76ed3",
"text": "Deep generative models have shown promising results in generating realistic images, but it is still non-trivial to generate images with complicated structures. The main reason is that most of the current generative models fail to explore the structures in the images including spatial layout and semantic relations between objects. To address this issue, we propose a novel deep structured generative model which boosts generative adversarial networks (GANs) with the aid of structure information. In particular, the layout or structure of the scene is encoded by a stochastic and-or graph (sAOG), in which the terminal nodes represent single objects and edges represent relations between objects. With the sAOG appropriately harnessed, our model can successfully capture the intrinsic structure in the scenes and generate images of complicated scenes accordingly. Furthermore, a detection network is introduced to infer scene structures from a image. Experimental results demonstrate the effectiveness of our proposed method on both modeling the intrinsic structures, and generating realistic images.",
"title": ""
}
] |
scidocsrr
|
179b0c1625e9812b40a92407f2f562ee
|
Deep Shrinkage Convolutional Neural Network for Adaptive Noise Reduction
|
[
{
"docid": "c6a44d2313c72e785ae749f667d5453c",
"text": "Donoho and Johnstone (1992a) proposed a method for reconstructing an unknown function f on [0; 1] from noisy data di = f(ti) + zi, i = 0; : : : ; n 1, ti = i=n, zi iid N(0; 1). The reconstruction f̂ n is de ned in the wavelet domain by translating all the empirical wavelet coe cients of d towards 0 by an amount p 2 log(n) = p n. We prove two results about that estimator. [Smooth]: With high probability f̂ n is at least as smooth as f , in any of a wide variety of smoothness measures. [Adapt]: The estimator comes nearly as close in mean square to f as any measurable estimator can come, uniformly over balls in each of two broad scales of smoothness classes. These two properties are unprecedented in several ways. Our proof of these results develops new facts about abstract statistical inference and its connection with an optimal recovery model.",
"title": ""
},
{
"docid": "d7acbf20753e2c9c50b2ab0683d7f03a",
"text": "In this paper, we propose a very deep fully convolutional encoding-decoding framework for image restoration such as denoising and super-resolution. The network is composed of multiple layers of convolution and de-convolution operators, learning end-to-end mappings from corrupted images to the original ones. The convolutional layers act as the feature extractor, which capture the abstraction of image contents while eliminating noises/corruptions. De-convolutional layers are then used to recover the image details. We propose to symmetrically link convolutional and de-convolutional layers with skip-layer connections, with which the training converges much faster and attains a higher-quality local optimum. First, The skip connections allow the signal to be back-propagated to bottom layers directly, and thus tackles the problem of gradient vanishing, making training deep networks easier and achieving restoration performance gains consequently. Second, these skip connections pass image details from convolutional layers to de-convolutional layers, which is beneficial in recovering the original image. Significantly, with the large capacity, we can handle different levels of noises using a single model. Experimental results show that our network achieves better performance than all previously reported state-of-the-art methods.",
"title": ""
},
{
"docid": "b5453d9e4385d5a5ff77997ad7e3f4f0",
"text": "We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.",
"title": ""
}
] |
[
{
"docid": "3392de95bfc0e16776550b2a0a8fa00e",
"text": "This paper presents a new type of three-phase voltage source inverter (VSI), called three-phase dual-buck inverter. The proposed inverter does not need dead time, and thus avoids the shoot-through problems of traditional VSIs, and leads to greatly enhanced system reliability. Though it is still a hard-switching inverter, the topology allows the use of power MOSFETs as the active devices instead of IGBTs typically employed by traditional hard-switching VSIs. As a result, the inverter has the benefit of lower switching loss, and it can be designed at higher switching frequency to reduce current ripple and the size of passive components. A unified pulsewidth modulation (PWM) is introduced to reduce computational burden in real-time implementation. Different PWM methods were applied to a three-phase dual-buck inverter, including sinusoidal PWM (SPWM), space vector PWM (SVPWM) and discontinuous space vector PWM (DSVPWM). A 2.5 kW prototype of a three-phase dual-buck inverter and its control system has been designed and tested under different dc bus voltage and modulation index conditions to verify the feasibility of the circuit, the effectiveness of the controller, and to compare the features of different PWMs. Efficiency measurement of different PWMs has been conducted, and the inverter sees peak efficiency of 98.8% with DSVPWM.",
"title": ""
},
{
"docid": "8b5f2d45852cf5c8e1edb6146d37abb7",
"text": "Portable, embedded systems place ever-increasing demands on high-performance, low-power microprocessor design. Dynamic voltage and frequency scaling (DVFS) is a well-known technique to reduce energy in digital systems, but the effectiveness of DVFS is hampered by slow voltage transitions that occur on the order of tens of microseconds. In addition, the recent trend towards chip-multiprocessors (CMP) executing multi-threaded workloads with heterogeneous behavior motivates the need for per-core DVFS control mechanisms. Voltage regulators that are integrated onto the same chip as the microprocessor core provide the benefit of both nanosecond-scale voltage switching and per-core voltage control. We show that these characteristics provide significant energy-saving opportunities compared to traditional off-chip regulators. However, the implementation of on-chip regulators presents many challenges including regulator efficiency and output voltage transient characteristics, which are significantly impacted by the system-level application of the regulator. In this paper, we describe and model these costs, and perform a comprehensive analysis of a CMP system with on-chip integrated regulators. We conclude that on-chip regulators can significantly improve DVFS effectiveness and lead to overall system energy savings in a CMP, but architects must carefully account for overheads and costs when designing next-generation DVFS systems and algorithms.",
"title": ""
},
{
"docid": "1c8a3500d9fbd7e6c10dfffc06157d74",
"text": "The issue of privacy protection in video surveillance has drawn a lot of interest lately. However, thorough performance analysis and validation is still lacking, especially regarding the fulfillment of privacy-related requirements. In this paper, we put forward a framework to assess the capacity of privacy protection solutions to hide distinguishing facial information and to conceal identity. We then conduct rigorous experiments to evaluate the performance of face recognition algorithms applied to images altered by privacy protection techniques. Results show the ineffectiveness of naïve privacy protection techniques such as pixelization and blur. Conversely, they demonstrate the effectiveness of more sophisticated scrambling techniques to foil face recognition.",
"title": ""
},
{
"docid": "5bc8c2bc2a0ac668c256ad802f191288",
"text": "Although the widespread use of gaming for leisure purposes has been well documented, the use of games to support cultural heritage purposes, such as historical teaching and learning, or for enhancing museum visits, has been less well considered. The state-of-the-art in serious game technology is identical to that of the state-of-the-art in entertainment games technology. As a result, the field of serious heritage games concerns itself with recent advances in computer games, real-time computer graphics, virtual and augmented reality and artificial intelligence. On the other hand, the main strengths of serious gaming applications may be generalised as being in the areas of communication, visual expression of information, collaboration mechanisms, interactivity and entertainment. In this report, we will focus on the state-of-the-art with respect to the theories, methods and technologies used in serious heritage games. We provide an overview of existing literature of relevance to the domain, discuss the strengths and weaknesses of the described methods and point out unsolved problems and challenges. In addition, several case studies illustrating the application of methods and technologies used in cultural heritage are presented.",
"title": ""
},
{
"docid": "31bb1b7237951dbec124caf832401a43",
"text": "This Thesis is brought to you for free and open access. It has been accepted for inclusion in Dissertations and Theses by an authorized administrator of PDXScholar. For more information, please contact pdxscholar@pdx.edu. Recommended Citation Petersen, Amanda Mae, \"Beyond Black and White: An Examination of Afrocentric Facial Features and Sex in Criminal Sentencing\" (2014). Dissertations and Theses. Paper 1855.",
"title": ""
},
{
"docid": "1994429bea369cf4f4395095789b3ec4",
"text": "Since Software-Defined Networking (SDN) gains popularity, mobile/wireless support is mentioned with importance to be handled as one of the crucial aspects in SDN. SDN introduces a centralized entity called SDN controller with the holistic view of the topology on the separated control/data plane architecture. Leveraging the features provided in the SDN controller, mobility management can be simply designed and lightweight, thus there is no need to define and rely on new mobility entities such as given in the traditional IP mobility management architectures. In this paper, we design and implement lightweight IPv6 mobility management in Open Network Operating System (ONOS) that is an open-source SDN control platform for service providers. For the lightweight mobility management, we implement the Neighbor Discovery Proxy (ND Proxy) function into the OpenFlow-enabled AP and switches, and ONOS controller module to handle the receiving ICMPv6 message and to send the unique home network prefix address to an IPv6 host. Thus this approach enables mobility management without bringing or integrating on traditional IP mobility protocols. The proposed idea was experimentally evaluated in the ONOS controller and Raspberry Pi based testbed, identifying the obtained handoff signaling latency is in the acceptable performance range.",
"title": ""
},
{
"docid": "3f5f3a31cbf45065ea82cf60140a8bf5",
"text": "This paper presents a nonholonomic path planning method, aiming at taking into considerations of curvature constraint, length minimization, and computational demand, for car-like mobile robot based on cubic spirals. The generated path is made up of at most five segments: at most two maximal-curvature cubic spiral segments with zero curvature at both ends in connection with up to three straight line segments. A numerically efficient process is presented to generate a Cartesian shortest path among the family of paths considered for a given pair of start and destination configurations. Our approach is resorted to minimization via linear programming over the sum of length of each path segment of paths synthesized based on minimal locomotion cubic spirals linking start and destination orientations through a selected intermediate orientation. The potential intermediate configurations are not necessarily selected from the symmetric mean circle for non-parallel start and destination orientations. The novelty of the presented path generation method based on cubic spirals is: (i) Practical: the implementation is straightforward so that the generation of feasible paths in an environment free of obstacles is efficient in a few milliseconds; (ii) Flexible: it lends itself to various generalizations: readily applicable to mobile robots capable of forward and backward motion and Dubins’ car (i.e. car with only forward driving capability); well adapted to the incorporation of other constraints like wall-collision avoidance encountered in robot soccer games; straightforward extension to planning a path connecting an ordered sequence of target configurations in simple obstructed environment. © 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "6eca26209b9fcca8a9df76307108a3a8",
"text": "Transform-based lossy compression has a huge potential for hyperspectral data reduction. Hyperspectral data are 3-D, and the nature of their correlation is different in each dimension. This calls for a careful design of the 3-D transform to be used for compression. In this paper, we investigate the transform design and rate allocation stage for lossy compression of hyperspectral data. First, we select a set of 3-D transforms, obtained by combining in various ways wavelets, wavelet packets, the discrete cosine transform, and the Karhunen-Loegraveve transform (KLT), and evaluate the coding efficiency of these combinations. Second, we propose a low-complexity version of the KLT, in which complexity and performance can be balanced in a scalable way, allowing one to design the transform that better matches a specific application. Third, we integrate this, as well as other existing transforms, in the framework of Part 2 of the Joint Photographic Experts Group (JPEG) 2000 standard, taking advantage of the high coding efficiency of JPEG 2000, and exploiting the interoperability of an international standard. We introduce an evaluation framework based on both reconstruction fidelity and impact on image exploitation, and evaluate the proposed algorithm by applying this framework to AVIRIS scenes. It is shown that the scheme based on the proposed low-complexity KLT significantly outperforms previous schemes as to rate-distortion performance. As for impact on exploitation, we consider multiclass hard classification, spectral unmixing, binary classification, and anomaly detection as benchmark applications",
"title": ""
},
{
"docid": "921b4ecaed69d7396285909bd53a3790",
"text": "Brain mapping transforms the brain cortical surface to canonical planar domains, which plays a fundamental role in morphological study. Most existing brain mapping methods are based on angle preserving maps, which may introduce large area distortions. This work proposes an area preserving brain mapping method based on Monge-Brenier theory. The brain mapping is intrinsic to the Riemannian metric, unique, and diffeomorphic. The computation is equivalent to convex energy minimization and power Voronoi diagram construction. Comparing to the existing approaches based on Monge-Kantorovich theory, the proposed one greatly reduces the complexity (from n2 unknowns to n ), and improves the simplicity and efficiency. Experimental results on caudate nucleus surface mapping and cortical surface mapping demonstrate the efficacy and efficiency of the proposed method. Conventional methods for caudate nucleus surface mapping may suffer from numerical instability, in contrast, current method produces diffeomorpic mappings stably. In the study of cortical surface classification for recognition of Alzheimer's Disease, the proposed method outperforms some other morphometry features.",
"title": ""
},
{
"docid": "9c717907ec6af9a4edebae84e71ef3f1",
"text": "We study a model of fairness in secure computation in which an adversarial party that aborts on receiving output is forced to pay a mutually predefined monetary penalty. We then show how the Bitcoin network can be used to achieve the above notion of fairness in the two-party as well as the multiparty setting (with a dishonest majority). In particular, we propose new ideal functionalities and protocols for fair secure computation and fair lottery in this model. One of our main contributions is the definition of an ideal primitive, which we call F CR (CR stands for “claim-or-refund”), that formalizes and abstracts the exact properties we require from the Bitcoin network to achieve our goals. Naturally, this abstraction allows us to design fair protocols in a hybrid model in which parties have access to the F CR functionality, and is otherwise independent of the Bitcoin ecosystem. We also show an efficient realization of F CR that requires only two Bitcoin transactions to be made on the network. Our constructions also enjoy high efficiency. In a multiparty setting, our protocols only require a constant number of calls to F CR per party on top of a standard multiparty secure computation protocol. Our fair multiparty lottery protocol improves over previous solutions which required a quadratic number of Bitcoin transactions.",
"title": ""
},
{
"docid": "dc546e170054e505842a510ca04dc137",
"text": "Machine learning (ML) and pattern matching (PM) are powerful computer science techniques which can derive knowledge from big data, and provide prediction and matching. Since nanometer VLSI design and manufacturing have extremely high complexity and gigantic data, there has been a surge recently in applying and adapting machine learning and pattern matching techniques in VLSI physical design (including physical verification), e.g., lithography hotspot detection and data/pattern-driven physical design, as ML and PM can raise the level of abstraction from detailed physics-based simulations and provide reasonably good quality-of-result. In this paper, we will discuss key techniques and recent results of machine learning and pattern matching, with their applications in physical design.",
"title": ""
},
{
"docid": "f372bc2ed27f5d4c08087ddc46e5373e",
"text": "This work investigates the practice of credit scoring and introduces the use of the clustered support vector machine (CSVM) for credit scorecard development. This recently designed algorithm addresses some of the limitations noted in the literature that is associated with traditional nonlinear support vector machine (SVM) based methods for classification. Specifically, it is well known that as historical credit scoring datasets get large, these nonlinear approaches while highly accurate become computationally expensive. Accordingly, this study compares the CSVM with other nonlinear SVM based techniques and shows that the CSVM can achieve comparable levels of classification performance while remaining relatively cheap computationally. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c5428f44292952bfb9443f61aa6d6ce0",
"text": "In this letter, a tunable protection switch device using open stubs for $X$ -band low-noise amplifiers (LNAs) is proposed. The protection switch is implemented using p-i-n diodes. As the parasitic inductance in the p-i-n diodes may degrade the protection performance, tunable open stubs are attached to these diodes to obtain a grounding effect. The performance is optimized for the desired frequency band by adjusting the lengths of the microstrip line open stubs. The designed LNA protection switch is fabricated and measured, and sufficient isolation is obtained for a 200 MHz operating band. The proposed protection switch is suitable for solid-state power amplifier radars in which the LNAs need to be protected from relatively long pulses.",
"title": ""
},
{
"docid": "4b6059e28e41e0e6da2b9ded26e88ae0",
"text": "NADPH oxidase (NOX) isoforms together have multiple functions that are important for normal physiology and have been implicated in the pathogenesis of a broad range of diseases, including atherosclerosis, cancer and neurodegenerative diseases. The phagocyte NADPH oxidase (NOX2) is critical for antimicrobial host defence. Chronic granulomatous disease (CGD) is an inherited disorder of NOX2 characterized by severe life-threatening bacterial and fungal infections and by excessive inflammation, including Crohn's-like inflammatory bowel disease (IBD). NOX2 defends against microbes through the direct antimicrobial activity of reactive oxidants and through activation of granular proteases and generation of neutrophil extracellular traps (NETs). NETosis involves the breakdown of cell membranes and extracellular release of chromatin and neutrophil granular constituents that target extracellular pathogens. Although the immediate effects of oxidant generation and NETosis are predicted to be injurious, NOX2, in several contexts, limits inflammation and injury by modulation of key signalling pathways that affect neutrophil accumulation and clearance. NOX2 also plays a role in antigen presentation and regulation of adaptive immunity. Specific NOX2-activated pathways such as nuclear factor erythroid 2-related factor 2 (Nrf2), a transcriptional factor that induces antioxidative and cytoprotective responses, may be important therapeutic targets for CGD and, more broadly, diseases associated with excessive inflammation and injury.",
"title": ""
},
{
"docid": "a383d9b392a58f6ba8a7192104e99600",
"text": "In this correspondence, we present a new universal entropy estimator for stationary ergodic sources, prove almost sure convergence, and establish an upper bound on the convergence rate for finite-alphabet finite memory sources. The algorithm is motivated by data compression using the Burrows-Wheeler block sorting transform (BWT). By exploiting the property that the BWT output sequence is close to a piecewise stationary memoryless source, we can segment the output sequence and estimate probabilities in each segment. Experimental results show that our algorithm outperforms Lempel-Ziv (LZ) string-matching-based algorithms.",
"title": ""
},
{
"docid": "0453d395af40160b4f66787bb9ac8e96",
"text": "Two aspect of programming languages, recursive definitions and type declarations are analyzed in detail. Church's %-calculus is used as a model of a programming language for purposes of the analysis. The main result on recursion is an analogue to Kleene's first recursion theorem: If A = FA for any %-expressions A and F, then A is an extension of YF in the sense that if E[YF], any expression containing YF, has a normal form then E[YF] = E[A]. Y is Curry's paradoxical combinator. The result is shown to be invariant for many different versions of Y. A system of types and type declarations is developed for the %-calculus and its semantic assumptions are identified. The system is shown to be adequate in the sense that it permits a preprocessor to check formulae prior to evaluation to prevent type errors. It is shown that any formula with a valid assignment of types to all its subexpressions must have a normal form. Thesis Supervisor: John M. Wozencraft Title: Professor of Electrical Engineering",
"title": ""
},
{
"docid": "88a41637c732aae49503bb8d94f1790a",
"text": "Different demographics, e.g., gender or age, can demonstrate substantial variation in their language use, particularly in informal contexts such as social media. In this paper we focus on learning gender differences in the use of subjective language in English, Spanish, and Russian Twitter data, and explore cross-cultural differences in emoticon and hashtag use for male and female users. We show that gender differences in subjective language can effectively be used to improve sentiment analysis, and in particular, polarity classification for Spanish and Russian. Our results show statistically significant relative F-measure improvement over the gender-independent baseline 1.5% and 1% for Russian, 2% and 0.5% for Spanish, and 2.5% and 5% for English for polarity and subjectivity classification.",
"title": ""
},
{
"docid": "00759cb892009cb002c3e1de9cb1bf7c",
"text": "Vehicles are currently being developed and sold with increasing levels of connectivity and automation. As with all networked computing devices, increased connectivity often results in a heightened risk of a cyber security attack. Furthermore, increased automation exacerbates any risk by increasing the opportunities for the adversary to implement a successful attack. In this paper, a large volume of publicly accessible literature is reviewed and compartmentalized based on the vulnerabilities identified and mitigation techniques developed. This review highlighted that the majority of studies are reactive and vulnerabilities are often discovered by friendly adversaries (white-hat hackers). Many gaps in the knowledge base were identified. Priority should be given to address these knowledge gaps to minimize future cyber security risks in the connected and autonomous vehicle sector.",
"title": ""
},
{
"docid": "e88deb80e033c43003f1db9967eb0ec6",
"text": "Previous RNN architectures have largely been superseded by LSTM, or “Long Short-Term Memory”. Since its introduction, there have been many variations on this simple design. However, it is still widely used and we are not aware of a gated-RNN architecture that outperforms LSTM in a broad sense while still being as simple and efficient. In this paper we propose a modified LSTM-like architecture. Our architecture is still simple and achieves better performance on the tasks that we tested on. We also introduce a new RNN performance benchmark that uses the handwritten digits and stresses several important network capabilities.",
"title": ""
},
{
"docid": "c175910d1809ad6dc073f79e4ca15c0c",
"text": "The Global Positioning System (GPS) double-difference carrier-phase data are biased by an integer number of cycles. In this contribution a new method is introduced that enables very fast integer least-squares estimation of the ambiguities. The method makes use of an ambiguity transformation that allows one to reformulate the original ambiguity estimation problem as a new problem that is much easier to solve. The transformation aims at decorrelating the least-squares ambiguities and is based on an integer approximation of the conditional least-squares transformation. And through a flattening of the typical discontinuity in the GPS-spectrum of conditional variances of the ambiguities, the transformation returns new ambiguities that show a dramatic improvement in precision in comparison with the original double-difference ambiguities.",
"title": ""
}
] |
scidocsrr
|
c7fe6f0d3ce5d6f4407df003df4ad95d
|
Deep Learning for Image Denoising: A Survey
|
[
{
"docid": "321abc49830c6d8c062087150f00532f",
"text": "In this paper, we propose an approach to learn hierarchical features for visual object tracking. First, we offline learn features robust to diverse motion patterns from auxiliary video sequences. The hierarchical features are learned via a two-layer convolutional neural network. Embedding the temporal slowness constraint in the stacked architecture makes the learned features robust to complicated motion transformations, which is important for visual object tracking. Then, given a target video sequence, we propose a domain adaptation module to online adapt the pre-learned features according to the specific target object. The adaptation is conducted in both layers of the deep feature learning module so as to include appearance information of the specific target object. As a result, the learned hierarchical features can be robust to both complicated motion transformations and appearance changes of target objects. We integrate our feature learning algorithm into three tracking methods. Experimental results demonstrate that significant improvement can be achieved using our learned hierarchical features, especially on video sequences with complicated motion transformations.",
"title": ""
},
{
"docid": "7926ab6b5cd5837a9b3f59f8a1b3f5ac",
"text": "Recently, very deep convolutional neural networks (CNNs) have been attracting considerable attention in image restoration. However, as the depth grows, the longterm dependency problem is rarely realized for these very deep models, which results in the prior states/layers having little influence on the subsequent ones. Motivated by the fact that human thoughts have persistency, we propose a very deep persistent memory network (MemNet) that introduces a memory block, consisting of a recursive unit and a gate unit, to explicitly mine persistent memory through an adaptive learning process. The recursive unit learns multi-level representations of the current state under different receptive fields. The representations and the outputs from the previous memory blocks are concatenated and sent to the gate unit, which adaptively controls how much of the previous states should be reserved, and decides how much of the current state should be stored. We apply MemNet to three image restoration tasks, i.e., image denosing, super-resolution and JPEG deblocking. Comprehensive experiments demonstrate the necessity of the MemNet and its unanimous superiority on all three tasks over the state of the arts. Code is available at https://github.com/tyshiwo/MemNet.",
"title": ""
},
{
"docid": "b5453d9e4385d5a5ff77997ad7e3f4f0",
"text": "We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.",
"title": ""
}
] |
[
{
"docid": "83d330486c50fe2ae1d6960a4933f546",
"text": "In this paper, an upgraded version of vehicle tracking system is developed for inland vessels. In addition to the features available in traditional VTS (Vehicle Tracking System) for automobiles, it has the capability of remote monitoring of the vessel's motion and orientation. Furthermore, this device can detect capsize events and other accidents by motion tracking and instantly notify the authority and/or the owner with current coordinates of the vessel, which is obtained using the Global Positioning System (GPS). This can certainly boost up the rescue process and minimize losses. We have used GSM network for the communication between the device installed in the ship and the ground control. So, this can be implemented only in the inland vessels. But using iridium satellite communication instead of GSM will enable the device to be used in any sea-going ships. At last, a model of an integrated inland waterway control system (IIWCS) based on this device is discussed.",
"title": ""
},
{
"docid": "fc9061348b46fc1bf7039fa5efcbcea1",
"text": "We propose that a leadership identity is coconstructed in organizations when individuals claim and grant leader and follower identities in their social interactions. Through this claiming-granting process, individuals internalize an identity as leader or follower, and those identities become relationally recognized through reciprocal role adoption and collectively endorsed within the organizational context. We specify the dynamic nature of this process, antecedents to claiming and granting, and an agenda for research on leadership identity and development.",
"title": ""
},
{
"docid": "fb80c27ab2615373a316605082adadbb",
"text": "The use of sparse representations in signal and image processing is gradually increasing in the past several years. Obtaining an overcomplete dictionary from a set of signals allows us to represent them as a sparse linear combination of dictionary atoms. Pursuit algorithms are then used for signal decomposition. A recent work introduced the K-SVD algorithm, which is a novel method for training overcomplete dictionaries that lead to sparse signal representation. In this work we propose a new method for compressing facial images, based on the K-SVD algorithm. We train K-SVD dictionaries for predefined image patches, and compress each new image according to these dictionaries. The encoding is based on sparse coding of each image patch using the relevant trained dictionary, and the decoding is a simple reconstruction of the patches by linear combination of atoms. An essential pre-process stage for this method is an image alignment procedure, where several facial features are detected and geometrically warped into a canonical spatial location. We present this new method, analyze its results and compare it to several competing compression techniques. 2008 Published by Elsevier Inc.",
"title": ""
},
{
"docid": "c6a23113b0e88c884eaddfba9cce2667",
"text": "Recent research in machine learning has focused on breaking audio spectrograms into separate sources of sound using latent variable decompositions. These methods require that the number of sources be specified in advance, which is not always possible. To address this problem, we develop Gamma Process Nonnegative Matrix Factorization (GaP-NMF), a Bayesian nonparametric approach to decomposing spectrograms. The assumptions behind GaP-NMF are based on research in signal processing regarding the expected distributions of spectrogram data, and GaP-NMF automatically discovers the number of latent sources. We derive a mean-field variational inference algorithm and evaluate GaP-NMF on both synthetic data and recorded music.",
"title": ""
},
{
"docid": "fac1eebdae6719224a6bd01785c72551",
"text": "Tolerance design has become a very sensitive and important issue in product and process development because of increasing demand for quality products and the growing requirements for automation in manufacturing. This chapter presents tolerance stack up analysis of dimensional and geometrical tolerances. The stack up of tolerances is important for functionality of the mechanical assembly as well as optimizing the cost of the system. Many industries are aware of the importance of geometrical dimensioning & Tolerancing (GDT) of their product design. Conventional methods of tolerance stack up analysis are tedious and time consuming. Stack up of geometrical tolerances is usually difficult as it involves application of numerous rules & conditions. This chapter introduces the various approaches viz. Generic Capsule, Quickie and Catena methods, used towards tolerance stack up analysis for geometrical tolerances. Automation of stack up of geometrical tolerances can be used for tolerance allocation on the components as well as their assemblies considering the functionality of the system. Stack of geometrical tolerances has been performed for individual components as well as assembly of these components.",
"title": ""
},
{
"docid": "d299f1ff3249a68b582494713e02a6bd",
"text": "We consider the Vehicle Routing Problem, in which a fixed fleet of delivery vehicles of uniform capacity must service known customer demands for a single commodity from a common depot at minimum transit cost. This difficult combinatorial problem contains both the Bin Packing Problem and the Traveling Salesman Problem (TSP) as special cases and conceptually lies at the intersection of these two well-studied problems. The capacity constraints of the integer programming formulation of this routing model provide the link between the underlying routing and packing structures. We describe a decomposition-based separation methodology for the capacity constraints that takes advantage of our ability to solve small instances of the TSP efficiently. Specifically, when standard procedures fail to separate a candidate point, we attempt to decompose it into a convex combination of TSP tours; if successful, the tours present in this decomposition are examined for violated capacity constraints; if not, the Farkas Theorem provides a hyperplane separating the point from the TSP polytope. We present some extensions of this basic concept and a general framework within which it can be applied to other combinatorial models. Computational results are given for an implementation within the parallel branch, cut, and price framework SYMPHONY.",
"title": ""
},
{
"docid": "368c91e483429b54989efea3a80fb370",
"text": "A large amount of land-use, environment, socio-economic, energy and transport data is generated in cities. An integrated perspective of managing and analysing such big data can answer a number of science, policy, planning, governance and business questions and support decision making in enabling a smarter environment. This paper presents a theoretical and experimental perspective on the smart cities focused big data management and analysis by proposing a cloud-based analytics service. A prototype has been designed and developed to demonstrate the effectiveness of the analytics service for big data analysis. The prototype has been implemented using Hadoop and Spark and the results are compared. The service analyses the Bristol Open data by identifying correlations between selected urban environment indicators. Experiments are performed using Hadoop and Spark and results are presented in this paper. The data pertaining to quality of life mainly crime and safety & economy and employment was analysed from the data catalogue to measure the indicators spread over years to assess positive and negative trends.",
"title": ""
},
{
"docid": "7401b3a6801b5c1349d961434ca69a3d",
"text": "developed out of a need to solve a problem. The problem was posed, in the late 1960s, to the Optical Sciences Center (OSC) at the University of Arizona by the US Air Force. They wanted to improve the images of satellites taken from earth. The earth's atmosphere limits the image quality and exposure time of stars and satellites taken with telescopes over 5 inches in diameter at low altitudes and 10 to 12 inches in diameter at high altitudes. Dr. Aden Mienel was director of the OSC at that time. He came up with the idea of enhancing images of satellites by measuring the Optical Transfer Function (OTF) of the atmosphere and dividing the OTF of the image by the OTF of the atmosphere. The trick was to measure the OTF of the atmosphere at the same time the image was taken and to control the exposure time so as to capture a snapshot of the atmospheric aberrations rather than to average over time. The measured wavefront error in the atmosphere should not change more than /10 over the exposure time. The exposure time for a low earth orbit satellite imaged from a mountaintop was determined to be about 1/60 second. Mienel was an astronomer and had used the standard Hartmann test (Fig 1), where large wooden or cardboard panels were placed over the aperture of a large telescope. The panels had an array of holes that would allow pencils of rays from stars to be traced through the telescope system. A photographic plate was placed inside and outside of focus, with a sufficient separation, so the pencil of rays would be separated from each other. Each hole in the panel would produce its own blurry image of the star. By taking two images a known distance apart and measuring the centroid of the images, one can trace the rays through the focal plane. Hartmann used these ray traces to calculate figures of merit for large telescopes. The data can also be used to make ray intercept curves (H'-tan U'). When Mienel could not cover the aperture while taking an image of the satellite, he came up with the idea of inserting a beam splitter in collimated space behind the eyepiece and placing a plate with holes in it at the image of the pupil. Each hole would pass a pencil of rays to a vidicon tube (this was before …",
"title": ""
},
{
"docid": "1e7c1dfe168aec2353b31613811112ae",
"text": "A great video title describes the most salient event compactly and captures the viewer’s attention. In contrast, video captioning tends to generate sentences that describe the video as a whole. Although generating a video title automatically is a very useful task, it is much less addressed than video captioning. We address video title generation for the first time by proposing two methods that extend state-of-the-art video captioners to this new task. First, we make video captioners highlight sensitive by priming them with a highlight detector. Our framework allows for jointly training a model for title generation and video highlight localization. Second, we induce high sentence diversity in video captioners, so that the generated titles are also diverse and catchy. This means that a large number of sentences might be required to learn the sentence structure of titles. Hence, we propose a novel sentence augmentation method to train a captioner with additional sentence-only examples that come without corresponding videos. We collected a large-scale Video Titles in the Wild (VTW) dataset of 18100 automatically crawled user-generated videos and titles. On VTW, our methods consistently improve title prediction accuracy, and achieve the best performance in both automatic and human evaluation. Finally, our sentence augmentation method also outperforms the baselines on the M-VAD dataset.",
"title": ""
},
{
"docid": "84d9b3e6e4b09515591fb20896b4fa43",
"text": "This paper describes the design and fabrication of low-cost coplanar waveguide (CPW) miniature meander inductors. Inductors are fabricated on a flexible plastic polyimide foil in ink-jet printed technology with silver nanoparticle ink in a single layer. For the first time, the detailed characterization and simulation of CPW inductors in this technology is reported. The inductors are developed with impressive measured self-resonance frequency up to 18.6 GHz. The 2.107-nH inductor measures only 1 mm × 1.7 mm × 0.075 mm and demonstrates a high level of miniaturization in ink-jet printing technology. The measured response characteristics are in excellent agreement with the predicted simulation response.",
"title": ""
},
{
"docid": "afbded5d6624b0b36e5072e3b16175b6",
"text": "The authors propose a method for embedding a multitone watermark using low computational complexity. The proposed approach can guard against reasonable cropping or print-and-scan attacks.",
"title": ""
},
{
"docid": "f4ebbcebefbcc1ba8b6f8e5bf6096645",
"text": "With advances in wireless communication technology, more and more people depend heavily on portable mobile devices for businesses, entertainments and social interactions. Although such portable mobile devices can offer various promising applications, their computing resources remain limited due to their portable size. This however can be overcome by remotely executing computation-intensive tasks on clusters of near by computers known as cloudlets. As increasing numbers of people access the Internet via mobile devices, it is reasonable to envision in the near future that cloudlet services will be available for the public through easily accessible public wireless metropolitan area networks (WMANs). However, the outdated notion of treating cloudlets as isolated data-centers-in-a-box must be discarded as there are clear benefits to connecting multiple cloudlets together to form a network. In this paper we investigate how to balance the workload between multiple cloudlets in a network to optimize mobile application performance. We first introduce a system model to capture the response times of offloaded tasks, and formulate a novel optimization problem, that is to find an optimal redirection of tasks between cloudlets such that the maximum of the average response times of tasks at cloudlets is minimized. We then propose a fast, scalable algorithm for the problem. We finally evaluate the performance of the proposed algorithm through experimental simulations. The experimental results demonstrate the significant potential of the proposed algorithm in reducing the response times of tasks.",
"title": ""
},
{
"docid": "c8d5ca95f6cd66461729cfc03772f5d0",
"text": "Statistical relationalmodels combine aspects of first-order logic andprobabilistic graphical models, enabling them to model complex logical and probabilistic interactions between large numbers of objects. This level of expressivity comes at the cost of increased complexity of inference, motivating a new line of research in lifted probabilistic inference. By exploiting symmetries of the relational structure in themodel, and reasoning about groups of objects as awhole, lifted algorithms dramatically improve the run time of inference and learning. The thesis has five main contributions. First, we propose a new method for logical inference, called first-order knowledge compilation. We show that by compiling relational models into a new circuit language, hard inference problems become tractable to solve. Furthermore, we present an algorithm that compiles relational models into our circuit language. Second, we show how to use first-order knowledge compilation for statistical relational models, leading to a new state-of-the-art lifted probabilistic inference algorithm. Third, we develop a formal framework for exact lifted inference, including a definition in terms of its complexity w.r.t. the number of objects in the world. From this follows a first completeness result, showing that the two-variable class of statistical relational models always supports lifted inference. Fourth, we present an algorithm for",
"title": ""
},
{
"docid": "f3e6330844e73edfd3f9c79c8ceaefc8",
"text": "BACKGROUND\nA number of surface scanning systems with the ability to quickly and easily obtain 3D digital representations of the foot are now commercially available. This review aims to present a summary of the reported use of these technologies in footwear development, the design of customised orthotics, and investigations for other ergonomic purposes related to the foot.\n\n\nMETHODS\nThe PubMed and ScienceDirect databases were searched. Reference lists and experts in the field were also consulted to identify additional articles. Studies in English which had 3D surface scanning of the foot as an integral element of their protocol were included in the review.\n\n\nRESULTS\nThirty-eight articles meeting the search criteria were included. Advantages and disadvantages of using 3D surface scanning systems are highlighted. A meta-analysis of studies using scanners to investigate the changes in foot dimensions during varying levels of weight bearing was carried out.\n\n\nCONCLUSIONS\nModern 3D surface scanning systems can obtain accurate and repeatable digital representations of the foot shape and have been successfully used in medical, ergonomic and footwear development applications. The increasing affordability of these systems presents opportunities for researchers investigating the foot and for manufacturers of foot related apparel and devices, particularly those interested in producing items that are customised to the individual. Suggestions are made for future areas of research and for the standardization of the protocols used to produce foot scans.",
"title": ""
},
{
"docid": "61f6fe08fd7c78f066438b6202dbe843",
"text": "State-of-charge (SOC) measures energy left in a battery, and it is critical for modeling and managing batteries. Developing efficient yet accurate SOC algorithms remains a challenging task. Most existing work uses regression based on a time-variant circuit model, which may be hard to converge and often does not apply to different types of batteries. Knowing open-circuit voltage (OCV) leads to SOC due to the well known mapping between OCV and SOC. In this paper, we propose an efficient yet accurate OCV algorithm that applies to all types of batteries. Using linear system analysis but without a circuit model, we calculate OCV based on the sampled terminal voltage and discharge current of the battery. Experiments show that our algorithm is numerically stable, robust to history dependent error, and obtains SOC with less than 4% error compared to a detailed battery simulation for a variety of batteries. Our OCV algorithm is also efficient, and can be used as a real-time electro-analytical tool revealing what is going on inside the battery.",
"title": ""
},
{
"docid": "a5e52fc842c9b1780282efc071d87b0e",
"text": "The highly influential framework of conceptual spaces provides a geometric way of representing knowledge. Instances are represented by points and concepts are represented by regions in a (potentially) high-dimensional space. Based on our recent formalization, we present a comprehensive implementation of the conceptual spaces framework that is not only capable of representing concepts with inter-domain correlations, but that also offers a variety of operations on these concepts.",
"title": ""
},
{
"docid": "ee8a708913949db5dbdc43bea60fce37",
"text": "Sign language is the native language of deaf and hearing impaired people which they prefer to use on their daily life. Few interpreters are available to facilitate communication between deaf and vocal people. However, this is neither practical nor possible for all situations. Advances in information technology encouraged the development of systems that can facilitate the automatic translation between sign language and spoken language, and thus removing barriers facing the integration of deaf people in the society. A lot of research has been carried on the development of systems that translate sign languages into spoken words and the reverse. However, only recently systems translating between Arabic sign language and spoken language have been developed. Many signs of the Arabic sign language are reflection of the environment (White color in Arabic sign language is a finger pointing to the chest of the signer as the tradition for male is to wear white color dress). Several review papers have been published on the automatic recognition of other sign languages. This paper represents the first attempt to review systems and methods for the image based automatic recognition of the Arabic sign language. It reviews most published papers and discusses a variety of recognition methods. Additionally, the paper highlights the main challenges characterizing the Arabic sign language as well as potential future research directions in this area.",
"title": ""
},
{
"docid": "7228073bef61131c2efcdc736d90ca1b",
"text": "With the advent of word representations, word similarity tasks are becoming increasing popular as an evaluation metric for the quality of the representations. In this paper, we present manually annotated monolingual word similarity datasets of six Indian languages – Urdu, Telugu, Marathi, Punjabi, Tamil and Gujarati. These languages are most spoken Indian languages worldwide after Hindi and Bengali. For the construction of these datasets, our approach relies on translation and re-annotation of word similarity datasets of English. We also present baseline scores for word representation models using state-of-the-art techniques for Urdu, Telugu and Marathi by evaluating them on newly created word similarity datasets.",
"title": ""
},
{
"docid": "eea45eb670d380e722f3148479a0864d",
"text": "In this paper, we propose a hybrid Differential Evolution (DE) algorithm based on the fuzzy C-means clustering algorithm, referred to as FCDE. The fuzzy C-means clustering algorithm is incorporated with DE to utilize the information of the population efficiently, and hence it can generate good solutions and enhance the performance of the original DE. In addition, the population-based algorithmgenerator is adopted to efficiently update the population with the clustering offspring. In order to test the performance of our approach, 13 high-dimensional benchmark functions of diverse complexities are employed. The results show that our approach is effective and efficient. Compared with other state-of-the-art DE approaches, our approach performs better, or at least comparably, in terms of the quality of the final solutions and the reduction of the number of fitness function evaluations (NFFEs).",
"title": ""
},
{
"docid": "40229eb3a95ec25c1c3247edbcc22540",
"text": "The aim of this paper is the identification of a superordinate research framework for describing emerging IT-infrastructures within manufacturing, logistics and Supply Chain Management. This is in line with the thoughts and concepts of the Internet of Things (IoT), as well as with accompanying developments, namely the Internet of Services (IoS), Mobile Computing (MC), Big Data Analytics (BD) and Digital Social Networks (DSN). Furthermore, Cyber-Physical Systems (CPS) and their enabling technologies as a fundamental component of all these research streams receive particular attention. Besides of the development of an eponymous research framework, relevant applications against the background of the technological trends as well as potential areas of interest for future research, both raised from the economic practice's perspective, are identified.",
"title": ""
}
] |
scidocsrr
|
f7661318e188779d958d5f0c477c87c4
|
Machine Classification and Analysis of Suicide-Related Communication on Twitter
|
[
{
"docid": "63af822cd877b95be976f990b048f90c",
"text": "We propose a method for generating classifier ensembles based on feature extraction. To create the training data for a base classifier, the feature set is randomly split into K subsets (K is a parameter of the algorithm) and principal component analysis (PCA) is applied to each subset. All principal components are retained in order to preserve the variability information in the data. Thus, K axis rotations take place to form the new features for a base classifier. The idea of the rotation approach is to encourage simultaneously individual accuracy and diversity within the ensemble. Diversity is promoted through the feature extraction for each base classifier. Decision trees were chosen here because they are sensitive to rotation of the feature axes, hence the name \"forest\". Accuracy is sought by keeping all principal components and also using the whole data set to train each base classifier. Using WEKA, we examined the rotation forest ensemble on a random selection of 33 benchmark data sets from the UCI repository and compared it with bagging, AdaBoost, and random forest. The results were favorable to rotation forest and prompted an investigation into diversity-accuracy landscape of the ensemble models. Diversity-error diagrams revealed that rotation forest ensembles construct individual classifiers which are more accurate than these in AdaBoost and random forest, and more diverse than these in bagging, sometimes more accurate as well",
"title": ""
},
{
"docid": "964518240d0db37b6951617f3f2dc97b",
"text": "BACKGROUND\nThe media and the Internet may be having an influence on suicidal behavior. Online social networks such as Facebook represent a new facet of global information transfer. The impact of these online social networks on suicidal behavior has not yet been evaluated.\n\n\nAIMS\nTo discuss potential effects of suicide notes on Facebook on suicide prevention and copycat suicides, and to create awareness among health care professionals.\n\n\nMETHODS\nWe present a case involving a suicide note on Facebook and discuss potential consequences of this phenomenon based on literature found searching PubMed and Google.\n\n\nRESULTS\nThere are numerous reports of suicide notes on Facebook in the popular press, but none in the professional literature. Online social network users attempted to prevent planned suicides in several reported cases. To date there is no documented evidence of a copycat suicide, directly emulating a suicide announced on Facebook.\n\n\nCONCLUSIONS\nSuicide notes on online social networks may allow for suicide prevention via the immediate intervention of other network users. But it is not yet clear to what extent suicide notes on online social networks actually induce copycat suicides. These effects deserve future evaluation and research.",
"title": ""
}
] |
[
{
"docid": "20b63e40fb45e9b392b37ac9a4ac46a7",
"text": "Prior studies linking grit-defined as perseverance and passion for long-term goals-to performance are beset by contradictory evidence. As a result, commentators have increasingly declared that grit has limited effects. We propose that this inconsistent evidence has occurred because prior research has emphasized perseverance and ignored, both theoretically and empirically, the critical role of passion, which we define as a strong feeling toward a personally important value/preference that motivates intentions and behaviors to express that value/preference. We suggest that combining the grit scale-which only captures perseverance-with a measure that assesses whether individuals attain desired levels of passion will predict performance. We first metaanalyzed 127 studies (n = 45,485) that used the grit scale and assessed performance, and found that effect sizes are larger in studies where participants were more passionate for the performance domain. Second, in a survey of employees matched to supervisor-rated job performance (n = 422), we found that the combination of perseverance, measured through the grit scale, and passion attainment, measured through a new scale, predicted higher performance. A final study measured perseverance and passion attainment in a sample of students (n = 248) and linked these to their grade-point average (GPA), finding that the combination of perseverance and passion attainment predicted higher GPAs in part through increased immersion. The present results help resolve the mixed evidence of grit's relationship with performance by highlighting the important role that passion plays in predicting performance. By adequately measuring both perseverance and passion, the present research uncovers grit's true predictive power.",
"title": ""
},
{
"docid": "8c043576bd1a73b783890cdba3a5e544",
"text": "We present a novel approach to collaborative prediction, using low-norm instead of low-rank factorizations. The approach is inspired by, and has strong connections to, large-margin linear discrimination. We show how to learn low-norm factorizations by solving a semi-definite program, and discuss generalization error bounds for them.",
"title": ""
},
{
"docid": "e0ec22fcdc92abe141aeb3fa67e9e55a",
"text": "A mobile wireless infrastructure-less network is a collection of wireless mobile nodes dynamically forming a temporary network without the use of any preexisting network infrastructure or centralized administration. However, the battery life of these nodes is very limited, if their battery power is depleted fully, then this result in network partition, so these nodes becomes a critical spot in the network. These critical nodes can deplete their battery power earlier because of excessive load and processing for data forwarding. These unbalanced loads turn to increase the chances of nodes failure, network partition and reduce the route lifetime and route reliability of the MANETs. Due to this, energy consumption issue becomes a vital research topic in wireless infrastructure -less networks. The energy efficient routing is a most important design criterion for MANETs. This paper focuses of the routing approaches are based on the minimization of energy consum ption of individual nodes and many other ways. This paper surveys and classifies numerous energy-efficient routing mechanisms proposed for wireless infrastructure-less networks. Also presents detailed comparative study of lager number of energy efficient/power aware routing protocol in MANETs. Aim of this paper to helps the new researchers and application developers to explore an innovative idea for designing more efficient routing protocols. Keywords— Ad hoc Network Routing, Load Distribution, Energy Eff icient, Power Aware, Protocol Stack",
"title": ""
},
{
"docid": "11eec96ef8f0cd451ce7bb28866fbcda",
"text": "Acne vulgaris is one of the most common conditions for which all patients, including those with skin of color (Fitzpatrick skin types IV-VI), seek dermatological care. The multifactorial pathogenesis of acne appears to be the same in ethnic patients as in Caucasians. However, there is controversy over whether certain skin biology characteristics, such as sebum production, differ in ethnic patients. Clinically, acne lesions can appear the same as those seen in Caucasians; however, histologically, all types of acne lesions in African Americans can be associated with intense inflammation including comedones, which can also have some degree of inflammation. It is the sequelae of the disease that are the distinguishing characteristics of acne in skin of color, namely postinflammatory hyperpigmentation and keloidal or hypertrophic scarring. Although the medical and surgical treatment options are the same, it is these features that should be kept in mind when designing a treatment regimen for acne in skin of color.",
"title": ""
},
{
"docid": "c92a415048e3ba5f3836cb2ad952abe7",
"text": "Intersatellite links or crosslinks provide direct connectivity between two or more satellites, thus eliminating the need for intermediate ground stations when sending data. Intersatellite links have been considered for satellite constellation missions involving earth observation and communications. Historically, a large satellite system has needed an extremely high financial budget. However, the advent of the successful CubeSat platform allows for small satellites of less than one kilogram. This low-mass pico-satellite class platform could provide financially feasible support for large platform satellite constellations. This article surveys past and planned large intersatellite linking systems. Then, the article chronicles CubeSat communication subsystems used historically and in the near future. Finally, we examine the history of inter-networking protocols in space and open research issues with the goal of moving towards the next generation intersatellite-linking constellation supported by CubeSat platform satellites.",
"title": ""
},
{
"docid": "a4e57fe3d24d6eb8be1d0e0659dda58a",
"text": "Automated game design has remained a key challenge within the field of Game AI. In this paper, we introduce a method for recombining existing games to create new games through a process called conceptual expansion. Prior automated game design approaches have relied on hand-authored or crowdsourced knowledge, which limits the scope and applications of such systems. Our approach instead relies on machine learning to learn approximate representations of games. Our approach recombines knowledge from these learned representations to create new games via conceptual expansion. We evaluate this approach by demonstrating the ability for the system to recreate existing games. To the best of our knowledge, this represents the first machine learning-based automated game design system.",
"title": ""
},
{
"docid": "d880349c2760a8cd71d86ea3212ba1f0",
"text": "As a class of hard combinatorial optimization problems, the school bus routing problem has received considerable attention in the last decades. For a multi-school system, given the bus trips for each school, the school bus scheduling problem aims at optimizing bus schedules to serve all the trips within the school time windows. In this paper, we propose two approaches for solving the bi-objective school bus scheduling problem: an exact method of mixed integer programming (MIP) and a metaheuristic method which combines simulated annealing with local search. We develop MIP formulations for homogenous and heterogeneous fleet problems respectively and solve the models by MIP solver CPLEX. The bus type-based formulation for heterogeneous fleet problem reduces the model complexity in terms of the number of decision variables and constraints. The metaheuristic method is a two-stage framework for minimizing the number of buses to be used as well as the total travel distance of buses. We evaluate the proposed MIP and the metaheuristic method on two benchmark datasets, showing that on both instances, our metaheuristic method significantly outperforms the respective state-of-the-art methods.",
"title": ""
},
{
"docid": "606bc892776616ffd4f9f9dc44565019",
"text": "Despite the various attractive features that Cloud has to offer, the rate of Cloud migration is rather slow, primarily due to the serious security and privacy issues that exist in the paradigm. One of the main problems in this regard is that of authorization in the Cloud environment, which is the focus of our research. In this paper, we present a systematic analysis of the existing authorization solutions in Cloud and evaluate their effectiveness against well-established industrial standards that conform to the unique access control requirements in the domain. Our analysis can benefit organizations by helping them decide the best authorization technique for deployment in Cloud; a case study along with simulation results is also presented to illustrate the procedure of using our qualitative analysis for the selection of an appropriate technique, as per Cloud consumer requirements. From the results of this evaluation, we derive the general shortcomings of the extant access control techniques that are keeping them from providing successful authorization and, therefore, widely adopted by the Cloud community. To that end, we enumerate the features an ideal access control mechanisms for the Cloud should have, and combine them to suggest the ultimate solution to this major security challenge — access control as a service (ACaaS) for the software as a service (SaaS) layer. We conclude that a meticulous research is needed to incorporate the identified authorization features into a generic ACaaS framework that should be adequate for providing high level of extensibility and security by integrating multiple access control models.",
"title": ""
},
{
"docid": "b6df4868ee1496e581e8b76ca8fb165f",
"text": "Through AspectJ, aspect-oriented programming (AOP) is becoming of increasing interest and availability to Java programmers as it matures as a methodology for improved software modularity via the separation of cross-cutting concerns. AOP proponents often advocate a development strategy where Java programmers write the main application, ignoring cross-cutting concerns, and then AspectJ programmers, domain experts in their specific concerns, weave in the logic for these more specialized cross-cutting concerns. However, several authors have recently debated the merits of this strategy by empirically showing certain drawbacks. The proposed solutions paint a different development strategy where base code and aspect programmers are aware of each other (to varying degrees) and interactions between cross-cutting concerns are planned for early on.\n Herein we explore new possibilities in the language design space that open up when the base code is aware of cross-cutting aspects. Using our insights from this exploration we concretize these new possibilities by extending AspectJ with concise yet powerful constructs, while maintaining full backwards compatibility. These new constructs allow base code and aspects to cooperate in ways that were previously not possible: arbitrary blocks of code can be advised, advice can be explicitly parameterized, base code can guide aspects in where to apply advice, and aspects can statically enforce new constraints upon the base code that they advise. These new techniques allow aspect modularity and program safety to increase. We illustrate the value of our extensions through an example based on transactions.",
"title": ""
},
{
"docid": "c9275012c275a0288849e6eb8e7156c4",
"text": "Evaluation of patients with shoulder disorders often presents challenges. Among the most troublesome are revision surgery in patients with massive rotator cuff tear, atraumatic shoulder instability, revision arthroscopic stabilization surgery, adhesive capsulitis, and bicipital and subscapularis injuries. Determining functional status is critical before considering surgical options in the patient with massive rotator cuff tear. When nonsurgical treatment of atraumatic shoulder stability is not effective, inferior capsular shift is the treatment of choice. Arthroscopic revision of failed arthroscopic shoulder stabilization procedures may be undertaken when bone and tissue quality are good. Arthroscopic release is indicated when idiopathic adhesive capsulitis does not respond to nonsurgical treatment; however, results of both nonsurgical and surgical treatment of posttraumatic and postoperative adhesive capsulitis are often disappointing. Patients not motivated to perform the necessary postoperative therapy following subscapularis repair are best treated with arthroscopic débridement and biceps tenotomy.",
"title": ""
},
{
"docid": "dc817bc11276d76f8d97f67e4b1b2155",
"text": "Abstract A Security Operation Center (SOC) is made up of five distinct modules: event generators, event collectors, message database, analysis engines and reaction management software. The main problem encountered when building a SOC is the integration of all these modules, usually built as autonomous parts, while matching availability, integrity and security of data and their transmission channels. In this paper we will discuss the functional architecture needed to integrate those modules. Chapter one will introduce the concepts behind each module and briefly describe common problems encountered with each of them. In chapter two we will design the global architecture of the SOC. We will then focus on collection & analysis of data generated by sensors in chapters three and four. A short conclusion will describe further research & analysis to be performed in the field of SOC design.",
"title": ""
},
{
"docid": "ba4860f970b966f482b6c68c63b4404d",
"text": "Systems for assessing and tutoring reading skills place unique requirements on underlying ASR technologies. Most responses to a “read out loud” task can be handled with a low perplexity language model, but the educational setting of the task calls for diagnostic measures beyond plain accuracy. Pearson developed an automatic assessment of oral reading fluency that was administered in the field to a large, diverse sample of American adults. Traditional N-gram methods for language modeling are not optimal for the special domain of reading tests because N-grams need too much data and do not produce as accurate recognition. An efficient rule-based language model implemented a set of linguistic rules learned from an archival body of transcriptions, using only the text of the new passage and no passage-specific training data. Results from operational data indicate that this rule-based language model can improve the accuracy of test results and produce useful diagnostic information.",
"title": ""
},
{
"docid": "2956ef98f020e0f17c36a69a890e21dc",
"text": "Complete coverage path planning requires the robot path to cover every part of the workspace, which is an essential issue in cleaning robots and many other robotic applications such as vacuum robots, painter robots, land mine detectors, lawn mowers, automated harvesters, and window cleaners. In this paper, a novel neural network approach is proposed for complete coverage path planning with obstacle avoidance of cleaning robots in nonstationary environments. The dynamics of each neuron in the topologically organized neural network is characterized by a shunting equation derived from Hodgkin and Huxley's (1952) membrane equation. There are only local lateral connections among neurons. The robot path is autonomously generated from the dynamic activity landscape of the neural network and the previous robot location. The proposed model algorithm is computationally simple. Simulation results show that the proposed model is capable of planning collision-free complete coverage robot paths.",
"title": ""
},
{
"docid": "576cfe04b1f39ff4dd3992ec95f85091",
"text": "With the explosive growth of smart IoT devices at the edge of the Internet, embedding sensors on mobile devices for massive data collection and collective environment sensing has been envisioned as a cost-effective solution for IoT applications. However, existing IoT platforms and framework rely on dedicated middleware for (semi-) centralized task dispatching, data storage and incentive provision. Consequently, they are usually expensive to deploy, have limited adaptability to diverse requirements, and face a series of data security and privacy issues. In this paper, we employ permissionless blockchains to construct a purely decentralized platform for data storage and trading in a wirelesspowered IoT crowdsensing system. In the system, IoT sensors use power wirelessly transferred from RF-energy beacons for data sensing and transmission to an access point. The data is then forwarded to the blockchain for distributed ledger services, i.e., data/transaction verification, recording, and maintenance. Due to coupled interference of wireless transmission and transaction fee incurred from blockchain’s distributed ledger services, rational sensors have to decide on their transmission rates to maximize individual utility. Thus, we formulate a noncooperative game model to analyze this competitive situation among the sensors. We provide the analytical condition for the existence of the Nash equilibrium as well as a series of insightful numerical results about the equilibrium strategies in the game.",
"title": ""
},
{
"docid": "e041d7f54e1298d4aa55edbfcbda71ad",
"text": "Charts are common graphic representation for scientific data in technical and business papers. We present a robust system for detecting and recognizing bar charts. The system includes three stages, preprocessing, detection and recognition. The kernel algorithm in detection is newly developed Modified Probabilistic Hough Transform algorithm for parallel lines clusters detection. The main algorithms in recognition are bar pattern reconstruction and text primitives grouping in the Hough space which are also original. The Experiments show the system can also recognize slant bar charts, or even hand-drawn charts.",
"title": ""
},
{
"docid": "19202b2802eef89ccb9e675a7417e02c",
"text": "Stitching videos captured by hand-held mobile cameras can essentially enhance entertainment experience of ordinary users. However, such videos usually contain heavy shakiness and large parallax, which are challenging to stitch. In this paper, we propose a novel approach of video stitching and stabilization for videos captured by mobile devices. The main component of our method is a unified video stitching and stabilization optimization that computes stitching and stabilization simultaneously rather than does each one individually. In this way, we can obtain the best stitching and stabilization results relative to each other without any bias to one of them. To make the optimization robust, we propose a method to identify background of input videos, and also common background of them. This allows us to apply our optimization on background regions only, which is the key to handle large parallax problem. Since stitching relies on feature matches between input videos, and there inevitably exist false matches, we thus propose a method to distinguish between right and false matches, and encapsulate the false match elimination scheme and our optimization into a loop, to prevent the optimization from being affected by bad feature matches. We test the proposed approach on videos that are causally captured by smartphones when walking along busy streets, and use stitching and stability scores to evaluate the produced panoramic videos quantitatively. Experiments on a diverse of examples show that our results are much better than (challenging cases) or at least on par with (simple cases) the results of previous approaches.",
"title": ""
},
{
"docid": "5626f7c767ae20c3b58d2e8fb2b93ba7",
"text": "The presentation starts with a philosophical discussion about computer vision in general. The aim is to put the scope of the book into its wider context, and to emphasize why the notion of scale is crucial when dealing with measured signals, such as image data. An overview of different approaches to multi-scale representation is presented, and a number of special properties of scale-space are pointed out. Then, it is shown how a mathematical theory can be formulated for describing image structures at different scales. By starting from a set of axioms imposed on the first stages of processing, it is possible to derive a set of canonical operators, which turn out to be derivatives of Gaussian kernels at different scales. The problem of applying this theory computationally is extensively treated. A scale-space theory is formulated for discrete signals, and it demonstrated how this representation can be used as a basis for expressing a large number of visual operations. Examples are smoothed derivatives in general, as well as different types of detectors for image features, such as edges, blobs, and junctions. In fact, the resulting scheme for feature detection induced by the presented theory is very simple, both conceptually and in terms of practical implementations. Typically, an object contains structures at many different scales, but locally it is not unusual that some of these \"stand out\" and seem to be more significant than others. A problem that we give special attention to concerns how to find such locally stable scales, or rather how to generate hypotheses about interesting structures for further processing. It is shown how the scale-space theory, based on a representation called the scale-space primal sketch, allows us to extract regions of interest from an image without prior information about what the image can be expected to contain. Such regions, combined with knowledge about the scales at which they occur constitute qualitative information, which can be used for guiding and simplifying other low-level processes. Experiments on different types of real and synthetic images demonstrate how the suggested approach can be used for different visual tasks, such as image segmentation, edge detection, junction detection, and focusof-attention. This work is complemented by a mathematical treatment showing how the behaviour of different types of image structures in scalespace can be analysed theoretically.",
"title": ""
},
{
"docid": "5447d3fe8ed886a8792a3d8d504eaf44",
"text": "Glucose-responsive delivery of insulin mimicking the function of pancreatic β-cells to achieve meticulous control of blood glucose (BG) would revolutionize diabetes care. Here the authors report the development of a new glucose-responsive insulin delivery system based on the potential interaction between the glucose derivative-modified insulin (Glc-Insulin) and glucose transporters on erythrocytes (or red blood cells, RBCs) membrane. After being conjugated with the glucosamine, insulin can efficiently bind to RBC membranes. The binding is reversible in the setting of hyperglycemia, resulting in fast release of insulin and subsequent drop of BG level in vivo. The delivery vehicle can be further simplified utilizing injectable polymeric nanocarriers coated with RBC membrane and loaded with Glc-Insulin. The described work is the first demonstration of utilizing RBC membrane to achieve smart insulin delivery with fast responsiveness.",
"title": ""
},
{
"docid": "9198e035c77e8798462dd97426ed0e67",
"text": "In this paper, we propose a generic technique to model temporal dependencies and sequences using a combination of a recurrent neural network and a Deep Belief Network. Our technique, RNN-DBN, is an amalgamation of the memory state of the RNN that allows it to provide temporal information and a multi-layer DBN that helps in high level representation of the data. This makes RNN-DBNs ideal for sequence generation. Further, the use of a DBN in conjunction with the RNN makes this model capable of significantly more complex data representation than an RBM. We apply this technique to the task of polyphonic music generation.",
"title": ""
},
{
"docid": "1fe00a08e1eb2124d2608e1244228524",
"text": "A 6.4MS/s 13b ADC with a low-power background calibration for DAC mismatch and comparator offset errors is presented. Redundancy deals with DAC settling and facilitates calibration. A two-mode comparator and 0.3fF capacitors reduce power and area. The background calibration can directly detect the sign of the dynamic comparator offset error and the DAC mismatch errors and correct both of them simultaneously in a stepwise feedback loop. The calibration achieves 20dB spur reduction with little area and power overhead. The chip is implemented in 40nm CMOS and consumes 46μW from a 1V supply, and achieves 64.1dB SNDR and a FoM of 5.5 fJ/conversion-step at Nyquist.",
"title": ""
}
] |
scidocsrr
|
e8e1ae9c6b1abf315417dd5d71fc5399
|
Using clustering analysis to improve semi-supervised classification
|
[
{
"docid": "125655821a44bbce2646157c8465e345",
"text": "Due to its wide applicability, the problem of semi-supervised classification is attracting increasing attention in machine learning. Semi-Supervised Support Vector Machines (S3VMs) are based on applying the margin maximization principle to both labeled and unlabeled examples. Unlike SVMs, their formulation leads to a non-convex optimization problem. A suite of algorithms have recently been proposed for solving S3VMs. This paper reviews key ideas in this literature. The performance and behavior of various S3VM algorithms is studied together, under a common experimental setting.",
"title": ""
}
] |
[
{
"docid": "884625359b646dbfe86464f8bbac74c2",
"text": "Active learning aims to select a small subset of data for annotation such that a classifier learned on the data is highly accurate. This is usually done using heuristic selection methods, however the effectiveness of such methods is limited and moreover, the performance of heuristics varies between datasets. To address these shortcomings, we introduce a novel formulation by reframing the active learning as a reinforcement learning problem and explicitly learning a data selection policy, where the policy takes the role of the active learning heuristic. Importantly, our method allows the selection policy learned using simulation on one language to be transferred to other languages. We demonstrate our method using cross-lingual named entity recognition, observing uniform improvements over traditional active learning.",
"title": ""
},
{
"docid": "d763cefd5d584405e1a6c8e32c371c0c",
"text": "Abstract: Whole world and administrators of Educational institutions’ in our country are concerned about regularity of student attendance. Student’s overall academic performance is affected by the student’s present in his institute. Mainly there are two conventional methods for attendance taking and they are by calling student nams or by taking student sign on paper. They both were more time consuming and inefficient. Hence, there is a requirement of computer-based student attendance management system which will assist the faculty for maintaining attendance of presence. The paper reviews various computerized attendance management system. In this paper basic problem of student attendance management is defined which is traditionally taken manually by faculty. One alternative to make student attendance system automatic is provided by Computer Vision. In this paper we review the various computerized system which is being developed by using different techniques. Based on this review a new approach for student attendance recording and management is proposed to be used for various colleges or academic institutes.",
"title": ""
},
{
"docid": "ae6a02ee18e3599c65fb9db22706de44",
"text": "We use a hierarchical Bayesian approach to model user preferences in different contexts or settings. Unlike many previous recommenders, our approach is content-based. We assume that for each context, a user has a different set of preference weights which are linked by a common, “generic context” set of weights. The approach uses Expectation Maximization (EM) to estimate both the generic context weights and the context specific weights. This improves upon many current recommender systems that do not incorporate context into the recommendations they provide. In this paper, we show that by considering contextual information, we can improve our recommendations, demonstrating that it is useful to consider context in giving ratings. Because the approach does not rely on connecting users via collaborative filtering, users are able to interpret contexts in different ways and invent their own",
"title": ""
},
{
"docid": "88a282d44199d47f9694eaac8efee370",
"text": "The mobile data traffic is expected to grow beyond 1000 times by 2020 compared with it in 2010. In order to support 1000 times of capacity increase, improving spectrum efficiency is one of the important approaches. Meanwhile, in Long Term Evolution (LTE)-Advanced, small cell and hotspot are important scenarios for future network deployment to increase the capacity from the network density domain. Under such environment, the probability of high Signal to Interference plus Noise Ratio (SINR) region becomes larger which brings the possibility of introducing higher order modulation, i.e., 256 Quadrature Amplitude Modulation(QAM) to improve the spectrum efficiency. Channel quality indicator (CQI) table design is a key issue to support 256 QAM. In this paper, we investigate the feasibility of 256 QAM by SINR geometry and propose two methods on CQI table design to support the 256 QAM transmission. Simulation results show proposed methods can improve average user equipment (UE) throughput and cell center UE throughput with almost no loss on cell edge UE throughput.",
"title": ""
},
{
"docid": "76151ea99f24bb16f98bf7793f253002",
"text": "The banning in 2006 of the use of antibiotics as animal growth promoters in the European Union has increased demand from producers for alternative feed additives that can be used to improve animal production. This review gives an overview of the most common non-antibiotic feed additives already being used or that could potentially be used in ruminant nutrition. Probiotics, dicarboxylic acids, enzymes and plant-derived products including saponins, tannins and essential oils are presented. The known modes of action and effects of these additives on feed digestion and more especially on rumen fermentations are described. Their utility and limitations in field conditions for modern ruminant production systems and their compliance with the current legislation are also discussed.",
"title": ""
},
{
"docid": "3901c34c1efcdc7eadf15f43005a7980",
"text": "This paper presents substrate integrated waveguide (SIW) cavity resonator and SIW quasi-elliptic filter with low insertion loss and high selectivity based on LTCC for 60-GHz application. The SIW cavity resonators with high Q factor are suggested and applied for the filter having low insertion loss and high selectivity, the resulting Q value and the resonance frequency are 224.5 and 60.371 GHz, respectively. In the filter design, the quasi-elliptic filter using positive and negative couplings is applied to get high selectivity, the positive and negative couplings are evaluated by the holes between resonators and the slots on ground planes, respectively. The proposed SIW quasi-elliptic filter exhibits an insertion loss 1.77 dB at 60.8 GHz and the return loss is better than 10.2 dB with a 3-dB fractional bandwidth of 11 % (6.92 GHz). The measured insertion loss is lower and the selectivity is higher than those in previous works due to the effects of the high Q resonators and quasi-elliptic structure adapting the negative coupling. The overall size of the fabricated filter is 4.6 mm ×2.8 mm× 0.2 mm.",
"title": ""
},
{
"docid": "b4c5ddab0cb3e850273275843d1f264f",
"text": "The increase of malware that are exploiting the Internet daily has become a serious threat. The manual heuristic inspection of malware analysis is no longer considered effective and efficient compared against the high spreading rate of malware. Hence, automated behavior-based malware detection using machine learning techniques is considered a profound solution. The behavior of each malware on an emulated (sandbox) environment will be automatically analyzed and will generate behavior reports. These reports will be preprocessed into sparse vector models for further machine learning (classification). The classifiers used in this research are k-Nearest Neighbors (kNN), Naïve Bayes, J48 Decision Tree, Support Vector Machine (SVM), and Multilayer Perceptron Neural Network (MlP). Based on the analysis of the tests and experimental results of all the 5 classifiers, the overall best performance was achieved by J48 decision tree with a recall of 95.9%, a false positive rate of 2.4%, a precision of 97.3%, and an accuracy of 96.8%. In summary, it can be concluded that a proof-of-concept based on automatic behavior-based malware analysis and the use of machine learning techniques could detect malware quite effectively and efficiently.",
"title": ""
},
{
"docid": "308aad476d26d631b04b3f249c507c72",
"text": "In this paper, the design of a Proportional-Integral-Derivative (PID) controller for the cruise control system has been proposed. The cruise control system, which is a highly nonlinear, has been linearized around the equilibrium point. The controller has been designed for the linearized model, by taking the dominant pole concept in the closed loop characteristic equation. The PID controller parameters, i.e. proportional, integral and derivative parameters have been tuned using Genetic Algorithm (GA). In this study, the performance of the controller has been compared with that of the conventional PID, state space and Fuzzy logic based controller. The simulation output reveals the superiority of the proposed controller in terms of maximum overshoot, peak time, rise time, settling time and steady state error. The sensitivity and complementary sensitivity analysis show the robust behaviour of the system with output disturbance and high-frequency noise rejection qualities. As a scope of further research, fractional order and 2-dof PID controller will be designed for this cruise control system and the performance will be compared with this design.",
"title": ""
},
{
"docid": "cdb252ec09b2cca79e1d4efa11722bd3",
"text": "Energy efficient communication is a fundamental problem in wireless ad-hoc and sensor networks. In this paper, we explore the feasibility of a distributed beamforming approach to this problem, with a cluster of distributed transmitters emulating a centralized antenna array so as to transmit a common message signal coherently to a distant base station. The potential SNR gains from beamforming are well-known. However, realizing these gains requires synchronization of the individual carrier signals in phase and frequency. In this paper we show that a large fraction of the beamforming gains can be realised even with imperfect synchronization corresponding to phase errors with moderately large variance. We present a master-slave architecture where a designated master transmitter coordinates the synchronization of other (slave) transmitters for beamforming. We observe that the transmitters can achieve distributed beamforming with minimal coordination with the base station using channel reciprocity. Thus, inexpensive local coordination with a master transmitter makes the expensive communication with a distant base station receiver more efficient. However, the duplexing constraints of the wireless channel place a fundamental limitation on the achievable accuracy of synchronization. We present a stochastic analysis that demonstrates the robustness of beamforming gains with imperfect synchronization, and demonstrate a tradeoff between synchronization overhead and beamforming gains. We also present simulation results for the phase errors that validate the analysis",
"title": ""
},
{
"docid": "41f6ee103e1b08b5fdee315af123d35e",
"text": "In this paper, an interactive holographic system, realized with the aim of creating, exchanging, discussing and disseminating cultural heritage information, is presented. By using low-cost and off-the-shelf devices, the system provides the visitors with a 'floating' computer generated representation of a virtual cultural artefact that, unlike the real one, can be examined in detail through a touchless natural interface. The proposed system is realized in such a way that it can be easily placed in a cultural exhibition without requiring any structural intervention. As such, it could represent a useful instrument complementary to a museum visit thanks to its capacity both to convey different types of digital cultural information and especially to allow the visitor to become an active actor, able to enjoy different perspectives and all the details of the artefact sharing her/his experience with other visitors. The paper describes the system modules and the hardware design to physically realize the pyramid, and details the user interface composed of two main actions designed to obtain a simple exploration of a virtual cultural heritage artefact.",
"title": ""
},
{
"docid": "d4709802b49d580de00a1926c08b1ae2",
"text": "We present a novel fine-tuning algorithm in a deep hybrid architecture for semisupervised text classification. During each increment of the online learning process, the fine-tuning algorithm serves as a top-down mechanism for pseudo-jointly modifying model parameters following a bottom-up generative learning pass. The resulting model, trained under what we call the Bottom-Up-Top-Down learning algorithm, is shown to outperform a variety of competitive models and baselines trained across a wide range of splits between supervised and unsupervised training data.",
"title": ""
},
{
"docid": "45d60590eeb7983c5f449719e51dd628",
"text": "Directly adding the knowledge triples obtained from open information extraction systems into a knowledge base is often impractical due to a vocabulary gap between natural language (NL) expressions and knowledge base (KB) representation. This paper aims at learning to map relational phrases in triples from natural-language-like statement to knowledge base predicate format. We train a word representation model on a vector space and link each NL relational pattern to the semantically equivalent KB predicate. Our mapping result shows not only high quality, but also promising coverage on relational phrases compared to previous research.",
"title": ""
},
{
"docid": "a41d93e550dc3d06940c5b5571d728a2",
"text": "Modern organizations (e.g., hospitals, social networks, government agencies) rely heavily on audit to detect and punish insiders who inappropriately access and disclose confidential information. Recent work on audit games models the strategic interaction between an auditor with a single audit resource and auditees as a Stackelberg game, augmenting associated well-studied security games with a configurable punishment parameter. We significantly generalize this audit game model to account for multiple audit resources where each resource is restricted to audit a subset of all potential violations, thus enabling application to practical auditing scenarios. We provide an FPTAS that computes an approximately optimal solution to the resulting non-convex optimization problem. The main technical novelty is in the design and correctness proof of an optimization transformation that enables the construction of this FPTAS. In addition, we experimentally demonstrate that this transformation significantly speeds up computation of solutions for a class of audit games and security games.",
"title": ""
},
{
"docid": "0081abb45db5d3e893ee1086d1680041",
"text": "`introduction Technologies are amplifying each other in a fusion of technologies across the physical digital and biological worlds. We are witnessing profound shifts across all industries market by the emergence of new business models, the disruption of incumbents and the reshaping of production, consumption, transportation and delivery systems. On the social front a paradigm shift is underway in how we work and communicate, as well as how we express, inform, and entertain our self. Decision makers are too often caught in traditional linear (non-disruptive) thinking or too absorbed by immediate concerns to think strategically about the forces of disruption and innovation shaping our future.",
"title": ""
},
{
"docid": "4d9a4cb23ad4ac56a3fbfece57fb6647",
"text": "Gene therapy refers to a rapidly growing field of medicine in which genes are introduced into the body to treat or prevent diseases. Although a variety of methods can be used to deliver the genetic materials into the target cells and tissues, modified viral vectors represent one of the more common delivery routes because of its transduction efficiency for therapeutic genes. Since the introduction of gene therapy concept in the 1970s, the field has advanced considerably with notable clinical successes being demonstrated in many clinical indications in which no standard treatment options are currently available. It is anticipated that the clinical success the field observed in recent years can drive requirements for more scalable, robust, cost effective, and regulatory-compliant manufacturing processes. This review provides a brief overview of the current manufacturing technologies for viral vectors production, drawing attention to the common upstream and downstream production process platform that is applicable across various classes of viral vectors and their unique manufacturing challenges as compared to other biologics. In addition, a case study of an industry-scale cGMP production of an AAV-based gene therapy product performed at 2,000 L-scale is presented. The experience and lessons learned from this largest viral gene therapy vector production run conducted to date as discussed and highlighted in this review should contribute to future development of commercial viable scalable processes for vial gene therapies.",
"title": ""
},
{
"docid": "7399a8096f56c46a20715b9f223d05bf",
"text": "Recently, Rao-Blackwellized particle filters (RBPF) have been introduced as an effective means to solve the simultaneous localization and mapping problem. This approach uses a particle filter in which each particle carries an individual map of the environment. Accordingly, a key question is how to reduce the number of particles. In this paper, we present adaptive techniques for reducing this number in a RBPF for learning grid maps. We propose an approach to compute an accurate proposal distribution, taking into account not only the movement of the robot, but also the most recent observation. This drastically decreases the uncertainty about the robot's pose in the prediction step of the filter. Furthermore, we present an approach to selectively carry out resampling operations, which seriously reduces the problem of particle depletion. Experimental results carried out with real mobile robots in large-scale indoor, as well as outdoor, environments illustrate the advantages of our methods over previous approaches",
"title": ""
},
{
"docid": "6d5c4881bfbe64cfe48242c01bdb6b1b",
"text": "The study sought to investigate the impact of birth order on procrastination among college students in Eldoret town. The study sought to achieve the following objectives: (1) to find out the prevalence of procrastination among college students in Eldoret town, (2) to find out the relationship between birth order on procrastination among college students in Eldoret town, (3) to investigate the relationship between age and procrastination among college students in Eldoret town and, (4) to investigate the relationship between gender and procrastination among college students in Eldoret town. The study adopted the ex post facto design. This single survey study purposively recruited 20 firstborns, 20 middle children, and 20 last-borns, from the KIM school of management, Eldoret campus. The sample comprised 30 male and 30 female respondents. Data was collected using a questionnaire. Hypothesis testing was done using the chi-square test. The findings showed that a total of 33 (55%) felt that birth order did affect their motivation for doing things while 27 (45%) felt that it did not affect them. , it is apparent that those who procrastinated were 28 (46.7%) while those who did not procrastinate were 32 (53.3%). Out of the 28 who postponed things that they could do at that moment, 21 said they always did so while 7 postponed sometimes. A total of 35 respondents indicated that they often gave up on a task whenever it got difficult while 25 (41.7%) opined that they never did so. The study concluded that there is a statistically significant relationship between procrastination and the respondents’ birth position. An examination of the crosstabulation above shows that most of those who procrastinated were last borns and a few middle borns. Hypothesis testing result confirmed that there was a statistically significant association between procrastination and age of the respondents. Of the 28 respondents who procrastinated, 16 (57.1%) were female while 12 (42.9%) were males.",
"title": ""
},
{
"docid": "b278b9e532600ea1da8c19e07807d899",
"text": "Humans are able to explain their reasoning. On the contrary, deep neural networks are not. This paper attempts to bridge this gap by introducing a new way to design interpretable neural networks for classification, inspired by physiological evidence of the human visual system’s inner-workings. This paper proposes a neural network design paradigm, termed InterpNET, which can be combined with any existing classification architecture to generate natural language explanations of the classifications. The success of the module relies on the assumption that the network’s computation and reasoning is represented in its internal layer activations. While in principle InterpNET could be applied to any existing classification architecture, it is evaluated via an image classification and explanation task. Experiments on a CUB bird classification and explanation dataset show qualitatively and quantitatively that the model is able to generate high-quality explanations. While the current state-of-the-art METEOR score on this dataset is 29.2, InterpNET achieves a much higher METEOR score of 37.9.",
"title": ""
},
{
"docid": "1a65f28239e34c7040f48210ed64f6be",
"text": "Blockchain is emerging as a game changing technology in many industries. Although it is increasingly capturing the business community’s attention, a comprehensive overview of commercially available applications is lacking to date. This paper aims to fill this gap. Firstly, we propose a structured approach to assess the application landscape of blockchain technologies. To build our framework, we relied on largely accepted classifications of blockchains, based on protocols, consensus mechanisms and ownership, as well as on the most cited application areas emerging from the literature. Secondly, we applied the framework on a database of 460 released blockchains. The analysis confirms a dominance of applications for cryptocurrencies, financial transactions and certification purposes, with a prevalence of permissionless platforms. We also found new application fields that go far beyond the seven initial areas addressed by the current body of knowledge, leading to some interesting takeaways for both practitioners and IS researchers.",
"title": ""
},
{
"docid": "503756888df43d745e4fb5051f8855fb",
"text": "The widespread use of email has raised serious privacy concerns. A critical issue is how to prevent email information leaks, i.e., when a message is accidentally addressed to non-desired recipients. This is an increasingly common problem that can severely harm individuals and corporations — for instance, a single email leak can potentially cause expensive law suits, brand reputation damage, negotiation setbacks and severe financial losses. In this paper we present the first attempt to solve this problem. We begin by redefining it as an outlier detection task, where the unintended recipients are the outliers. Then we combine real email examples (from the Enron Corpus) with carefully simulated leak-recipients to learn textual and network patterns associated with email leaks. This method was able to detect email leaks in almost 82% of the test cases, significantly outperforming all other baselines. More importantly, in a separate set of experiments we applied the proposed method to the task of finding real cases of email leaks. The result was encouraging: a variation of the proposed technique was consistently successful in finding two real cases of email leaks. Not only does this paper introduce the important problem of email leak detection, but also presents an effective solution that can be easily implemented in any email client — with no changes in the email server side.",
"title": ""
}
] |
scidocsrr
|
40e840268444894bdeea978e966df2b4
|
Semi-Supervised Events Clustering in News Retrieval
|
[
{
"docid": "2f23d51ffd54a6502eea07883709d016",
"text": "Named entity recognition (NER) is a popular domain of natural language processing. For this reason, many tools exist to perform this task. Amongst other points, they differ in the processing method they rely upon, the entity types they can detect, the nature of the text they can handle, and their input/output formats. This makes it difficult for a user to select an appropriate NER tool for a specific situation. In this article, we try to answer this question in the context of biographic texts. For this matter, we first constitute a new corpus by annotating 247 Wikipedia articles. We then select 4 publicly available, well known and free for research NER tools for comparison: Stanford NER, Illinois NET, OpenCalais NER WS and Alias-i LingPipe. We apply them to our corpus, assess their performances and compare them. When considering overall performances, a clear hierarchy emerges: Stanford has the best results, followed by LingPipe, Illionois and OpenCalais. However, a more detailed evaluation performed relatively to entity types and article categories highlights the fact their performances are diversely influenced by those factors. This complementarity opens an interesting perspective regarding the combination of these individual tools in order to improve performance.",
"title": ""
},
{
"docid": "1d32c84e539e10f99b92b54f2f71970b",
"text": "Stories are the most natural ways for people to deal with information about the changing world. They provide an efficient schematic structure to order and relate events according to some explanation. We describe (1) a formal model for representing storylines to handle streams of news and (2) a first implementation of a system that automatically extracts the ingredients of a storyline from news articles according to the model. Our model mimics the basic notions from narratology by adding bridging relations to timelines of events in relation to a climax point. We provide a method for defining the climax score of each event and the bridging relations between them. We generate a JSON structure for any set of news articles to represent the different stories they contain and visualize these stories on a timeline with climax and bridging relations. This visualization helps inspecting the validity of the generated structures.",
"title": ""
},
{
"docid": "8732cabe1c2dc0e8587b1a7e03039ef0",
"text": "With the overwhelming volume of online news available today, there is an increasing need for automatic techniques to analyze and present news to the user in a meaningful and efficient manner. Previous research focused only on organizing news stories by their topics into a flat hierarchy. We believe viewing a news topic as a flat collection of stories is too restrictive and inefficient for a user to understand the topic quickly. \n In this work, we attempt to capture the rich structure of events and their dependencies in a news topic through our event models. We call the process of recognizing events and their dependencies <i>event threading</i>. We believe our perspective of modeling the structure of a topic is more effective in capturing its semantics than a flat list of on-topic stories.\n We formally define the novel problem, suggest evaluation metrics and present a few techniques for solving the problem. Besides the standard word based features, our approaches take into account novel features such as temporal locality of stories for event recognition and time-ordering for capturing dependencies. Our experiments on a manually labeled data sets show that our models effectively identify the events and capture dependencies among them.",
"title": ""
}
] |
[
{
"docid": "1583ec87a3584e388139f0cba0fd0663",
"text": "Stephen Chan 1, Jeffrey Chu 1, Saralees Nadarajah 1,* and Joerg Osterrieder 2 1 School of Mathematics, University of Manchester, Manchester M13 9PL, UK; stephen.chan@manchester.ac.uk (S.C.); jeffrey.chu@manchester.ac.uk (J.C.) 2 School of Engineering, Zurich University of Applied Sciences, 8401 Winterthur, Switzerland; joerg.osterrieder@zhaw.ch * Correspondence: saralees.nadarjah@manchester.ac.uk; Tel.: +44-161-275-5912",
"title": ""
},
{
"docid": "f6647e82741dfe023ee5159bd6ac5be9",
"text": "3D scene understanding is important for robots to interact with the 3D world in a meaningful way. Most previous works on 3D scene understanding focus on recognizing geometrical or semantic properties of a scene independently. In this work, we introduce Data Associated Recurrent Neural Networks (DA-RNNs), a novel framework for joint 3D scene mapping and semantic labeling. DA-RNNs use a new recurrent neural network architecture for semantic labeling on RGB-D videos. The output of the network is integrated with mapping techniques such as KinectFusion in order to inject semantic information into the reconstructed 3D scene. Experiments conducted on real world and synthetic RGB-D videos demonstrate the superior performance of our method.",
"title": ""
},
{
"docid": "72555dce49865e6aa57574b5ce7d399b",
"text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Solving the Periodic Timetabling Problem using a Genetic Algorithm Diego Arenas, Remy Chevirer, Said Hanafi, Joaquin Rodriguez",
"title": ""
},
{
"docid": "8025f3c59b1c82c4ed90261b6a3cbb0c",
"text": "In this work, we present SAMAR, a system for Subjectivity and Sentiment Analysis (SSA) for Arabic social media genres. We investigate: how to best represent lexical information; whether standard features are useful; how to treat Arabic dialects; and, whether genre specific features have a measurable impact on performance. Our results suggest that we need individualized solutions for each domain and task, but that lemmatization is a feature in all the best approaches.",
"title": ""
},
{
"docid": "3d12dea4ae76c5af54578262996fe0bb",
"text": "We introduce a two-layer undirected graphical model, calle d a “Replicated Softmax”, that can be used to model and automatically extract low -dimensional latent semantic representations from a large unstructured collec ti n of documents. We present efficient learning and inference algorithms for thi s model, and show how a Monte-Carlo based method, Annealed Importance Sampling, c an be used to produce an accurate estimate of the log-probability the model a ssigns to test data. This allows us to demonstrate that the proposed model is able to g neralize much better compared to Latent Dirichlet Allocation in terms of b th the log-probability of held-out documents and the retrieval accuracy.",
"title": ""
},
{
"docid": "c592f46ffd8286660b9e233127cefea7",
"text": "According to literature, penetration pricing is the dominant pricing strategy for network effect markets. In this paper we show that diffusion of products in a network effect market does not only vary with the set of pricing strategies chosen by competing vendors but also strongly depends on the topological structure of the customers' network. This stresses the inappropriateness of classical \"installed base\" models (abstracting from this structure). Our simulations show that although competitive prices tend to be significantly higher in close topology markets, they lead to lower total profit and lower concentration of vendors' profit in these markets.",
"title": ""
},
{
"docid": "e68c73806392d10c3c3fd262f6105924",
"text": "Dynamic programming (DP) is a powerful paradigm for general, nonlinear optimal control. Computing exact DP solutions is in general only possible when the process states and the control actions take values in a small discrete set. In practice, it is necessary to approximate the solutions. Therefore, we propose an algorithm for approximate DP that relies on a fuzzy partition of the state space, and on a discretization of the action space. This fuzzy Q-iteration algorithmworks for deterministic processes, under the discounted return criterion. We prove that fuzzy Q -iteration asymptotically converges to a solution that lies within a bound of the optimal solution. A bound on the suboptimality of the solution obtained in a finite number of iterations is also derived. Under continuity assumptions on the dynamics and on the reward function, we show that fuzzyQ -iteration is consistent, i.e., that it asymptotically obtains the optimal solution as the approximation accuracy increases. These properties hold both when the parameters of the approximator are updated in a synchronous fashion, and when they are updated asynchronously. The asynchronous algorithm is proven to converge at least as fast as the synchronous one. The performance of fuzzy Q iteration is illustrated in a two-link manipulator control problem. © 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "23defa6eb8e275b34f36638b1013db1e",
"text": "The major complication of neck dissection and surgery at the posterior triangle of the neck is the shoulder syndrome, which results from spinal accessory nerve injury. Erb’s point (the great auricular nerve) and the point where the spinal accessory nerve enters the trapezius muscle are used to identify the spinal accessory nerve in the posterior nerve triangle. Measurements were made during unilateral neck dissections in 30 patients to identify the relationship between the spinal accessory nerve and great auricular nerve and the distance between the entrance of the accessory nerve in the trapezious and clavicle. The distance between the spinal accessory nerve and Erb’s point was ranging from 0 to 3.8 cm (mean 1.53 cm). The distance between the spinal accessory nerve entering the trapezious muscle and the clavicle was between 2.5 and 7.3 cm (mean 4.8 cm). Since the great auricular nerve (Erb’s point) represents a constantly identifiable landmark, it allows simple and reliable identification of the course of the spinal accessory nerve. Also useful, but of secondary importance in our opinion, is identifying the nerve at the point where it enters the trapezius muscle.",
"title": ""
},
{
"docid": "9a6ce56536585e54d3e15613b2fa1197",
"text": "This paper discusses the Urdu script characteristics, Urdu Nastaleeq and a simple but a novel and robust technique to recognize the printed Urdu script without a lexicon. Urdu being a family of Arabic script is cursive and complex script in its nature, the main complexity of Urdu compound/connected text is not its connections but the forms/shapes the characters change when it is placed at initial, middle or at the end of a word. The characters recognition technique presented here is using the inherited complexity of Urdu script to solve the problem. A word is scanned and analyzed for the level of its complexity, the point where the level of complexity changes is marked for a character, segmented and feeded to Neural Networks. A prototype of the system has been tested on Urdu text and currently achieves 93.4% accuracy on the average. Keywords— Cursive Script, OCR, Urdu.",
"title": ""
},
{
"docid": "06c8d56ecc9e92b106de01ad22c5a125",
"text": "Due to the reasonably acceptable performance of state-of-the-art object detectors, tracking-by-detection is a standard strategy for visual multi-object tracking (MOT). In particular, online MOT is more demanding due to its diverse applications in time-critical situations. A main issue of realizing online MOT is how to associate noisy object detection results on a new frame with previously being tracked objects. In this work, we propose a multi-object tracker method called CRF-boosting which utilizes a hybrid data association method based on online hybrid boosting facilitated by a conditional random field (CRF) for establishing online MOT. For data association, learned CRF is used to generate reliable low-level tracklets and then these are used as the input of the hybrid boosting. To do so, while existing data association methods based on boosting algorithms have the necessity of training data having ground truth information to improve robustness, CRF-boosting ensures sufficient robustness without such information due to the synergetic cascaded learning procedure. Further, a hierarchical feature association framework is adopted to further improve MOT accuracy. From experimental results on public datasets, we could conclude that the benefit of proposed hybrid approach compared to the other competitive MOT systems is noticeable.",
"title": ""
},
{
"docid": "7f6f39e46010238dca3da94f78a21add",
"text": "Labeling text data is quite time-consuming but essential for automatic text classification. Especially, manually creating multiple labels for each document may become impractical when a very large amount of data is needed for training multi-label text classifiers. To minimize the human-labeling efforts, we propose a novel multi-label active learning approach which can reduce the required labeled data without sacrificing the classification accuracy. Traditional active learning algorithms can only handle single-label problems, that is, each data is restricted to have one label. Our approach takes into account the multi-label information, and select the unlabeled data which can lead to the largest reduction of the expected model loss. Specifically, the model loss is approximated by the size of version space, and the reduction rate of the size of version space is optimized with Support Vector Machines (SVM). An effective label prediction method is designed to predict possible labels for each unlabeled data point, and the expected loss for multi-label data is approximated by summing up losses on all labels according to the most confident result of label prediction. Experiments on several real-world data sets (all are publicly available) demonstrate that our approach can obtain promising classification result with much fewer labeled data than state-of-the-art methods.",
"title": ""
},
{
"docid": "8b86b1a60595bc9557d796a3bf22772f",
"text": "Orchid plants are the members of Orchidaceae consisting of more than 25,000 species, which are distributed almost all over the world but more abundantly in the tropics. There are 177 genera, 1,125 species of orchids that originated in Thailand. Orchid plant collected from different nurseries showing Chlorotic and mosaic symptoms were observed on Vanda plants and it was suspected to infect with virus. So the symptomatic plants were tested for Cymbidium Mosaic Virus (CYMV), Odontoglossum ring spot virus (ORSV), Poty virus and Tomato Spotted Wilt Virus (TSWV) with Direct Antigen CoatingEnzyme Linked Immunosorbent Assay (DAC-ELISA) and further confirmed by Transmission Electron Microscopy (TEM). With the two methods CYMV and ORSV were detected positively from the suspected imported samples and low positive results were observed for Potex, Poty virus and Tomato Spotted Wilt Virus (TSWV).",
"title": ""
},
{
"docid": "337ba912e6c23324ba2e996808a4b060",
"text": "Comprehensive investigations were conducted on identifying integration efforts needed to adapt plasma dicing technology in BEOL pre-production environments. First, the authors identified the suitable process flows. Within the process flow, laser grooving before plasma dicing was shown to be a key unit process to control resulting die sidewall quality. Significant improvement on laser grooving quality has been demonstrated. Through these efforts, extremely narrow kerfs and near ideal dies strengths were achieved on bare Si dies. Plasma dicing process generates fluorinated polymer residues on both Si die sidewalls and under the topography overhangs on wafer surfaces, such as under the solder balls or microbumps. Certain areas cannot be cleaned by in-chamber post-treatments. Multiple cleaning methods demonstrated process capability and compatibility to singulated dies-on-tape handling. Lastly, although many methods exist commercially for backmetal and DAF separations, the authors' investigation is still inconclusive on one preferred process for post-plasma dicing die separations.",
"title": ""
},
{
"docid": "49158096fea4e317ac6e01e8ab9d0faf",
"text": "The discriminative approach to classification using deep neural networks has become the de-facto standard in various fields. Complementing recent reservations about safety against adversarial examples, we show that conventional discriminative methods can easily be fooled to provide incorrect labels with very high confidence to out of distribution examples. We posit that a generative approach is the natural remedy for this problem, and propose a method for classification using generative models. At training time, we learn a generative model for each class, while at test time, given an example to classify, we query each generator for its most similar generation, and select the class corresponding to the most similar one. Our approach is general and can be used with expressive models such as GANs and VAEs. At test time, our method accurately “knows when it does not know,” and provides resilience to out of distribution examples while maintaining competitive performance for standard examples.",
"title": ""
},
{
"docid": "f4639c2523687aa0d5bfdd840df9cfa4",
"text": "This established database of manufacturers and thei r design specification, determined the condition and design of the vehicle based on the perception and preference of jeepney drivers and passengers, and compared the pa rts of the jeepney vehicle using Philippine National Standards and international sta ndards. The study revealed that most jeepney manufacturing firms have varied specificati ons with regard to the capacity, dimensions and weight of the vehicle and similar sp ecification on the parts and equipment of the jeepney vehicle. Most of the jeepney drivers an d passengers want to improve, change and standardize the parts of the jeepney vehicle. The p arts of jeepney vehicles have similar specifications compared to the 4 out of 5 mandatory PNS and 22 out 32 UNECE Regulations applicable for jeepney vehicle. It is concluded tha t t e jeepney vehicle can be standardized in terms of design, safety and environmental concerns.",
"title": ""
},
{
"docid": "63fa6565372b88315ccac15d6d8f0695",
"text": "This paper proposes a novel method for the prediction of stock market closing price. Many researchers have contributed in this area of chaotic forecast in their ways. Data mining techniques can be used more in financial markets to make qualitative decisions for investors. Fundamental and technical analyses are the traditional approaches so far. ANN is a popular way to identify unknown and hidden patterns in data is used for share market prediction. A multilayered feed-forward neural network is built by using combination of data and textual mining. The Neural Network is trained on the stock quotes and extracted key phrases using the Backpropagation Algorithm which is used to predict share market closing price. This paper is an attempt to determine whether the BSE market news in combination with the historical quotes can efficiently help in the calculation of the BSE closing index for a given trading day.",
"title": ""
},
{
"docid": "9eee83bc5d6a9918a003d48351df04db",
"text": "Buffer overflow attacks are known to be the most common type of attacks that allow attackers to hijack a remote system by sending a specially crafted packet to a vulnerable network application running on it. A comprehensive defense strategy against such attacks should include (1) an attack detection component that determines the fact that a program is compromised and prevents the attack from further propagation, (2) an attack identification component that identifies attack packets so that one can block such packets in the future, and (3) an attack repair component that restores the compromised application’s state to that before the attack and allows it to continue running normally. Over the last decade, a significant amount of research has been vested in the systems that can detect buffer overflow attacks either statically at compile time or dynamically at run time. However, not much effort is spent on automated attack packet identification or attack repair. In this paper we present a unified solution to the three problems mentioned above. We implemented this solution as a GCC compiler extension called DIRA that transforms a program’s source code so that the resulting program can automatically detect any buffer overflow attack against it, repair the memory damage left by the attack, and identify the actual attack packet(s). We used DIRA to compile several network applications with known vulnerabilities and tested DIRA’s effectiveness by attacking the transformed programs with publicly available exploit code. The DIRA-compiled programs were always able to detect the attacks, identify the attack packets and most often repair themselves to continue normal execution. The average run-time performance overhead for attack detection and attack repair/identification is 4% and 25% respectively.",
"title": ""
},
{
"docid": "241f33036b6b60e826da63d2b95dddac",
"text": "Technology changes have been acknowledged as a critical factor in determining competitiveness of organization. Under such environment, the right anticipation of technology change has been of huge importance in strategic planning. To monitor technology change, technology forecasting (TF) is frequently utilized. In academic perspective, TF has received great attention for a long time. However, few researches have been conducted to provide overview of the TF literature. Even though some studies deals with review of TF research, they generally focused on type and characteristics of various TF, so hardly provides information about patterns of TF research and which TF method is used in certain technology industry. Accordingly, this study profile developments in and patterns of scholarly research in TF over time. Also, this study investigates which technology industries have used certain TF method and identifies their relationships. This study will help in understanding TF research trend and their application area. Keywords—Technology forecasting, technology industry, TF trend, technology trajectory.",
"title": ""
},
{
"docid": "1a968e8cf7c35cc6ed36de0a8cccd9f0",
"text": "Random walks have been successfully used to measure user or object similarities in collaborative filtering (CF) recommender systems, which is of high accuracy but low diversity. A key challenge of a CF system is that the reliably accurate results are obtained with the help of peers' recommendation, but the most useful individual recommendations are hard to be found among diverse niche objects. In this paper we investigate the direction effect of the random walk on user similarity measurements and find that the user similarity, calculated by directed random walks, is reverse to the initial node's degree. Since the ratio of small-degree users to large-degree users is very large in real data sets, the large-degree users' selections are recommended extensively by traditional CF algorithms. By tuning the user similarity direction from neighbors to the target user, we introduce a new algorithm specifically to address the challenge of diversity of CF and show how it can be used to solve the accuracy-diversity dilemma. Without relying on any context-specific information, we are able to obtain accurate and diverse recommendations, which outperforms the state-of-the-art CF methods. This work suggests that the random-walk direction is an important factor to improve the personalized recommendation performance.",
"title": ""
},
{
"docid": "b6944672201e6351c8797d8f8253e88b",
"text": "Markov Logic Networks join probabilistic modeling with first-order logic and have been shown to integrate well with the Semantic Web foundations. While several approaches have been devised to tackle the subproblems of rule mining, grounding, and inference, no comprehensive workflow has been proposed so far. In this paper, we fill this gap by introducing a framework called MANDOLIN, which implements a workflow for knowledge discovery specifically on RDF datasets. Our framework imports knowledge from referenced graphs, creates similarity relationships among similar literals, and relies on state-of-the-art techniques for rule mining, grounding, and inference computation. We show that our best configuration scales well and achieves at least comparable results with respect to other statistical-relational-learning algorithms on link prediction.",
"title": ""
}
] |
scidocsrr
|
af1fab102399874c4db81ab4ba22d91d
|
Newer Understanding of Specific Anatomic Targets in the Aging Face as Applied to Injectables: Aging Changes in the Craniofacial Skeleton and Facial Ligaments.
|
[
{
"docid": "36f3596c64ba154e725abe5ed5cc43df",
"text": "In this article, which focuses on concepts rather than techniques, the author emphasizes that the best predictor of a good facelift outcome is an already attractive face that has good enough tissue quality to maintain a result past the swelling stage. The author notes that too often, surgeons gravitate toward a particular facial support technique and use it all the time, to often unsatisfactory results. He singles out different areas (the brows, the tear trough, the cheeks, and so forth) and shows how the addition of volume may give results better than traditional methods. As he points out, a less limited and ritualistic approach to the face seems to be how cosmetic surgery is evolving; all factors that might make a face better are reasonable to entertain.",
"title": ""
}
] |
[
{
"docid": "dc2770a8318dd4aa1142efebe5547039",
"text": "The purpose of this study was to describe how reaching onset affects the way infants explore objects and their own bodies. We followed typically developing infants longitudinally from 2 through 5 months of age. At each visit we coded the behaviors infants performed with their hand when an object was attached to it versus when the hand was bare. We found increases in the performance of most exploratory behaviors after the emergence of reaching. These increases occurred both with objects and with bare hands. However, when interacting with objects, infants performed the same behaviors they performed on their bare hands but they performed them more often and in unique combinations. The results support the tenets that: (1) the development of object exploration begins in the first months of life as infants learn to selectively perform exploratory behaviors on their bodies and objects, (2) the onset of reaching is accompanied by significant increases in exploration of both objects and one's own body, (3) infants adapt their self-exploratory behaviors by amplifying their performance and combining them in unique ways to interact with objects.",
"title": ""
},
{
"docid": "7f65b9d7d07eee04405fc7102bd51f71",
"text": "Researchers tend to cite highly cited articles, but how these highly cited articles influence the citing articles has been underexplored. This paper investigates how one highly cited essay, Hirsch’s “h-index” article (H-article) published in 2005, has been cited by other articles. Content-based citation analysis is applied to trace the dynamics of the article’s impact changes from 2006 to 2014. The findings confirm that citation context captures the changing impact of the H-article over time in several ways. In the first two years, average citation mention of H-article increased, yet continued to decline with fluctuation until 2014. In contrast with citation mention, average citation count stayed the same. The distribution of citation location over time also indicates three phases of the H-article “Discussion,” “Reputation,” and “Adoption” we propose in this study. Based on their locations in the citing articles and their roles in different periods, topics of citation context shifted gradually when an increasing number of other articles were co-mentioned with the H-article in the same sentences. These outcomes show that the impact of the H-article manifests in various ways within the content of these citing articles that continued to shift in nine years, data that is not captured by traditional means of citation analysis that do not weigh citation impacts over time.",
"title": ""
},
{
"docid": "bdb4aba2b34731ffdf3989d6d1186270",
"text": "In order to push the performance on realistic computer vision tasks, the number of classes in modern benchmark datasets has significantly increased in recent years. This increase in the number of classes comes along with increased ambiguity between the class labels, raising the question if top-1 error is the right performance measure. In this paper, we provide an extensive comparison and evaluation of established multiclass methods comparing their top-k performance both from a practical as well as from a theoretical perspective. Moreover, we introduce novel top-k loss functions as modifications of the softmax and the multiclass SVM losses and provide efficient optimization schemes for them. In the experiments, we compare on various datasets all of the proposed and established methods for top-k error optimization. An interesting insight of this paper is that the softmax loss yields competitive top-k performance for all k simultaneously. For a specific top-k error, our new top-k losses lead typically to further improvements while being faster to train than the softmax.",
"title": ""
},
{
"docid": "4026a27bedea22a0115912cc1a384bf2",
"text": "This brief presents an ultralow-voltage multistage rectifier built with standard threshold CMOS for energy-harvesting applications. A threshold-compensated diode (TCD) is developed to minimize the forward voltage drop while maintaining low reverse leakage flow. In addition, an interstage compensation scheme is proposed that enables efficient power conversion at input amplitudes below the diode threshold. The new rectifier also features an inherent temperature and process compensation mechanism, which is achieved by precisely tracking the diode threshold by an auxiliary dummy. Although the design is optimized for an ac input at 13.56 MHz, the presented enhancement techniques are also applicable for low- or ultrahigh-frequency energy scavengers. The rectifier prototype is fabricated in a 0.35-μm four-metal two-poly standard CMOS process with the worst-case threshold voltage of 600 mV/- 780 mV for nMOS/pMOS, respectively. With a 13.56 MHz input of a 500 mV amplitude, the rectifier is able to deliver more than 35 μW at 2.5 V VDD, and the measured deviation in the output voltage is as low as 180 mV over 100°C for a cascade of ten TCDs.",
"title": ""
},
{
"docid": "4f90f6a836b775e1c7026bff7241a94e",
"text": "The Solar Shirt is a wearable computing design concept and demo in the area of sustainable and ecological design. The Solar Shirt showcases a concept, which detects the level of noise pollution in the wearer's environment and illustrates it with a garment-integrated display. In addition, the design concept utilizes printed electronic solar cells as part of the garment design, illustrating a design vision towards zero energy wearable computing. The Solar Shirt uses reindeer leather as its main material, giving a soft and luxurious feeling to the garment. The material selections and the style of the garment derive their inspiration from Arctic Design, reflecting the purity of nature and the simplicity and silence of a snowy world.",
"title": ""
},
{
"docid": "cd7210c8c9784bdf56fe72acb4f9e8e2",
"text": "Many-objective (four or more objectives) optimization problems pose a great challenge to the classical Pareto-dominance based multi-objective evolutionary algorithms (MOEAs), such as NSGA-II and SPEA2. This is mainly due to the fact that the selection pressure based on Pareto-dominance degrades severely with the number of objectives increasing. Very recently, a reference-point based NSGA-II, referred as NSGA-III, is suggested to deal with many-objective problems, where the maintenance of diversity among population members is aided by supplying and adaptively updating a number of well-spread reference points. However, NSGA-III still relies on Pareto-dominance to push the population towards Pareto front (PF), leaving room for the improvement of its convergence ability. In this paper, an improved NSGA-III procedure, called θ-NSGA-III, is proposed, aiming to better tradeoff the convergence and diversity in many-objective optimization. In θ-NSGA-III, the non-dominated sorting scheme based on the proposed θ-dominance is employed to rank solutions in the environmental selection phase, which ensures both convergence and diversity. Computational experiments have shown that θ-NSGA-III is significantly better than the original NSGA-III and MOEA/D on most instances no matter in convergence and overall performance.",
"title": ""
},
{
"docid": "77278e6ba57e82c88f66bd9155b43a50",
"text": "Up to the time when a huge corruption scandal, popularly labeled tangentopoli”(bribe city), brought down the political establishment that had ruled Italy for several decades, that country had reported one of the largest shares of capital spending in GDP among the OECD countries. After the scandal broke out and several prominent individuals were sent to jail, or even committed suicide, capital spending fell sharply. The fall seems to have been caused by a reduction in the number of capital projects being undertaken and, perhaps more importantly, by a sharp fall in the costs of the projects still undertaken. Information released by Transparency International (TI) reports that, within the space of two or three years, in the city of Milan, the city where the scandal broke out in the first place, the cost of city rail links fell by 52 percent, the cost of one kilometer of subway fell by 57 percent, and the budget for the new airport terminal was reduced by 59 percent to reflect the lower construction costs. Although one must be aware of the logical fallacy of post hoc, ergo propter hoc, the connection between the two events is too strong to be attributed to a coincidence. In fact, this paper takes the view that it could not have been a coincidence.",
"title": ""
},
{
"docid": "e5f6d7ed8d2dbf0bc2cde28e9c9e129b",
"text": "Change detection is the process of finding out difference between two images taken at two different times. With the help of remote sensing the . Here we will try to find out the difference of the same image taken at different times. here we use mean ratio and log ratio to find out the difference in the images. Log is use to find background image and fore ground detected by mean ratio. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates the information about spatial context in a novel fuzzy way for the purpose of enhancing the changed information and of reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and gains a better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibited lower error than its preexistences.",
"title": ""
},
{
"docid": "48aea9478d2a9f1edb108202bd65e8dd",
"text": "The popularity of mobile devices and location-based services (LBSs) has raised significant concerns regarding the location privacy of their users. A popular approach to protect location privacy is anonymizing the users of LBS systems. In this paper, we introduce an information-theoretic notion for location privacy, which we call perfect location privacy. We then demonstrate how anonymization should be used by LBS systems to achieve the defined perfect location privacy. We study perfect location privacy under two models for user movements. First, we assume that a user’s current location is independent from her past locations. Using this independent identically distributed (i.i.d.) model, we show that if the pseudonym of the user is changed before <inline-formula> <tex-math notation=\"LaTeX\">$O\\left({n^{\\frac {2}{r-1}}}\\right)$ </tex-math></inline-formula> observations are made by the adversary for that user, then the user has perfect location privacy. Here, <inline-formula> <tex-math notation=\"LaTeX\">$n$ </tex-math></inline-formula> is the number of the users in the network and <inline-formula> <tex-math notation=\"LaTeX\">$r$ </tex-math></inline-formula> is the number of all possible locations. Next, we model users’ movements using Markov chains to better model real-world movement patterns. We show that perfect location privacy is achievable for a user if the user’s pseudonym is changed before <inline-formula> <tex-math notation=\"LaTeX\">$O\\left({n^{\\frac {2}{|E|-r}}}\\right)$ </tex-math></inline-formula> observations are collected by the adversary for that user, where <inline-formula> <tex-math notation=\"LaTeX\">$|E|$ </tex-math></inline-formula> is the number of edges in the user’s Markov chain model.",
"title": ""
},
{
"docid": "0d6d2413cbaaef5354cf2bcfc06115df",
"text": "Bibliometric and “tech mining” studies depend on a crucial foundation—the search strategy used to retrieve relevant research publication records. Database searches for emerging technologies can be problematic in many respects, for example the rapid evolution of terminology, the use of common phraseology, or the extent of “legacy technology” terminology. Searching on such legacy terms may or may not pick up R&D pertaining to the emerging technology of interest. A challenge is to assess the relevance of legacy terminology in building an effective search model. Common-usage phraseology additionally confounds certain domains in which broader managerial, public interest, or other considerations are prominent. In contrast, searching for highly technical topics is relatively straightforward. In setting forth to analyze “Big Data,” we confront all three challenges—emerging terminology, common usage phrasing, and intersecting legacy technologies. In response, we have devised a systematic methodology to help identify research relating to Big Data. This methodology uses complementary search approaches, starting with a Boolean search model and subsequently employs contingency term sets to further refine the selection. The four search approaches considered are: (1) core lexical query, (2) expanded lexical query, (3) specialized journal search, and (4) cited reference analysis. Of special note here is the use of a “Hit-Ratio” that helps distinguish Big Data elements from less relevant legacy technology terms. We believe that such a systematic search development positions us to do meaningful analyses of Big Data research patterns, connections, and trajectories. Moreover, we suggest that such a systematic search approach can help formulate more replicable searches with high recall and satisfactory precision for other emerging technology studies.",
"title": ""
},
{
"docid": "6572c7d33fcb3f1930a41b4b15635ffe",
"text": "Neurons in area MT (V5) are selective for the direction of visual motion. In addition, many are selective for the motion of complex patterns independent of the orientation of their components, a behavior not seen in earlier visual areas. We show that the responses of MT cells can be captured by a linear-nonlinear model that operates not on the visual stimulus, but on the afferent responses of a population of nonlinear V1 cells. We fit this cascade model to responses of individual MT neurons and show that it robustly predicts the separately measured responses to gratings and plaids. The model captures the full range of pattern motion selectivity found in MT. Cells that signal pattern motion are distinguished by having convergent excitatory input from V1 cells with a wide range of preferred directions, strong motion opponent suppression and a tuned normalization that may reflect suppressive input from the surround of V1 cells.",
"title": ""
},
{
"docid": "38190bd8f531a7e165a3d786b4bd900c",
"text": "We define a second-order neural network stochastic gradient training algorithm whose block-diagonal structure effectively amounts to normalizing the unit activations. Investigating why this algorithm lacks in robustness then reveals two interesting insights. The first insight suggests a new way to scale the stepsizes, clarifying popular algorithms such as RMSProp as well as old neural network tricks such as fanin stepsize scaling. The second insight stresses the practical importance of dealing with fast changes of the curvature of the cost.",
"title": ""
},
{
"docid": "0a5ae1eb45404d6a42678e955c23116c",
"text": "This study assessed the validity of the Balance Scale by examining: how Scale scores related to clinical judgements and self-perceptions of balance, laboratory measures of postural sway and external criteria reflecting balancing ability; if scores could predict falls in the elderly; and how they related to motor and functional performance in stroke patients. Elderly residents (N = 113) were assessed for functional performance and balance regularly over a nine-month period. Occurrence of falls was monitored for a year. Acute stroke patients (N = 70) were periodically rated for functional independence, motor performance and balance for over three months. Thirty-one elderly subjects were assessed by clinical and laboratory indicators reflecting balancing ability. The Scale correlated moderately with caregiver ratings, self-ratings and laboratory measures of sway. Differences in mean Scale scores were consistent with the use of mobility aids by elderly residents and differentiated stroke patients by location of follow-up. Balance scores predicted the occurrence of multiple falls among elderly residents and were strongly correlated with functional and motor performance in stroke patients.",
"title": ""
},
{
"docid": "60fe0b363310d7407a705e3c1037aa15",
"text": "AIMS\nThe aim was to investigate the biosorption of chromium, nickel and iron from metallurgical effluents, produced by a steel foundry, using a strain of Aspergillus terreus immobilized in polyurethane foam.\n\n\nMETHODS AND RESULTS\nA. terreus UFMG-F01 was immobilized in polyurethane foam and subjected to biosorption tests with metallurgical effluents. Maximal metal uptake values of 164.5 mg g(-1) iron, 96.5 mg g(-1) chromium and 19.6 mg g(-1) nickel were attained in a culture medium containing 100% of effluent stream supplemented with 1% of glucose, after 6 d of incubation.\n\n\nCONCLUSIONS\nMicrobial populations in metal-polluted environments include fungi that have adapted to otherwise toxic concentrations of heavy metals and have become metal resistant. In this work, a strain of A. terreus was successfully used as a metal biosorbent for the treatment of metallurgical effluents.\n\n\nSIGNIFICANCE AND IMPACT OF THE STUDY\nA. terreus UFMG-F01 was shown to have good biosorption properties with respect to heavy metals. The low cost and simplicity of this technique make its use ideal for the treatment of effluents from steel foundries.",
"title": ""
},
{
"docid": "c68cfa9402dcc2a79e7ab2a7499cc683",
"text": "Stereo-pair images obtained from two cameras can be used to compute three-dimensional (3D) world coordinates of a point using triangulation. However, to apply this method, camera calibration parameters for each camera need to be experimentally obtained. Camera calibration is a rigorous experimental procedure in which typically 12 parameters are to be evaluated for each camera. The general camera model is often such that the system becomes nonlinear and requires good initial estimates to converge to a solution. We propose that, for stereo vision applications in which real-world coordinates are to be evaluated, arti® cial neural networks be used to train the system such that the need for camera calibration is eliminated. The training set for our neural network consists of a variety of stereo-pair images and corresponding 3D world coordinates. We present the results obtained on our prototype mobile robot that employs two cameras as its sole sensors and navigates through simple regular obstacles in a high-contrast environment. We observe that the percentage errors obtained from our set-up are comparable with those obtained through standard camera calibration techniques and that the system is accurate enough for most machine-vision applications.",
"title": ""
},
{
"docid": "1c6114188e01fb6c06c2ecdb1ced1565",
"text": "Social Virtual Reality based Learning Environments (VRLEs) such as vSocial render instructional content in a three-dimensional immersive computer experience for training youth with learning impediments. There are limited prior works that explored attack vulnerability in VR technology, and hence there is a need for systematic frameworks to quantify risks corresponding to security, privacy, and safety (SPS) threats. The SPS threats can adversely impact the educational user experience and hinder delivery of VRLE content. In this paper, we propose a novel risk assessment framework that utilizes attack trees to calculate a risk score for varied VRLE threats with rate and duration of threats as inputs. We compare the impact of a well-constructed attack tree with an adhoc attack tree to study the trade-offs between overheads in managing attack trees, and the cost of risk mitigation when vulnerabilities are identified. We use a vSocial VRLE testbed in a case study to showcase the effectiveness of our framework and demonstrate how a suitable attack tree formalism can result in a more safer, privacy-preserving and secure VRLE system.",
"title": ""
},
{
"docid": "f53d13eeccff0048fc96e532a52a2154",
"text": "The physical principles underlying some current biomedical applications of magnetic nanoparticles are reviewed. Starting from well-known basic concepts, and drawing on examples from biology and biomedicine, the relevant physics of magnetic materials and their responses to applied magnetic fields are surveyed. The way these properties are controlled and used is illustrated with reference to (i) magnetic separation of labelled cells and other biological entities; (ii) therapeutic drug, gene and radionuclide delivery; (iii) radio frequency methods for the catabolism of tumours via hyperthermia; and (iv) contrast enhancement agents for magnetic resonance imaging applications. Future prospects are also discussed.",
"title": ""
},
{
"docid": "6dcb885d26ca419925a094ade17a4cf7",
"text": "This paper presents two different Ku-Band Low-Profile antenna concepts for Mobile Satellite Communications. The antennas are based on low-cost hybrid mechanical-electronic steerable solutions but, while the first one allows a broadband reception of a satellite signal (Receive-only antenna concept), the second one provides transmit and receive functions for a bi-directional communication link between the satellite and the mobile user terminal (Transmit-Receive antenna). Both examples are suitable for integration in land vehicles and aircrafts.",
"title": ""
},
{
"docid": "601488a8e576d465a0bddd65a937c5c8",
"text": "Human activity recognition is an area of growing interest facilitated by the current revolution in body-worn sensors. Activity recognition allows applications to construct activity profiles for each subject which could be used effectively for healthcare and safety applications. Automated human activity recognition systems face several challenges such as number of sensors, sensor precision, gait style differences, and others. This work proposes a machine learning system to automatically recognise human activities based on a single body-worn accelerometer. The in-house collected dataset contains 3D acceleration of 50 subjects performing 10 different activities. The dataset was produced to ensure robustness and prevent subject-biased results. The feature vector is derived from simple statistical features. The proposed method benefits from RGB-to-YIQ colour space transform as kernel to transform the feature vector into more discriminable features. The classification technique is based on an adaptive boosting ensemble classifier. The proposed system shows consistent classification performance up to 95% accuracy among the 50 subjects.",
"title": ""
},
{
"docid": "b91cf13547266547b14e5520e3a12749",
"text": "The objective of this article is to review radio frequency identification (RFID) technology, its developments on RFID transponders, design and operating principles, so that end users can benefit from knowing which transponder meets their requirements. In this article, RFID system definition, RFID transponder architecture and RFID transponder classification based on a comprehensive literature review on the field of research are presented. Detailed descriptions of these tags are also presented, as well as an in-house developed semiactive tag in a compact package.",
"title": ""
}
] |
scidocsrr
|
d33e93a153dd2432237d19155e8f85b0
|
Effective Gaussian mixture learning for video background subtraction
|
[
{
"docid": "6851e4355ab4825b0eb27ac76be2329f",
"text": "Segmentation of novel or dynamic objects in a scene, often referred to as “background subtraction” or “foreground segmentation”, is a critical early in step in most computer vision applications in domains such as surveillance and human-computer interaction. All previously described, real-time methods fail to handle properly one or more common phenomena, such as global illumination changes, shadows, inter-reflections, similarity of foreground color to background, and non-static backgrounds (e.g. active video displays or trees waving in the wind). The recent advent of hardware and software for real-time computation of depth imagery makes better approaches possible. We propose a method for modeling the background that uses per-pixel, time-adaptive, Gaussian mixtures in the combined input space of depth and luminance-invariant color. This combination in itself is novel, but we further improve it by introducing the ideas of 1) modulating the background model learning rate based on scene activity, and 2) making colorbased segmentation criteria dependent on depth observations. Our experiments show that the method possesses much greater robustness to problematic phenomena than the prior state-of-the-art, without sacrificing real-time performance, making it well-suited for a wide range of practical applications in video event detection and recognition.",
"title": ""
}
] |
[
{
"docid": "6055957e5f48c5f82afcfa641176b759",
"text": "This article presents the design of a low cost fully active phased array antenna with specific emphasis on the realization of an elementary radiating cell. The phased array antenna is designed for mobile satellite services and dedicated for automotive applications. Details on the radiating element design as well as its implementation in a multilayer's build-up are presented and discussed. Results of the measurements and characterization of the elementary radiating cell are also presented and discussed. An outlook of the next steps in the antenna realization concludes this paper.",
"title": ""
},
{
"docid": "cf97c276a503968d849f45f4d1614bfd",
"text": "Social network platforms can archive data produced by their users. Then, the archived data is used to provide better services to the users. One of the services that these platforms provide is the recommendation service. Recommendation systems can predict the future preferences of users using various different techniques. One of the most popular technique for recommendation is matrix-factorization, which uses lowrank approximation of input data. Similarly, word embedding methods from natural language processing literature learn lowdimensional vector space representation of input elements. Noticing the similarities among word embedding and matrix factorization techniques and based on the previous works that apply techniques from text processing to recommendation, Word2Vec’s skip-gram technique is employed to make recommendations. The aim of this work is to make recommendation on next check-in venues. Unlike previous works that use Word2Vec for recommendation, in this work non-textual features are used. For the experiments, a Foursquare check-in dataset is used. The results show that use of vector space representations of items modeled by skip-gram technique is promising for making recommendations. Keywords—Recommendation systems, Location based social networks, Word embedding, Word2Vec, Skip-gram technique",
"title": ""
},
{
"docid": "5b110a3e51de3489168e7edca81b5f3e",
"text": "This paper is a review of research in product development, which we define as the transformation of a market opportunity into a product available for sale. Our review is broad, encompassing work in the academic fields of marketing, operations management, and engineering design. The value of this breadth is in conveying the shape of the entire research landscape. We focus on product development projects within a single firm. We also devote our attention to the development of physical goods, although much of the work we describe applies to products of all kinds. We look inside the “black box” of product development at the fundamental decisions that are made by intention or default. In doing so, we adopt the perspective of product development as a deliberate business process involving hundreds of decisions, many of which can be usefully supported by knowledge and tools. We contrast this approach to prior reviews of the literature, which tend to examine the importance of environmental and contextual variables, such as market growth rate, the competitive environment, or the level of top-management support. (Product Development Decisions; Survey; Literature Review)",
"title": ""
},
{
"docid": "adac9cbc59aea46821aaebad3bcc1772",
"text": "Multidetector computed tomography (MDCT) has emerged as an effective imaging technique to augment forensic autopsy. Postmortem change and decomposition are always present at autopsy and on postmortem MDCT because they begin to occur immediately upon death. Consequently, postmortem change and decomposition on postmortem MDCT should be recognized and not mistaken for a pathologic process or injury. Livor mortis increases the attenuation of vasculature and dependent tissues on MDCT. It may also produce a hematocrit effect with fluid levels in the large caliber blood vessels and cardiac chambers from dependent layering erythrocytes. Rigor mortis and algor mortis have no specific MDCT features. In contrast, decomposition through autolysis, putrefaction, and insect and animal predation produce dramatic alterations in the appearance of the body on MDCT. Autolysis alters the attenuation of organs. The most dramatic autolytic changes on MDCT are seen in the brain where cerebral sulci and ventricles are effaced and gray-white matter differentiation is lost almost immediately after death. Putrefaction produces a pattern of gas that begins with intravascular gas and proceeds to gaseous distension of all anatomic spaces, organs, and soft tissues. Knowledge of the spectrum of postmortem change and decomposition is an important component of postmortem MDCT interpretation.",
"title": ""
},
{
"docid": "49c1924821c326f803cefff58ca7ab67",
"text": "Dynamic binary analysis is a prevalent and indispensable technique in program analysis. While several dynamic binary analysis tools and frameworks have been proposed, all suffer from one or more of: prohibitive performance degradation, a semantic gap between the analysis code and the program being analyzed, architecture/OS specificity, being user-mode only, and lacking APIs. We present DECAF, a virtual machine based, multi-target, whole-system dynamic binary analysis framework built on top of QEMU. DECAF provides Just-In-Time Virtual Machine Introspection and a plugin architecture with a simple-to-use event-driven programming interface. DECAF implements a new instruction-level taint tracking engine at bit granularity, which exercises fine control over the QEMU Tiny Code Generator (TCG) intermediate representation to accomplish on-the-fly optimizations while ensuring that the taint propagation is sound and highly precise. We perform a formal analysis of DECAF's taint propagation rules to verify that most instructions introduce neither false positives nor false negatives. We also present three platform-neutral plugins—Instruction Tracer, Keylogger Detector, and API Tracer, to demonstrate the ease of use and effectiveness of DECAF in writing cross-platform and system-wide analysis tools. Implementation of DECAF consists of 9,550 lines of C++ code and 10,270 lines of C code and we evaluate DECAF using CPU2006 SPEC benchmarks and show average overhead of 605 percent for system wide tainting and 12 percent for VMI.",
"title": ""
},
{
"docid": "699f4b29e480d89b158326ec4c778f7b",
"text": "Much attention is currently being paid in both the academic and practitioner literatures to the value that organisations could create through the use of big data and business analytics (Gillon et al, 2012; Mithas et al, 2013). For instance, Chen et al (2012, p. 1166–1168) suggest that business analytics and related technologies can help organisations to ‘better understand its business and markets’ and ‘leverage opportunities presented by abundant data and domain-specific analytics’. Similarly, LaValle et al (2011, p. 22) report that topperforming organisations ‘make decisions based on rigorous analysis at more than double the rate of lower performing organisations’ and that in such organisations analytic insight is being used to ‘guide both future strategies and day-to-day operations’. We argue here that while there is some evidence that investments in business analytics can create value, the thesis that ‘business analytics leads to value’ needs deeper analysis. In particular, we argue here that the roles of organisational decision-making processes, including resource allocation processes and resource orchestration processes (Helfat et al, 2007; Teece, 2009), need to be better understood in order to understand how organisations can create value from the use of business analytics. Specifically, we propose that the firstorder effects of business analytics are likely to be on decision-making processes and that improvements in organisational performance are likely to be an outcome of superior decision-making processes enabled by business analytics. This paper is set out as follows. Below, we identify prior research traditions in the Information Systems (IS) literature that discuss the potential of data and analytics to create value. This is to put into perspective the current excitement around ‘analytics’ and ‘big data’, and to position those topics within prior research traditions. We then draw on a number of existing literatures to develop a research agenda to understand the relationship between business analytics, decision-making processes and organisational performance. Finally, we discuss how the three papers in this Special Issue advance the research agenda. Disciplines Engineering | Science and Technology Studies Publication Details Sharma, R., Mithas, S. and Kankanhalli, A. (2014). Transforming decision-making processes: a research agenda for understanding the impact of business analytics on organisations. European Journal of Information Systems, 23 (4), 433-441. This journal article is available at Research Online: http://ro.uow.edu.au/eispapers/3231 EJISEditorialFinal 16 May 2014 RS.docx 1 of 17",
"title": ""
},
{
"docid": "e13fc2c9f5aafc6c8eb1909592c07a70",
"text": "We introduce DropAll, a generalization of DropOut [1] and DropConnect [2], for regularization of fully-connected layers within convolutional neural networks. Applying these methods amounts to subsampling a neural network by dropping units. Training with DropOut, a randomly selected subset of activations are dropped, when training with DropConnect we drop a randomly subsets of weights. With DropAll we can perform both methods. We show the validity of our proposal by improving the classification error of networks trained with DropOut and DropConnect, on a common image classification dataset. To improve the classification, we also used a new method for combining networks, which was proposed in [3].",
"title": ""
},
{
"docid": "a4c8e2938b976a37f38efc1ce5bc6286",
"text": "As a classic statistical model of 3D facial shape and texture, 3D Morphable Model (3DMM) is widely used in facial analysis, e.g., model fitting, image synthesis. Conventional 3DMM is learned from a set of well-controlled 2D face images with associated 3D face scans, and represented by two sets of PCA basis functions. Due to the type and amount of training data, as well as the linear bases, the representation power of 3DMM can be limited. To address these problems, this paper proposes an innovative framework to learn a nonlinear 3DMM model from a large set of unconstrained face images, without collecting 3D face scans. Specifically, given a face image as input, a network encoder estimates the projection, shape and texture parameters. Two decoders serve as the nonlinear 3DMM to map from the shape and texture parameters to the 3D shape and texture, respectively. With the projection parameter, 3D shape, and texture, a novel analytically-differentiable rendering layer is designed to reconstruct the original input face. The entire network is end-to-end trainable with only weak supervision. We demonstrate the superior representation power of our nonlinear 3DMM over its linear counterpart, and its contribution to face alignment and 3D reconstruction.",
"title": ""
},
{
"docid": "2ba69997f51aa61ffeccce33b2e69054",
"text": "We consider the problem of transferring policies to the real world by training on a distribution of simulated scenarios. Rather than manually tuning the randomization of simulations, we adapt the simulation parameter distribution using a few real world roll-outs interleaved with policy training. In doing so, we are able to change the distribution of simulations to improve the policy transfer by matching the policy behavior in simulation and the real world. We show that policies trained with our method are able to reliably transfer to different robots in two real world tasks: swing-peg-in-hole and opening a cabinet drawer. The video of our experiments can be found at https: //sites.google.com/view/simopt.",
"title": ""
},
{
"docid": "0b17e1cbfa3452ba2ff7c00f4e137aef",
"text": "Brain-computer interfaces (BCIs) promise to provide a novel access channel for assistive technologies, including augmentative and alternative communication (AAC) systems, to people with severe speech and physical impairments (SSPI). Research on the subject has been accelerating significantly in the last decade and the research community took great strides toward making BCI-AAC a practical reality to individuals with SSPI. Nevertheless, the end goal has still not been reached and there is much work to be done to produce real-world-worthy systems that can be comfortably, conveniently, and reliably used by individuals with SSPI with help from their families and care givers who will need to maintain, setup, and debug the systems at home. This paper reviews reports in the BCI field that aim at AAC as the application domain with a consideration on both technical and clinical aspects.",
"title": ""
},
{
"docid": "ee0c8eafd5804b215b34a443d95259d4",
"text": "Fog computing has emerged as a promising technology that can bring the cloud applications closer to the physical IoT devices at the network edge. While it is widely known what cloud computing is, and how data centers can build the cloud infrastructure and how applications can make use of this infrastructure, there is no common picture on what fog computing and a fog node, as its main building block, really is. One of the first attempts to define a fog node was made by Cisco, qualifying a fog computing system as a “mini-cloud,” located at the edge of the network and implemented through a variety of edge devices, interconnected by a variety, mostly wireless, communication technologies. Thus, a fog node would be the infrastructure implementing the said mini-cloud. Other proposals have their own definition of what a fog node is, usually in relation to a specific edge device, a specific use case or an application. In this paper, we first survey the state of the art in technologies for fog computing nodes as building blocks of fog computing, paying special attention to the contributions that analyze the role edge devices play in the fog node definition. We summarize and compare the concepts, lessons learned from their implementation, and show how a conceptual framework is emerging towards a unifying fog node definition. We focus on core functionalities of a fog node as well as in the accompanying opportunities and challenges towards their practical realization in the near future.",
"title": ""
},
{
"docid": "5aebbb08b705d98dbde9d3efe4affdf8",
"text": "The benefit of localized features within the regular domain has given rise to the use of Convolutional Neural Networks (CNNs) in machine learning, with great proficiency in the image classification. The use of CNNs becomes problematic within the irregular spatial domain due to design and convolution of a kernel filter being non-trivial. One solution to this problem is to utilize graph signal processing techniques and the convolution theorem to perform convolutions on the graph of the irregular domain to obtain feature map responses to learnt filters. We propose graph convolution and pooling operators analogous to those in the regular domain. We also provide gradient calculations on the input data and spectral filters, which allow for the deep learning of an irregular spatial domain problem. Signal filters take the form of spectral multipliers, applying convolution in the graph spectral domain. Applying smooth multipliers results in localized convolutions in the spatial domain, with smoother multipliers providing sharper feature maps. Algebraic Multigrid is presented as a graph pooling method, reducing the resolution of the graph through agglomeration of nodes between layers of the network. Evaluation of performance on the MNIST digit classification problem in both the regular and irregular domain is presented, with comparison drawn to standard CNN. The proposed graph CNN provides a deep learning method for the irregular domains present in the machine learning community, obtaining 94.23% on the regular grid, and 94.96% on a spatially irregular subsampled MNIST.",
"title": ""
},
{
"docid": "c57d9c4f62606e8fccef34ddd22edaec",
"text": "Based on research into learning programming and a review of program visualization research, we designed an educational software tool that aims to target students' apparent fragile knowledge of elementary programming which manifests as difficulties in tracing and writing even simple programs. Most existing tools build on a single supporting technology and focus on one aspect of learning. For example, visualization tools support the development of a conceptual-level understanding of how programs work, and automatic assessment tools give feedback on submitted tasks. We implemented a combined tool that closely integrates programming tasks with visualizations of program execution and thus lets students practice writing code and more easily transition to visually tracing it in order to locate programming errors. In this paper we present Jype, a web-based tool that provides an environment for visualizing the line-by-line execution of Python programs and for solving programming exercises with support for immediate automatic feedback and an integrated visual debugger. Moreover, the debugger allows stepping back in the visualization of the execution as if executing in reverse. Jype is built for Python, when most research in programming education support tools revolves around Java.",
"title": ""
},
{
"docid": "e514e3fc0359332343e99fc95a0eda6f",
"text": "AIM\nTo evaluate the efficacy of the rehabilitation protocol on patients with lumbar degenerative disc disease after posterior transpedicular dynamic stabilization (PTDS) surgery.\n\n\nMATERIAL AND METHODS\nPatients (n=50) with single level lumbar degenerative disc disease were recruited for this study. Patients had PTDS surgery with hinged screws. A rehabilitation program was applied for all patients. Phase 1 was the preoperative evaluation phase. Phase 2 (active rest phase) was the first 6 weeks after surgery. During phase 3 (minimal movement phase, 6-12 weeks) pelvic tilt exercises initiated. In phase 4 (dynamic phase, 3-6 months) dynamic lumbar stabilization exercises were started. Phase 5 (return to sports phase) began after the 6th month. The primary outcome criteria were the Visual Analogue Pain Score (VAS) and the Oswestry Disability Index (ODI). Patients were evaluated preoperatively, postoperative 3rd, 12th and 24th months.\n\n\nRESULTS\nThe mean preoperative VAS and ODI scores were 7.52±0.97 and 60.96±8.74, respectively. During the 3rd month, VAS and ODI scores decreased to 2.62±1.05 and 26.2±7.93, respectively. VAS and ODI scores continued to decrease during the 12th month after surgery to 1.4±0.81 and 13.72±6.68, respectively. At the last follow-up (mean 34.1 months) the VAS and ODI scores were found to be 0.68±0.62 and 7.88±3.32, respectively. (p=0.0001).\n\n\nCONCLUSION\nThe protocol was designed for a postoperative rehabilitation program after PTDS surgery for patients with lumbar degenerative disc disease. The good outcomes are the result of a combination of very careful and restrictive patient selection, surgical technique, and the presented rehabilitation program.",
"title": ""
},
{
"docid": "fce8f5ee730e2bbb63f4d1ef003ce830",
"text": "In this paper, we introduce an approach for constructing uncertainty sets for robust optimization using new deviation measures for random variables termed the forward and backward deviations. These deviation measures capture distributional asymmetry and lead to better approximations of chance constraints. Using a linear decision rule, we also propose a tractable approximation approach for solving a class of multistage chance-constrained stochastic linear optimization problems. An attractive feature of the framework is that we convert the original model into a second-order cone program, which is computationally tractable both in theory and in practice. We demonstrate the framework through an application of a project management problem with uncertain activity completion time.",
"title": ""
},
{
"docid": "774df4733d98b781f32222cf843ec381",
"text": "This paper deals with the unsupervised domain adaptation problem, where one wants to estimate a prediction function f in a given target domain without any labeled sample by exploiting the knowledge available from a source domain where labels are known. Our work makes the following assumption: there exists a nonlinear transformation between the joint feature/label space distributions of the two domain Ps and Pt that can be estimated with optimal transport. We propose a solution of this problem that allows to recover an estimated target P t = (X, f(X)) by optimizing simultaneously the optimal coupling and f . We show that our method corresponds to the minimization of a bound on the target error, and provide an efficient algorithmic solution, for which convergence is proved. The versatility of our approach, both in terms of class of hypothesis or loss functions is demonstrated with real world classification and regression problems, for which we reach or surpass state-of-the-art results.",
"title": ""
},
{
"docid": "6c8c21e7cc5a9cc88fa558d7917a81b2",
"text": "Recent engineering experiences with the Missile Defense Agency (MDA) Ballistic Missile Defense System (BMDS) highlight the need to analyze the BMDS System of Systems (SoS) including the numerous potential interactions between independently developed elements of the system. The term “interstitials” is used to define the domain of interfaces, interoperability, and integration between constituent systems in an SoS. The authors feel that this domain, at an SoS level, has received insufficient attention within systems engineering literature. The BMDS represents a challenging SoS case study as many of its initial elements were assembled from existing programs of record. The elements tend to perform as designed but their performance measures may not be consistent with the higher level SoS requirements. One of the BMDS challenges is interoperability, to focus the independent elements to interact in a number of ways, either subtle or overt, for a predictable and sustainable national capability. New capabilities desired by national leadership may involve modifications to kill chains, Command and Control (C2) constructs, improved coordination, and performance. These capabilities must be realized through modifications to programs of record and integration across elements of the system that have their own independent programmatic momentum. A challenge of SoS Engineering is to objectively evaluate competing solutions and assess the technical viability of tradeoff options. This paper will present a multifaceted technical approach for integrating a complex, adaptive SoS to achieve a functional capability. Architectural frameworks will be explored, a mathematical technique utilizing graph theory will be introduced, adjuncts to more traditional modeling and simulation techniques such as agent based modeling will be explored, and, finally, newly developed technical and managerial metrics to describe design maturity will be introduced. A theater BMDS construct will be used as a representative set of elements together with the *Author to whom all correspondence should be addressed (e-mail: DLGR_NSWC_G25@navy.mil; DLGR_NSWC_K@Navy.mil; DLGR_NSWC_W@navy.mil; DLGR_NSWC_W@Navy.mil). †Commanding Officer, 6149 Welsh Road, Suite 203, Dahlgren, VA 22448-5130",
"title": ""
},
{
"docid": "9a27c676b5d356d5feb91850e975a336",
"text": "Joseph Goldstein has written in this journal that creation (through invention) and revelation (through discovery) are two different routes to advancement in the biomedical sciences1. In my work as a phytochemist, particularly during the period from the late 1960s to the 1980s, I have been fortunate enough to travel both routes. I graduated from the Beijing Medical University School of Pharmacy in 1955. Since then, I have been involved in research on Chinese herbal medicine in the China Academy of Chinese Medical Sciences (previously known as the Academy of Traditional Chinese Medicine). From 1959 to 1962, I was released from work to participate in a training course in Chinese medicine that was especially designed for professionals with backgrounds in Western medicine. The 2.5-year training guided me to the wonderful treasure to be found in Chinese medicine and toward understanding the beauty in the philosophical thinking that underlies a holistic view of human beings and the universe.",
"title": ""
},
{
"docid": "7564ec31bb4e81cc6f8bd9b2b262f5ca",
"text": "Traditional methods to calculate CRC suffer from diminishing returns. Doubling the data width doesn't double the maximum data throughput, the worst case timing path becomes slower. Feedback in the traditional implementation makes pipelining problematic. However, the on chip data width used for high throughput protocols is constantly increasing. The battle of reducing static power consumption is one factor driving this trend towards wider data paths. This paper discusses a method for pipelining the calculation of CRC's, such as ISO-3309 CRC32. This method allows independent scaling of circuit frequency and data throughput by varying the data width and the number of pipeline stages. Pipeline latency can be traded for area while slightly affecting timing. Additionally it allows calculation over data that isn't the full width of the input. This often happens at the end of the packet, although it could happen in the middle of the packet if data arrival is bursty. Finally, a fortunate side effect is that it offers the ability to efficiently update a known good CRC value where a small subset of data in the packet has changed. This is a function often desired in routers, for example updating the TTL field in IPv4 packets.",
"title": ""
}
] |
scidocsrr
|
d0bd58cff1d745287412050def6f07e6
|
Geographical and Energy Aware Routing : a recursive data dissemination protocol for wireless sensor networks
|
[
{
"docid": "0255ca668dee79af0cb314631cb5ab2d",
"text": "Instrumenting the physical world through large networks of wireless sensor nodes, particularly for applications like marine biology, requires that these nodes be very small, light, un-tethered and unobtrusive, imposing substantial restrictions on the amount of additional hardware that can be placed at each node. Practical considerations such as the small size, form factor, cost and power constraints of nodes preclude the use of GPS(Global Positioning System) for all nodes in these networks. The problem of localization, i.e., determining where a given node is physically located in a network is a challenging one, and yet extremely crucial for many applications of very large device networks. It needs to be solved in the absence of GPS on all the nodes in outdoor environments. In this paper, we propose a simple connectivity-metric based method for localization in outdoor environments that makes use of the inherent radiofrequency(RF) communications capabilities of these devices. A fixed number of reference points in the network transmit periodic beacon signals. Nodes use a simple connectivity metric to infer proximity to a given subset of these reference points and then localize themselves to the centroid of the latter. The accuracy of localization is then dependent on the separation distance between two adjacent reference points and the transmission range of these reference points. Initial experimental results show that the accuracy for 90% of our data points is within one-third of the separation distance. Keywords—localization, radio, wireless, GPS-less, connectivity, sensor networks.",
"title": ""
},
{
"docid": "bbdb676a2a813d29cd78facebc38a9b8",
"text": "In this paper we develop a new multiaccess protocol for ad hoc radio networks. The protocol is based on the original MACA protocol with the adition of a separate signalling channel. The unique feature of our protocol is that it conserves battery power at nodes by intelligently powering off nodes that are not actively transmitting or receiving packets. The manner in which nodes power themselves off does not influence the delay or throughput characteristics of our protocol. We illustrate the power conserving behavior of PAMAS via extensive simulations performed over ad hoc networks containing 10-20 nodes. Our results indicate that power savings of between 10% and 70% are attainable in most systems. Finally, we discuss how the idea of power awareness can be built into other multiaccess protocols as well.",
"title": ""
},
{
"docid": "ef5f1aa863cc1df76b5dc057f407c473",
"text": "GLS is a new distributed location service which tracks mobile node locations. GLS combined with geographic forwarding allows the construction of ad hoc mobile networks that scale to a larger number of nodes than possible with previous work. GLS is decentralized and runs on the mobile nodes themselves, requiring no fixed infrastructure. Each mobile node periodically updates a small set of other nodes (its location servers) with its current location. A node sends its position updates to its location servers without knowing their actual identities, assisted by a predefined ordering of node identifiers and a predefined geographic hierarchy. Queries for a mobile node's location also use the predefined identifier ordering and spatial hierarchy to find a location server for that node.\nExperiments using the ns simulator for up to 600 mobile nodes show that the storage and bandwidth requirements of GLS grow slowly with the size of the network. Furthermore, GLS tolerates node failures well: each failure has only a limited effect and query performance degrades gracefully as nodes fail and restart. The query performance of GLS is also relatively insensitive to node speeds. Simple geographic forwarding combined with GLS compares favorably with Dynamic Source Routing (DSR): in larger networks (over 200 nodes) our approach delivers more packets, but consumes fewer network resources.",
"title": ""
}
] |
[
{
"docid": "7c9642705d402fe5dcbfac12bd35b393",
"text": "The idea of reserve against brain damage stems from the repeated observation that there does not appear to be a direct relationship between the degree of brain pathology or brain damage and the clinical manifestation of that damage. This paper attempts to develop a coherent theoretical account of reserve. One convenient subdivision of reserve models revolves around whether they envision reserve as a passive process, such as in brain reserve or threshold, or see the brain as actively attempting to cope with or compensate for pathology, as in cognitive reserve. Cognitive reserve may be based on more efficient utilization of brain networks or of enhanced ability to recruit alternate brain networks as needed. A distinction is suggested between reserve, the ability to optimize or maximize normal performance, and compensation, an attempt to maximize performance in the face of brain damage by using brain structures or networks not engaged when the brain is not damaged. Epidemiologic and imaging data that help to develop and support the concept of reserve are presented.",
"title": ""
},
{
"docid": "4a6c133bd060160537640180dfbb3d38",
"text": "OBJECTIVES\nThis study examined the relationship between child sexual abuse (CSA) and subsequent onset of psychiatric disorders, accounting for other childhood adversities, CSA type, and chronicity of the abuse.\n\n\nMETHODS\nRetrospective reports of CSA, other adversities, and psychiatric disorders were obtained by the National Comorbidity Survey, a nationally representative survey of the United States (n = 5877). Reports were analyzed by multivariate methods.\n\n\nRESULTS\nCSA was reported by 13.5% of women and 2.5% of men. When other childhood adversities were controlled for, significant associations were found between CSA and subsequent onset of 14 mood, anxiety, and substance use disorders among women and 5 among men. In a subsample of respondents reporting no other adversities, odds of depression and substance problems associated with CSA were higher. Among women, rape (vs molestation), knowing the perpetrator (vs strangers), and chronicity of CSA (vs isolated incidents) were associated with higher odds of some disorders.\n\n\nCONCLUSIONS\nCSA usually occurs as part of a larger syndrome of childhood adversities. Nonetheless, CSA, whether alone or in a larger adversity cluster, is associated with substantial increased risk of subsequent psychopathology.",
"title": ""
},
{
"docid": "60634774ba1800bf6fd8a89efb0550f6",
"text": "We study the problem of human body configuration analysis, more specifically, human parsing and human pose estimation. These two tasks, ie identifying the semantic regions and body joints respectively over the human body image, are intrinsically highly correlated. However, previous works generally solve these two problems separately or iteratively. In this work, we propose a unified framework for simultaneous human parsing and pose estimation based on semantic parts. By utilizing Parselets and Mixture of Joint-Group Templates as the representations for these semantic parts, we seamlessly formulate the human parsing and pose estimation problem jointly within a unified framework via a tailored and-or graph. A novel Grid Layout Feature is then designed to effectively capture the spatial co-occurrence/occlusion information between/within the Parselets and MJGTs. Thus the mutually complementary nature of these two tasks can be harnessed to boost the performance of each other. The resultant unified model can be solved using the structure learning framework in a principled way. Comprehensive evaluations on two benchmark datasets for both tasks demonstrate the effectiveness of the proposed framework when compared with the state-of-the-art methods.",
"title": ""
},
{
"docid": "636f01ed75f841f6094bd9df73f7cdeb",
"text": "A compact coplanar waveguide (CPW) fed dual band antenna is designed within a compact size of 21.57 × 25.62 × 1.6 mm3. A prototype antenna consists of an outer closed ring resonator, inner split ring structure, inner most closed ring and modified ground plane. The radiating element is employed to cover −10 dB impedance bandwidth of 110 MHz (2.29–2.4 GHz) and 2210 MHz (3.35–5.56 GHz) with center frequencies of 2.36 GHz and 4.45 GHz, respectively. A CPW-fed line with modified ground plane yields good impedance matching and broad bandwidth of the antenna. The band characteristics of metamaterial-inspired split ring structure as well as negative permeability (μ) are explained in detail. The prototype antenna has consistent Omnidirectional (H-Plane), and dipole (E-Plane) radiation pattern are obtained both resonant frequency, which is significantly useful for Industrial Scientific and Medical (ISM) radio band, Worldwide Interoperability for Microwave Access (WiMAX) and Wireless Local Area Network (WLAN) applications.",
"title": ""
},
{
"docid": "505137d61a0087e054a2cf09c8addb4b",
"text": "A delay tolerant network (DTN) is a store and forward network where end-to-end connectivity is not assumed and where opportunistic links between nodes are used to transfer data. An emerging application of DTNs are rural area DTNs, which provide Internet connectivity to rural areas in developing regions using conventional transportation mediums, like buses. Potential applications of these rural area DTNs are e-governance, telemedicine and citizen journalism. Therefore, security and privacy are critical for DTNs. Traditional cryptographic techniques based on PKI-certified public keys assume continuous network access, which makes these techniques inapplicable to DTNs. We present the first anonymous communication solution for DTNs and introduce a new anonymous authentication protocol as a part of it. Furthermore, we present a security infrastructure for DTNs to provide efficient secure communication based on identity-based cryptography. We show that our solutions have better performance than existing security infrastructures for DTNs.",
"title": ""
},
{
"docid": "631d2c75377517fed1864e3a47ae873e",
"text": "Choi, Wiemer-Hastings, and Moore (2001) proposed to use Latent Semantic Analysis (LSA) to extract semantic knowledge from corpora in order to improve the accuracy of a text segmentation algorithm. By comparing the accuracy of the very same algorithm, depending on whether or not it takes into account complementary semantic knowledge, they were able to show the benefit derived from such knowledge. In their experiments, semantic knowledge was, however, acquired from a corpus containing the texts to be segmented in the test phase. If this hyper-specificity of the LSA corpus explains the largest part of the benefit, one may wonder if it is possible to use LSA to acquire generic semantic knowledge that can be used to segment new texts. The two experiments reported here show that the presence of the test materials in the LSA corpus has an important effect, but also that the generic semantic knowledge derived from large corpora clearly improves the segmentation accuracy.",
"title": ""
},
{
"docid": "ae7117416b4a07d2b15668c2c8ac46e3",
"text": "We present OntoWiki, a tool providing support for agile, distributed knowledge engineering scenarios. OntoWiki facilitates the visual presentation of a knowledge base as an information map, with different views on instance data. It enables intuitive authoring of semantic content, with an inline editing mode for editing RDF content, similar to WYSIWYG for text documents. It fosters social collaboration aspects by keeping track of changes, allowing comments and discussion on every single part of a knowledge base, enabling to rate and measure the popularity of content and honoring the activity of users. OntoWiki enhances the browsing and retrieval by offering semantic enhanced search strategies. All these techniques are applied with the ultimate goal of decreasing the entrance barrier for projects and domain experts to collaborate using semantic technologies. In the spirit of the Web 2.0 OntoWiki implements an ”architecture of participation” that allows users to add value to the application as they use it. It is available as open-source software and a demonstration platform can be accessed at http://3ba.se.",
"title": ""
},
{
"docid": "23a329c63f9a778e3ec38c25fa59748a",
"text": "Expedia users who prefer the same types of hotels presumably share other commonalities (i.e., non-hotel commonalities) with each other. With this in mind, Kaggle challenged developers to recommend hotels to Expedia users. Armed with a training set containing data about 37 million Expedia users, we set out to do just that. Our machine-learning algorithms ranged from direct applications of material learned in class to multi-part algorithms with novel combinations of recommender system techniques. Kaggle’s benchmark for randomly guessing a user’s hotel cluster is 0.02260, and the mean average precision K = 5 value for näıve recommender systems is 0.05949. Our best combination of machine-learning algorithms achieved a figure just over 0.30. Our results provide insight into performing multi-class classification on data sets that lack linear structure.",
"title": ""
},
{
"docid": "9a7d21701b0c45bfe9d0ba7928266f50",
"text": "Increase in demand of electricity for entire applications in any country, need to produce consistently with advanced protection system. Many special protection systems are available based on volume of power distributed and often the load changes without prediction required an advanced and special communication based systems to control the electrical parameters of the generation. Most of the existing systems are reliable on various applications but not perfect for electrical applications. Electrical environment will have lots of disturbance in nature, Due to natural disasters like storms, cyclones or heavy rains transmission and distribution lines may lead to damage. The electrical wire may cut and fall on ground, this leads to very harmful for human beings and may become fatal. So, a rigid, reliable and robust communications like GSM technology instead of many communication techniques used earlier. This enhances speed of communication with distance independenncy. This technology saves human life from this electrical danger by providing the fault detection and automatically stops the electricity to the damaged line and also conveys the message to the electricity board to clear the fault. An Embedded based hardware design is developed and must acquire data from electrical sensing system. A powerful GSM networking is designed to send data from a network to other network. Any change in parameters of transmission is sensed to protect the entire transmission and distribution.",
"title": ""
},
{
"docid": "ce978eb9feae8cc996dd357a715e77ec",
"text": "This review begins with an overview of literature data on methodologies that have been applied in other studies to calibrate Activated Sludge Model No. 1 (ASM1). An attempt was made to gather and summarise the information needed to achieve a successful model calibration, and based on this a general model calibration procedure is proposed. The main part of the literature review is devoted to the different methods that have been developed and applied for the characterisation of wastewater and reaction kinetics in relation to ASM1. The methodologies are critically discussed and it is attempted to illustrate the power of the different methods for characterisation, all within the frame of ASM1 calibration. Finally, it is discussed which wastewater components and parameters are most relevant to be characterised via lab-scale experiments. This discussion also includes the problem of transferability between lab-scale and full-scale observations, and potentially different model concepts. One of the most discussed experimental factors determining the experimental response is the ratio between initial substrate and biomass concentration (S(0)/X(0)). A separate section is focusing upon this factor.",
"title": ""
},
{
"docid": "97c40f796f104587a465f5d719653181",
"text": "Although some theory suggests that it is impossible to increase one’s subjective well-being (SWB), our ‘sustainable happiness model’ (Lyubomirsky, Sheldon, & Schkade, 2005) specifies conditions under which this may be accomplished. To illustrate the three classes of predictor in the model, we first review research on the demographic/circumstantial, temperament/personality, and intentional/experiential correlates of SWB. We then introduce the sustainable happiness model, which suggests that changing one’s goals and activities in life is the best route to sustainable new SWB. However, the goals and activities must be of certain positive types, must fit one’s personality and needs, must be practiced diligently and successfully, must be varied in their timing and enactment, and must provide a continued stream of fresh positive experiences. Research supporting the model is reviewed, including new research suggesting that happiness intervention effects are not just placebo effects. Everyone wants to be happy. Indeed, happiness may be the ultimate fundamental ‘goal’ that people pursue in their lives (Diener, 2000), a pursuit enshrined as an inalienable right in the US Declaration of Independence. The question of what produces happiness and wellbeing is the subject of a great deal of contemporary research, much of it falling under the rubric of ‘positive psychology’, an emerging field that also considers issues such as what makes for optimal relationships, optimal group functioning, and optimal communities. In this article, we first review some prominent definitions, theories, and research findings in the well-being literature. We then focus in particular on the question of whether it is possible to become lastingly happier in one’s life, drawing from our recent model of sustainable happiness. Finally, we discuss some recent experimental data suggesting that it is indeed possible to boost one’s happiness level, and to sustain that newfound level. A number of possible definitions of happiness exist. Let us start with the three proposed by Ed Diener in his landmark Psychological Bulletin 130 Is It Possible to Become Happier © 2007 The Authors Social and Personality Psychology Compass 1/1 (2007): 129–145, 10.1111/j.1751-9004.2007.00002.x Journal Compilation © 2007 Blackwell Publishing Ltd (1984) article. The first is ‘leading a virtuous life’, in which the person adheres to society’s vision of morality and proper conduct. This definition makes no reference to the person’s feelings or emotions, instead apparently making the implicit assumption that reasonably positive feelings will ensue if the person toes the line. A second definition of happiness involves a cognitive evaluation of life as a whole. Are you content, overall, or would you do things differently given the opportunity? This reflects a personcentered view of happiness, and necessarily taps peoples’ subjective judgments of whether they are satisfied with their lives. A third definition refers to typical moods. Are you typically in a positive mood (i.e., inspired, pleased, excited) or a negative mood (i.e., anxious, upset, depressed)? In this person-centered view, it is the balance of positive to negative mood that matters (Bradburn, 1969). 
Although many other conceptions of well-being exist (Lyubomirsky & Lepper, 1999; Ryan & Frederick, 1997; Ryff & Singer, 1996), ratings of life satisfaction and judgments of the frequency of positive and negative affect have received the majority of the research attention, illustrating the dominance of the second and third (person-centered) definitions of happiness in the research literature. Notably, positive affect, negative affect, and life satisfaction are presumed to be somewhat distinct. Thus, although life satisfaction typically correlates positively with positive affect and negatively with negative affect, and positive affect typically correlates negatively with negative affect, these correlations are not necessarily strong (and they also vary depending on whether one assesses a particular time or context, or the person’s experience as a whole). The generally modest correlations among the three variables means that an individual high in one indicator is not necessarily high (or low) in any other indicator. For example, a person with many positive moods might also experience many negative moods, and a person with predominantly good moods may or may not be satisfied with his or her life. As a case in point, a college student who has many friends and rewarding social interactions may be experiencing frequent pleasant affect, but, if he doubts that college is the right choice for him, he will be discontent with life. In contrast, a person experiencing many negative moods might nevertheless be satisfied with her life, if she finds her life meaningful or is suffering for a good cause. For example, a frazzled new mother may feel that all her most cherished life goals are being realized, yet she is experiencing a great deal of negative emotions on a daily basis. Still, the three quantities typically go together to an extent such that a comprehensive and reliable subjective well-being (SWB) indicator can be computed by summing positive affect and life satisfaction and subtracting negative affect. Can we trust people’s self-reports of happiness (or unhappiness)? Actually, we must: It would make little sense to claim that a person is happy if he or she does not acknowledge being happy. Still, it is possible to corroborate self-reports of well-being with reports from the respondents’ friends and",
"title": ""
},
{
"docid": "3f512d295ae6f9483b87f9dafcc20b61",
"text": "Byzantine Fault Tolerant state machine replication (BFT) protocols are replication protocols that tolerate arbitrary faults of a fraction of the replicas. Although significant efforts have been recently made, existing BFT protocols do not provide acceptable performance when faults occur. As we show in this paper, this comes from the fact that all existing BFT protocols targeting high throughput use a special replica, called the primary, which indicates to other replicas the order in which requests should be processed. This primary can be smartly malicious and degrade the performance of the system without being detected by correct replicas. In this paper, we propose a new approach, called RBFT for Redundant-BFT: we execute multiple instances of the same BFT protocol, each with a primary replica executing on a different machine. All the instances order the requests, but only the requests ordered by one of the instances, called the master instance, are actually executed. The performance of the different instances is closely monitored, in order to check that the master instance provides adequate performance. If that is not the case, the primary replica of the master instance is considered malicious and replaced. We implemented RBFT and compared its performance to that of other existing robust protocols. Our evaluation shows that RBFT achieves similar performance as the most robust protocols when there is no failure and that, under faults, its maximum performance degradation is about 3%, whereas it is at least equal to 78% for existing protocols.",
"title": ""
},
{
"docid": "f782af034ef46a15d89637a43ad2849c",
"text": "Introduction: Evidence-based treatment of abdominal hernias involves the use of prosthetic mesh. However, the most commonly used method of treatment of diastasis of the recti involves plication with non-absorbable sutures as part of an abdominoplasty procedure. This case report describes single-port laparoscopic repair of diastasis of recti and umbilical hernia with prosthetic mesh after plication with slowly absorbable sutures combined with abdominoplasty. Technique Description: Our patient is a 36-year-old woman with severe diastasis of the recti, umbilical hernia and an excessive amount of redundant skin after two previous pregnancies and caesarean sections. After raising the upper abdominal flap, a single-port was placed in the left upper quadrant and the ligamenturn teres was divided. The diastasis of the recti and umbilical hernia were plicated under direct vision with continuous and interrupted slowly absorbable sutures before an antiadhesive mesh was placed behind the repair with 6 cm overlap, transfixed in 4 quadrants and tacked in place with non-absorbable tacks in a double-crown technique. The left upper quadrant wound was closed with slowly absorbable sutures. The excess skin was removed and fibrin sealant was sprayed in the subcutaneous space to minimize the risk of serorna formation without using drains. Discussion: Combining single-port laparoscopic repair of diastasis of recti and umbilical hemia repair minimizes inadvertent suturing of abdominal contents during plication, the risks of port site hernias associated with conventional multipart repair and permanently reinforced the midline weakness while achieving “scarless” surgery.",
"title": ""
},
{
"docid": "17a0dfece42274180e470f23e532880d",
"text": "Emoji provide a way to express nonverbal conversational cues in computer-mediated communication. However, people need to share the same understanding of what each emoji symbolises, otherwise communication can breakdown. We surveyed 436 people about their use of emoji and ran an interactive study using a two-dimensional emotion space to investigate (1) the variation in people's interpretation of emoji and (2) their interpretation of corresponding Android and iOS emoji. Our results show variations between people's ratings within and across platforms. We outline our solution to reduce misunderstandings that arise from different interpretations of emoji.",
"title": ""
},
{
"docid": "bd18e4473cba642c5bea1bddc418f6c2",
"text": "This paper presents Smart Home concepts for Internet of Things (IoT) technologies that will make life at home more convenient. In this paper, we first describe the overall design of a low-cost Smart Refrigerator built with Raspberry Pi. Next, we explain two sensors controlling each camera, which are hooked up to our Rasberry Pi board. We further show how the user can use the Graphical User Interface (GUI) to interact with our system. With this Smart Home and Internet of Things technology, a user-friendly graphical user interface, prompt data synchronization among multiple devices, and real-time actual images captured from the refrigerator, our system can easily assist a family to reduce food waste.",
"title": ""
},
{
"docid": "8eace30c00d9b118635dc8a2e383f36b",
"text": "Wafer Level Packaging (WLP) has the highest potential for future single chip packages because the WLP is intrinsically a chip size package. The package is completed directly on the wafer then singulated by dicing for the assembly. All packaging and testing operations of the dice are replaced by whole wafer fabrication and wafer level testing. Therefore, it becomes more cost-effective with decreasing die size or increasing wafer size. However, due to the intrinsic mismatch of the coefficient of thermal expansion (CTE) between silicon chip and plastic PCB material, solder ball reliability subject to temperature cycling becomes the weakest point of the technology. In this paper some fundamental principles in designing WLP structure to achieve the robust reliability are demonstrated through a comprehensive study of a variety of WLP technologies. The first principle is the 'structural flexibility' principle. The more flexible a WLP structure is, the less the stresses that are applied on the solder balls will be. Ball on polymer WLP, Cu post WLP, polymer core solder balls are such examples to achieve better flexibility of overall WLP structure. The second principle is the 'local enhancement' at the interface region of solder balls where fatigue failures occur. Polymer collar WLP, and increasing solder opening size are examples to reduce the local stress level. In this paper, the reliability improvements are discussed through various existing and tested WLP technologies at silicon level and ball level, respectively. The fan-out wafer level packaging is introduced, which is expected to extend the standard WLP to the next stage with unlimited potential applications in future.",
"title": ""
},
{
"docid": "9d7df3f82d844ff74f438537bd2927b9",
"text": "Several approaches have previously been taken for identify ing document image skew. At issue are efficiency, accuracy, and robustness. We work dire ctly with the image, maximizing a function of the number of ON pixels in a scanline. Image rotat i n is simulated by either vertical shear or accumulation of pixel counts along sloped lines . Pixel sum differences on adjacent scanlines reduce isotropic background noise from non-text regions. To find the skew angle, a succession of values of this function are found. Angles are chosen hierarchically, typically with both a coarse sweep and a fine angular bifurcation. To inc rease efficiency, measurements are made on subsampled images that have been pre-filtered to m aximize sensitivity to image skew. Results are given for a large set of images, includi ng multiple and unaligned text columns, graphics and large area halftones. The measured in t insic angular error is inversely proportional to the number of sampling points on a scanline. This method does not indicate when text is upside-down, and i t also requires sampling the function at 90 degrees of rotation to measure text skew in lan dscape mode. However, such text orientation can be determined (as one of four direction s) by noting that roman characters in all languages have many more ascenders than descenders, a nd using morphological operations to identify such pixels. Only a small amount of text is r equired for accurate statistical determination of orientation, and images without text are i dentified as such.",
"title": ""
},
{
"docid": "32d79366936e301c44ae4ac11784e9d8",
"text": "A vast literature describes transformational leadership in terms of leader having charismatic and inspiring personality, stimulating followers, and providing them with individualized consideration. A considerable empirical support exists for transformation leadership in terms of its positive effect on followers with respect to criteria like effectiveness, extra role behaviour and organizational learning. This study aims to explore the effect of transformational leadership characteristics on followers’ job satisfaction. Survey method was utilized to collect the data from the respondents. The study reveals that individualized consideration and intellectual stimulation affect followers’ job satisfaction. However, intellectual stimulation is positively related with job satisfaction and individualized consideration is negatively related with job satisfaction. Leader’s charisma or inspiration was found to be having no affect on the job satisfaction. The three aspects of transformational leadership were tested against job satisfaction through structural equation modeling using Amos.",
"title": ""
},
{
"docid": "2b7c7162dbebc58958ea6f43ee7faf7b",
"text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material. Building on self-determination theory, this study presents a model of intrinsic motivation and engagement as \" active ingredients \" in garden-based education. The model was used to create reliable and valid measures of key constructs, and to guide the empirical exploration of motivational processes in garden-based learning. Teacher-and student-reports of garden engagement, administered to 310 middle school students, demonstrated multidimensional structures, good measurement properties, convergent validity, and the expected correlations with self-perceptions in the garden, garden learning , achievement, and engagement in science and school. Exploratory path analyses, calculated using multiple regression, provided initial support for the self-determination model of motivation: students' perceived autonomy, competence, and intrinsic motivation uniquely predicted their engagement in the garden, which in turn, predicted learning in the gardens and achievement in school. School gardens are flourishing. In the United States, they number in the tens of thousands, with 4,000 in California alone (California Department of Education, 2010). Goals for school gardens are as unique as the schools themselves, but in general they target four student outcomes: a) science learning and school achievement; b) ecological and environmental awareness and responsible behaviors; such as recycling and composting; d) knowledge about food systems 1. The Learning-Gardens Educational Assessment Group (or LEAG) is an interdisciplinary group of faculty and students from the Department of Psychology and the Graduate School of Education at Portland State University and the leadership of Lane Middle School of Portland Public Schools organized around a garden-based education program, the 17 and nutrition, and healthy eating, especially consumption of fresh fruits and vegetables; and d) positive youth development (Ratcliffe, Goldberg, Rogers, & Merrigan, 2010). Evidence about the beneficial effects of garden programs comes from qualitative and quantitative research and case studies from multiple disciplines …",
"title": ""
},
{
"docid": "94d2c88b11c79e2f4bf9fdc3ed8e1861",
"text": "The advent of pulsed power technology in the 1960s has enabled the development of very high peak power sources of electromagnetic radiation in the microwave and millimeter wave bands of the electromagnetic spectrum. Such sources have applications in plasma physics, particle acceleration techniques, fusion energy research, high-power radars, and communications, to name just a few. This article describes recent ongoing activity in this field in both Russia and the United States. The overview of research in Russia focuses on high-power microwave (HPM) sources that are powered using SINUS accelerators, which were developed at the Institute of High Current Electronics. The overview of research in the United States focuses more broadly on recent accomplishments of a multidisciplinary university research initiative on HPM sources, which also involved close interactions with Department of Defense laboratories and industry. HPM sources described in this article have generated peak powers exceeding several gigawatts in pulse durations typically on the order of 100 ns in frequencies ranging from about 1 GHz to many tens of gigahertz.",
"title": ""
}
] |
scidocsrr
|
b3dfdcd68e670e1cd12c68377cd6c61c
|
CREAM: A Concurrent-Refresh-Aware DRAM Memory architecture
|
[
{
"docid": "c73b65bced395eae228869186e254105",
"text": "Energy consumption has become a major constraint on the capabilities of computer systems. In large systems the energy consumed by Dynamic Random Access Memories (DRAM) is a significant part of the total energy consumption. It is possible to calculate the energy consumption of currently available DRAMs from their datasheets, but datasheets don’t allow extrapolation to future DRAM technologies and don’t show how other changes like increasing bandwidth requirements change DRAM energy consumption. This paper first presents a flexible DRAM power model which uses a description of DRAM architecture, technology and operation to calculate power usage and verifies it against datasheet values. Then the model is used together with assumptions about the DRAM roadmap to extrapolate DRAM energy consumption to future DRAM generations. Using this model we evaluate some of the proposed DRAM power reduction schemes.",
"title": ""
}
] |
[
{
"docid": "2c055f58323b6d771d716d9869acc64d",
"text": "Mobile Agricultural Robot Swarms (MARS) is an approach for autonomous farming operations by a coordinated group of robots. One key aspect of the MARS concept is the low individual intelligence, meaning that each robot is equipped with only a minimum of sensor technology in order to achieve a low cost and energy efficient system that provides scalability and reliability for field tasks. The robot swarms are coordinated by a centralized entity (OptiVisor) which is responsible for path planning, optimization and supervision. It also serves as a mediator between the robots and different cloud services responsible for the documentation of the procedure. This paper focuses on the architecture and function of OptiVisor within the overall MARS system. An OptiVisor in combination with a simulation environment for a robot swarm is presented and shows the feasibility of the general concept and the current state of the algorithms. Furthermore, the paper shows results about the current progress of OptiVisor integration using real robots.",
"title": ""
},
{
"docid": "efa566cdd4f5fa3cb12a775126377cb5",
"text": "This paper deals with the electromagnetic emissions of integrated circuits. In particular, four measurement techniques to evaluate integrated circuit conducted emissions are described in detail and they are employed for the measurement of the power supply conducted emission delivered by a simple integrated circuit composed of six synchronous switching drivers. Experimental results obtained by employing such measurement methods are presented and the influence of each test setup on the measured quantities is discussed.",
"title": ""
},
{
"docid": "0e002aae88332f8143e6f3a19c4c578b",
"text": "While attachment research has demonstrated that parents' internal working models of attachment relationships tend to be transmitted to their children, affecting children's developmental trajectories, this study specifically examines associations between adult attachment status and observable parent, child, and dyadic behaviors among children with autism and associated neurodevelopmental disorders of relating and communicating. The Adult Attachment Interview (AAI) was employed to derive parental working models of attachment relationships. The Functional Emotional Assessment Scale (FEAS) was used to determine the quality of relational and functional behaviors in parents and their children. The sample included parents and their 4- to 16-year-old children with autism and associated neurodevelopmental disorders. Hypothesized relationships between AAI classifications and FEAS scores were supported. Significant correlations were found between AAI classification and FEAS scores, indicating that children with autism spectrum disorders whose parents demonstrated secure attachment representations were better able to initiate and respond in two-way pre-symbolic gestural communication; organize two-way social problem-solving communication; and engage in imaginative thinking, symbolic play, and verbal communication. These findings lend support to the relevance of the parent's state of mind pertaining to attachment status to child and parent relational behavior in cases wherein the child has been diagnosed with autism or an associated neurodevelopmental disorder of relating and communicating. A model emerges from these findings of conceptualizing relationships between parental internal models of attachment relationships and parent-child relational and functional levels that may aid in differentiating interventions.",
"title": ""
},
{
"docid": "12ace549005b810d02d91d218d017dec",
"text": "This article examined the long-term effects of multisystemic therapy (MST) vs. individual therapy (IT) on the prevention of criminal behavior and violent offending among 176 juvenile offenders at high risk for committing additional serious crimes. Results from multiagent, multimethod assessment batteries conducted before and after treatment showed that MST was more effective than IT in improving key family correlates of antisocial behavior and in ameliorating adjustment problems in individual family members. Moreover, results from a 4-year follow-up of rearrest data showed that MST was more effective than IT in preventing future criminal behavior, including violent offending. The implications of such findings for the design of violence prevention programs are discussed.",
"title": ""
},
{
"docid": "5aa7066ccb2915dd0e35dd7b92b464c8",
"text": "Many real-world relations can be represented by signed networks with positive and negative links, as a result of which signed network analysis has attracted increasing attention from multiple disciplines. With the increasing prevalence of social media networks, signed network analysis has evolved from developing and measuring theories to mining tasks. In this article, we present a review of mining signed networks in the context of social media and discuss some promising research directions and new frontiers. We begin by giving basic concepts and unique properties and principles of signed networks. Then we classify and review tasks of signed network mining with representative algorithms. We also delineate some tasks that have not been extensively studied with formal definitions and also propose research directions to expand the field of signed network mining.",
"title": ""
},
{
"docid": "c0636509e222bf844b76cf88e696a4bd",
"text": "The emerging Spin Torque Transfer memory (STT-RAM) is a promising candidate for future on-chip caches due to STT-RAM's high density, low leakage, long endurance and high access speed. However, one of the major challenges of STT-RAM is its high write current, which is disadvantageous when used as an on-chip cache since the dynamic power generated is too high.\n In this paper, we propose Early Write Termination (EWT), a novel technique to significantly reduce write energy with no performance penalty. EWT can be implemented with low complexity and low energy overhead. Our evaluation shows that up to 80% of write energy reduction can be achieved through EWT, resulting 33% less total energy consumption, and 34% reduction in ED2. These results indicate that EWT is an effective and practical scheme to improve the energy efficiency of a STT-RAM cache.",
"title": ""
},
{
"docid": "067fd264747d466b86710366c14a4495",
"text": "We present Embodied Construction Grammar, a formalism for linguist ic analysis designed specifically for integration into a simulation-based model of language unders tanding. As in other construction grammars, linguistic constructions serve to map between phonological f orms and conceptual representations. In the model we describe, however, conceptual representations are als o constrained to be grounded in the body’s perceptual and motor systems, and more precisely to parameteri ze m ntal simulations using those systems. Understanding an utterance thus involves at least two dis inct processes: analysis to determine which constructions the utterance instantiates, and simulationaccording to the parameters specified by those constructions. In this chapter, we outline a constru ction formalism that is both representationally adequate for these purposes and specified precisely enough for se in a computational architecture.",
"title": ""
},
{
"docid": "faa2940dca09a406dd5842c234f99d89",
"text": "Although extending the duration of ambulatory electrocardiographic monitoring beyond 24 to 48 hours can improve the detection of arrhythmias, lead-based (Holter) monitors might be limited by patient compliance and other factors. We, therefore, evaluated compliance, analyzable signal time, interval to arrhythmia detection, and diagnostic yield of the Zio Patch, a novel leadless, electrocardiographic monitoring device in 26,751 consecutive patients. The mean wear time was 7.6 ± 3.6 days, and the median analyzable time was 99% of the total wear time. Among the patients with detected arrhythmias (60.3% of all patients), 29.9% had their first arrhythmia and 51.1% had their first symptom-triggered arrhythmia occur after the initial 48-hour period. Compared with the first 48 hours of monitoring, the overall diagnostic yield was greater when data from the entire Zio Patch wear duration were included for any arrhythmia (62.2% vs 43.9%, p <0.0001) and for any symptomatic arrhythmia (9.7% vs 4.4%, p <0.0001). For paroxysmal atrial fibrillation (AF), the mean interval to the first detection of AF was inversely proportional to the total AF burden, with an increasing proportion occurring after 48 hours (11.2%, 10.5%, 20.8%, and 38.0% for an AF burden of 51% to 75%, 26% to 50%, 1% to 25%, and <1%, respectively). In conclusion, extended monitoring with the Zio Patch for ≤14 days is feasible, with high patient compliance, a high analyzable signal time, and an incremental diagnostic yield beyond 48 hours for all arrhythmia types. These findings could have significant implications for device selection, monitoring duration, and care pathways for arrhythmia evaluation and AF surveillance.",
"title": ""
},
{
"docid": "5eda080188512f8d3c5f882c1114e1c8",
"text": "Knowledge mapping is one of the most popular techniques used to identify knowledge in organizations. Using knowledge mapping techniques; a large and complex set of knowledge resources can be acquired and navigated more easily. Knowledge mapping has attracted the senior managers' attention as an assessment tool in recent years and is expected to measure deep conceptual understanding and allow experts in organizations to characterize relationships between concepts within a domain visually. Here the very critical issue is how to identify and choose an appropriate knowledge mapping technique. This paper aims to explore the different types of knowledge mapping techniques and give a general idea of their target contexts to have the way for choosing the appropriate map. It attempts to illustrate which techniques are appropriate, why and where they can be applied, and how these mapping techniques can be managed. The paper is based on the comprehensive review of papers on knowledge mapping techniques. In addition, this paper attempts to further clarify the differences among these knowledge mapping techniques and the main purpose for using each. Eventually, it is recommended that experts must understand the purpose for which the map is being developed before proceeding to activities related to any knowledge management dimensions; in order to the appropriate knowledge mapping technique .",
"title": ""
},
{
"docid": "80f0c53c19509c23f0a716abb623dc46",
"text": "The ability to generate natural language sequences from source code snippets has a variety of applications such as code summarization, documentation, and retrieval. Sequence-to-sequence (seq2seq) models, adopted from neural machine translation (NMT), have achieved state-of-the-art performance on these tasks by treating source code as a sequence of tokens. We present CODE2SEQ: an alternative approach that leverages the syntactic structure of programming languages to better encode source code. Our model represents a code snippet as the set of compositional paths in its abstract syntax tree (AST) and uses attention to select the relevant paths while decoding. We demonstrate the effectiveness of our approach for two tasks, two programming languages, and four datasets of up to 16M examples. Our model significantly outperforms previous models that were specifically designed for programming languages, as well as state-of-the-art NMT models. An online demo of our model is available at http://code2seq.org. Our code, data and trained models are available at http://github.com/tech-srl/code2seq.",
"title": ""
},
{
"docid": "b24a0f878f50d5b92d268e183fe62dde",
"text": "Management is the process of setting and achieving organizational goals through its functions: forecasting, organization, coordination, training and monitoring-evaluation.Leadership is: the ability to influence, to make others follow you, the ability to guide, the human side of business for \"teacher\". Interest in leadership increased during the early part of the twentieth century. Early leadership theories focused on what qualities distinguished between leaders and followers, while subsequent theories looked at other variables such as situational factors and skill levels. Other considerations emphasize aspects that separate management of leadership, calling them two completely different processes.The words manager and lider are very often used to designate the same person who leads, however, they represent different realities and the main difference arises form the way in which people around are motivated. The difference between being a manager and being a leader is simple. Management is a career. Leadership is a calling. A leader is someone who people naturally follow through their own choice, whereas a manager must be obeyed. A manager may only have obtained his position of authority through time and loyalty given to the company, not as a result of his leadership qualities. A leader may have no organisational skills, but his vision unites people behind him. Leadership and management are two notions that are often used interchangeably. However, these words actually describe two different concepts. Leadership is the main component of change, providing vision, and dedication necessary for its realization. Leadership is a skill that is formed by education, experiences, interaction with people and inspiring, of course, practice. Effective leadership depends largely on how their leaders define, follow and share the vision to followers. Leadership is just one important component of the directing function. A manager cannot just be a leader, he also needs formal authority to be effective.",
"title": ""
},
{
"docid": "00e5acdfb1e388b149bc729a7af108ee",
"text": "Sleep is a growing area of research interest in medicine and neuroscience. Actually, one major concern is to find a correlation between several physiologic variables and sleep stages. There is a scientific agreement on the characteristics of the five stages of human sleep, based on EEG analysis. Nevertheless, manual stage classification is still the most widely used approach. This work proposes a new automatic sleep classification method based on unsupervised feature classification algorithms recently developed, and on EEG entropy measures. This scheme extracts entropy metrics from EEG records to obtain a feature vector. Then, these features are optimized in terms of relevance using the Q-α algorithm. Finally, the resulting set of features is entered into a clustering procedure to obtain a final segmentation of the sleep stages. The proposed method reached up to an average of 80% correctly classified stages for each patient separately while keeping the computational cost low. Entropy 2014, 16 6574",
"title": ""
},
{
"docid": "4260f42ca0114a2f9e6b972004eb147e",
"text": "Steganography is the science of embedding confidential data inside data so they can be sent to destination without suspicion. Image-steganography is the most popular type of carrier to hold information. Many algorithms have been proposed to hide information into digital images. The least significant bit algorithm (LSB) is one of these algorithms that is widely used in steganography. Several improvements of this algorithm have been proposed recently. In this paper we study and analyze the LSB matching revisited (LSBMR) algorithm and the edge adaptive image steganography based on LSBMR. We conduct several experiments on these algorithms on a set of 200 images and we show that an improvement can be made by using some image processing technique called Sobel edge detection to find edges of the images that can hold the secret information. We show that the proposed technique can improve the quality of the steganography images where sharper edges are used for low capacity rate.",
"title": ""
},
{
"docid": "e540c8a31dc0cd7112e914f6e97f09a6",
"text": "This paper presents a new supervised method for vessel segmentation in retinal images. This method remolds the task of segmentation as a problem of cross-modality data transformation from retinal image to vessel map. A wide and deep neural network with strong induction ability is proposed to model the transformation, and an efficient training strategy is presented. Instead of a single label of the center pixel, the network can output the label map of all pixels for a given image patch. Our approach outperforms reported state-of-the-art methods in terms of sensitivity, specificity and accuracy. The result of cross-training evaluation indicates its robustness to the training set. The approach needs no artificially designed feature and no preprocessing step, reducing the impact of subjective factors. The proposed method has the potential for application in image diagnosis of ophthalmologic diseases, and it may provide a new, general, high-performance computing framework for image segmentation.",
"title": ""
},
{
"docid": "8cea62bdb8b4ce82a8b2d931ef20b0f2",
"text": "This paper addresses the Volume dimension of Big Data. It presents a preliminary work on finding segments of retailers from a large amount of Electronic Funds Transfer at Point Of Sale (EFTPOS) transaction data. To the best of our knowledge, this is the first time a work on Big EFTPOS Data problem has been reported. A data reduction technique using the RFM (Recency, Frequency, Monetary) analysis as applied to a large data set is presented. Ways to optimise clustering techniques used to segment the big data set through data partitioning and parallelization are explained. Preliminary analysis on the segments of the retailers output from the clustering experiments demonstrates that further drilling down into the retailer segments to find more insights into their business behaviours is warranted.",
"title": ""
},
{
"docid": "799ce416c16f9c50d03fbfc7f604ae06",
"text": "Security and privacy concerns hinder the adoption of cloud storage and computing in sensitive environments. We present a user-centric privacypreserving cryptographic access control protocol called K2C (Key To Cloud) that enables end-users to securely store, share, and manage their sensitive data in an untrusted cloud storage anonymously. K2C is scalable and supports the lazy revocation. It can be easily implemented on top of existing cloud services and APIs – we demonstrate its prototype based on Amazon S3 API. K2C is realized through our new cryptographic key-updating scheme, referred to as AB-HKU. The main advantage of the AB-HKU scheme is that it supports efficient delegation and revocation of privileges for hierarchies without requiring complex cryptographic data structures. We analyze the security and performance of our access control protocol, and provide an open source implementation. Two cryptographic libraries, Hierarchical Identity-Based Encryption and Key-Policy Attribute-Based Encryption, developed in this project are useful beyond the specific cloud security problem studied.",
"title": ""
},
{
"docid": "550a936ec02706a9de94a50abf6f1ac6",
"text": "Motivated by the capability of sparse coding based anomaly detection, we propose a Temporally-coherent Sparse Coding (TSC) where we enforce similar neighbouring frames be encoded with similar reconstruction coefficients. Then we map the TSC with a special type of stacked Recurrent Neural Network (sRNN). By taking advantage of sRNN in learning all parameters simultaneously, the nontrivial hyper-parameter selection to TSC can be avoided, meanwhile with a shallow sRNN, the reconstruction coefficients can be inferred within a forward pass, which reduces the computational cost for learning sparse coefficients. The contributions of this paper are two-fold: i) We propose a TSC, which can be mapped to a sRNN which facilitates the parameter optimization and accelerates the anomaly prediction. ii) We build a very large dataset which is even larger than the summation of all existing dataset for anomaly detection in terms of both the volume of data and the diversity of scenes. Extensive experiments on both a toy dataset and real datasets demonstrate that our TSC based and sRNN based method consistently outperform existing methods, which validates the effectiveness of our method.",
"title": ""
},
{
"docid": "ddcb206f6538cf5bd2804d12d65912df",
"text": "k-anonymity provides a measure of privacy protection by preventing re-identification of data to fewer than a group of k data items. While algorithms exist for producing k-anonymous data, the model has been that of a single source wanting to publish data. Due to privacy issues, it is common that data from different sites cannot be shared directly. Therefore, this paper presents a two-party framework along with an application that generates k-anonymous data from two vertically partitioned sources without disclosing data from one site to the other. The framework is privacy preserving in the sense that it satisfies the secure definition commonly defined in the literature of Secure Multiparty Computation.",
"title": ""
},
{
"docid": "74b538c7c8f22d9b10822dd303335528",
"text": "Context-aware recommender systems extend traditional recommender systems by adapting their output to users’ specific contextual situations. Most of the existing approaches to context-aware recommendation involve directly incorporating context into standard recommendation algorithms (e.g., collaborative filtering, matrix factorization). In this paper, we highlight the importance of context similarity and make the attempt to incorporate it into context-aware recommender. The underlying assumption behind is that the recommendation lists should be similar if their contextual situations are similar. We integrate context similarity with sparse linear recommendation model to build a similarity-learning model. Our experimental evaluation demonstrates that the proposed model is able to outperform several state-of-the-art context-aware recommendation algorithms for the top-N recommendation task.",
"title": ""
},
{
"docid": "e05b1b6e1ca160b06e36b784df30b312",
"text": "The vision of the MDSD is an era of software engineering where modelling completely replaces programming i.e. the systems are entirely generated from high-level models, each one specifying a different view of the same system. The MDSD can be seen as the new generation of visual programming languages which provides methods and tools to streamline the process of software engineering. Productivity of the development process is significantly improved by the MDSD approach and it also increases the quality of the resulting software system. The MDSD is particularly suited for those software applications which require highly specialized technical knowledge due to the involvement of complex technologies and the large number of complex and unmanageable standards. In this paper, an overview of the MDSD is presented; the working styles and the main concepts are illustrated in detail.",
"title": ""
}
] |
scidocsrr
|
440d2e15509653eb7dc3bbf4f0137b10
|
Attending to All Mention Pairs for Full Abstract Biological Relation Extraction
|
[
{
"docid": "a5b7253f56a487552ba3b0ce15332dd1",
"text": "We consider learning representations of entities and relations in KBs using the neural-embedding approach. We show that most existing models, including NTN (Socher et al., 2013) and TransE (Bordes et al., 2013b), can be generalized under a unified learning framework, where entities are low-dimensional vectors learned from a neural network and relations are bilinear and/or linear mapping functions. Under this framework, we compare a variety of embedding models on the link prediction task. We show that a simple bilinear formulation achieves new state-of-the-art results for the task (achieving a top-10 accuracy of 73.2% vs. 54.7% by TransE on Freebase). Furthermore, we introduce a novel approach that utilizes the learned relation embeddings to mine logical rules such as BornInCitypa, bq ^ CityInCountrypb, cq ùñ Nationalitypa, cq. We find that embeddings learned from the bilinear objective are particularly good at capturing relational semantics, and that the composition of relations is characterized by matrix multiplication. More interestingly, we demonstrate that our embedding-based rule extraction approach successfully outperforms a state-ofthe-art confidence-based rule mining approach in mining horn rules that involve compositional reasoning.",
"title": ""
}
] |
[
{
"docid": "d48053467e72a6a550de8cb66b005475",
"text": "In Slavic languages, verbal prefixes can be applied to perfective verbs deriving new perfective verbs, and multiple prefixes can occur in a single verb. This well-known type of data has not yet been adequately analyzed within current approaches to the semantics of Slavic verbal prefixes and aspect. The notion “aspect” covers “grammatical aspect”, or “viewpoint aspect” (see Smith 1991/1997), best characterized by the formal perfective vs. imperfective distinction, which is often expressed by inflectional morphology (as in Romance languages), and corresponds to propositional operators at the semantic level of representation. It also covers “lexical aspect”, “situation aspect” (see Smith ibid.), “eventuality types” (Bach 1981, 1986), or “Aktionsart” (as in Hinrichs 1985; Van Valin 1990; Dowty 1999; Paslawska and von Stechow 2002, for example), which regards the telic vs. atelic distinction and its Vendlerian subcategories (activities, accomplishments, achievements and states). It is lexicalized by verbs, encoded by derivational morphology, or by a variety of elements at the level of syntax, among which the direct object argument has a prominent role, however, the subject (external) argument is arguably a contributing factor, as well (see Dowty 1991, for example). These two “aspect” categories are orthogonal to each other and interact in systematic ways (see also Filip 1992, 1997, 1993/99; de Swart 1998; Paslawska and von Stechow 2002; Rothstein 2003, for example). Multiple prefixation and application of verbal prefixes to perfective bases is excluded by the common view of Slavic prefixes, according to which all perfective verbs are telic and prefixes constitute a uniform class of “perfective” markers that that are applied to imperfective verbs that are atelic and derive perfective verbs that are telic. Moreover, this view of perfective verbs and prefixes predicts rampant violations of the intuitive “one delimitation per event” constraint, whenever a prefix is applied to a perfective verb. This intuitive constraint is motivated by the observation that an event expressed within a single predication can be delimited only once: cp. *run a mile for ten minutes, *wash the clothes clean white.",
"title": ""
},
{
"docid": "45b303fb40f120f87dd618855fa21871",
"text": "The relationship between business and society has witnessed a dramatic change in the past few years. Globalization, ethical consumerism, environmental concerns, strict government regulations, and growing strength of the civil society, are all factors that forced businesses to reconsider their role in society; accordingly there has been a surge of notions that tries to explain this new complex relation between business and society. This paper aims at accentuating this evolving relation by focusing on the concept of corporate social responsibility (CSR). It differentiates between CSR and other related concepts such as business ethics and corporate philanthropy. It analyzes the different arguments in the CSR debate, pinpoints mechanisms adopted by businesses in carrying out their social responsibilities, and concludes with the link between corporate social responsibility and sustainable development.",
"title": ""
},
{
"docid": "d8828a6cafcd918cd55b1782629b80e0",
"text": "For deep-neural-network (DNN) processors [1-4], the product-sum (PS) operation predominates the computational workload for both convolution (CNVL) and fully-connect (FCNL) neural-network (NN) layers. This hinders the adoption of DNN processors to on the edge artificial-intelligence (AI) devices, which require low-power, low-cost and fast inference. Binary DNNs [5-6] are used to reduce computation and hardware costs for AI edge devices; however, a memory bottleneck still remains. In Fig. 31.5.1 conventional PE arrays exploit parallelized computation, but suffer from inefficient single-row SRAM access to weights and intermediate data. Computing-in-memory (CIM) improves efficiency by enabling parallel computing, reducing memory accesses, and suppressing intermediate data. Nonetheless, three critical challenges remain (Fig. 31.5.2), particularly for FCNL. We overcome these problems by co-optimizing the circuits and the system. Recently, researches have been focusing on XNOR based binary-DNN structures [6]. Although they achieve a slightly higher accuracy, than other binary structures, they require a significant hardware cost (i.e. 8T-12T SRAM) to implement a CIM system. To further reduce the hardware cost, by using 6T SRAM to implement a CIM system, we employ binary DNN with 0/1-neuron and ±1-weight that was proposed in [7]. We implemented a 65nm 4Kb algorithm-dependent CIM-SRAM unit-macro and in-house binary DNN structure (focusing on FCNL with a simplified PE array), for cost-aware DNN AI edge processors. This resulted in the first binary-based CIM-SRAM macro with the fastest (2.3ns) PS operation, and the highest energy-efficiency (55.8TOPS/W) among reported CIM macros [3-4].",
"title": ""
},
{
"docid": "6001982cb50621fe488034d6475d1894",
"text": "Few-shot learning has become essential for producing models that generalize from few examples. In this work, we identify that metric scaling and metric task conditioning are important to improve the performance of few-shot algorithms. Our analysis reveals that simple metric scaling completely changes the nature of few-shot algorithm parameter updates. Metric scaling provides improvements up to 14% in accuracy for certain metrics on the mini-Imagenet 5-way 5-shot classification task. We further propose a simple and effective way of conditioning a learner on the task sample set, resulting in learning a task-dependent metric space. Moreover, we propose and empirically test a practical end-to-end optimization procedure based on auxiliary task co-training to learn a task-dependent metric space. The resulting few-shot learning model based on the task-dependent scaled metric achieves state of the art on mini-Imagenet. We confirm these results on another few-shot dataset that we introduce in this paper based on CIFAR100.",
"title": ""
},
{
"docid": "c926d9a6b6fe7654e8409ae855bdeb20",
"text": "A low-power, 40-Gb/s optical transceiver front-end is demonstrated in a 45-nm silicon-on-insulator (SOI) CMOS process. Both single-ended and differential optical modulators are demonstrated with floating-body transistors to reach output swings of more than 2 VPP and 4 VPP, respectively. A single-ended gain of 7.6 dB is measured over 33 GHz. The optical receiver consists of a transimpedance amplifier (TIA) and post-amplifier with 55 dB ·Ω of transimpedance over 30 GHz. The group-delay variation is ±3.9 ps over the 3-dB bandwidth and the average input-referred noise density is 20.5 pA/(√Hz) . The TIA consumes 9 mW from a 1-V supply for a transimpedance figure of merit of 1875 Ω /pJ. This represents the lowest power consumption for a transmitter and receiver operating at 40 Gb/s in a CMOS process.",
"title": ""
},
{
"docid": "09c19ae7eea50f269ee767ac6e67827b",
"text": "In the last years Python has gained more and more traction in the scientific community. Projects like NumPy, SciPy, and Matplotlib have created a strong foundation for scientific computing in Python and machine learning packages like scikit-learn or packages for data analysis like Pandas are building on top of it. In this paper we present Wyrm ( https://github.com/bbci/wyrm ), an open source BCI toolbox in Python. Wyrm is applicable to a broad range of neuroscientific problems. It can be used as a toolbox for analysis and visualization of neurophysiological data and in real-time settings, like an online BCI application. In order to prevent software defects, Wyrm makes extensive use of unit testing. We will explain the key aspects of Wyrm’s software architecture and design decisions for its data structure, and demonstrate and validate the use of our toolbox by presenting our approach to the classification tasks of two different data sets from the BCI Competition III. Furthermore, we will give a brief analysis of the data sets using our toolbox, and demonstrate how we implemented an online experiment using Wyrm. With Wyrm we add the final piece to our ongoing effort to provide a complete, free and open source BCI system in Python.",
"title": ""
},
{
"docid": "4c1b42e12fd4f19870b5fc9e2f9a5f07",
"text": "Similar to face-to-face communication in daily life, more and more evidence suggests that human emotions also spread in online social media through virtual interactions. However, the mechanism underlying the emotion contagion, like whether different feelings spread unlikely or how the spread is coupled with the social network, is rarely investigated. Indeed, due to the costly expense and spatio-temporal limitations, it is challenging for conventional questionnaires or controlled experiments. While given the instinct of collecting natural affective responses of massive connected individuals, online social media offer an ideal proxy to tackle this issue from the perspective of computational social science. In this paper, based on the analysis of millions of tweets in Weibo, a Twitter-like service in China, we surprisingly find that anger is more contagious than joy, indicating that it can sparkle more angry follow-up tweets; and anger prefers weaker ties than joy for the dissemination in social network, indicating that it can penetrate different communities and break local traps by more sharings between strangers. Through a simple diffusion model, it is unraveled that easier contagion and weaker ties function cooperatively in speeding up anger’s spread, which is further testified by the diffusion of realistic bursty events with different dominant emotions. To our best knowledge, for the first time we quantificationally provide the long-term evidence to disclose the difference between joy and anger in dissemination mechanism and our findings would shed lights on personal anger management in human communication and collective outrage control in cyber space.",
"title": ""
},
{
"docid": "e8a1330f93a701939367bd390e9018c7",
"text": "An eccentric paddle locomotion mechanism based on the epicyclic gear mechanism (ePaddle-EGM), which was proposed to enhance the mobility of amphibious robots in multiterrain tasks, can perform various terrestrial and aquatic gaits. Two of the feasible aquatic gaits are the rotational paddling gait and the oscillating paddling gait. The former one has been studied in our previous work, and a capacity of generating vectored thrust has been found. In this letter, we focus on the oscillating paddling gait by measuring the generated thrusts of the gait on an ePaddle-EGM prototype module. Experimental results verify that the oscillating paddling gait can generate vectored thrust by changing the location of the paddle shaft as well. Furthermore, we compare the oscillating paddling gait with the rotational paddling gait at the vectored thrusting property, magnitude of the thrust, and the gait efficiency.",
"title": ""
},
{
"docid": "e8a9dffcb6c061fe720e7536387f5116",
"text": "The diffusion decision model allows detailed explanations of behavior in two-choice discrimination tasks. In this article, the model is reviewed to show how it translates behavioral dataaccuracy, mean response times, and response time distributionsinto components of cognitive processing. Three experiments are used to illustrate experimental manipulations of three components: stimulus difficulty affects the quality of information on which a decision is based; instructions emphasizing either speed or accuracy affect the criterial amounts of information that a subject requires before initiating a response; and the relative proportions of the two stimuli affect biases in drift rate and starting point. The experiments also illustrate the strong constraints that ensure the model is empirically testable and potentially falsifiable. The broad range of applications of the model is also reviewed, including research in the domains of aging and neurophysiology.",
"title": ""
},
{
"docid": "be9cea5823779bf5ced592f108816554",
"text": "Undoubtedly, bioinformatics is one of the fastest developing scientific disciplines in recent years. Bioinformatics is the development and application of computer methods for management, analysis, interpretation, and prediction, as well as for the design of experiments. There is already a significant number of books on bioinformatics. Some are introductory and require almost no prior experience in biology or computer science: “Bioinformatics Basics Applications in Biological Science and Medicine” and “Introduction to Bioinformatics.” Others are targeted to biologists entering the field of bioinformatics: “Developing Bioinformatics Computer Skills.” Some more specialized books are: “An Introduction to Support Vector Machines : And Other Kernel-Based Learning Methods”, “Biological Sequence Analysis : Probabilistic Models of Proteins and Nucleic Acids”, “Pattern Discovery in Bimolecular Data : Tools, Techniques, and Applications”, “Computational Molecular Biology: An Algorithmic Approach.” The book subject of this review has a broad scope. “Bioinformatics: The machine learning approach” is aimed at two types of researchers and students. First are the biologists and biochemists who need to understand new data-driven algorithms, such as neural networks and hidden Markov",
"title": ""
},
{
"docid": "83651ca357b0f978400de4184be96443",
"text": "The most common temporomandibular joint (TMJ) pathologic disease is anterior-medial displacement of the articular disk, which can lead to TMJ-related symptoms.The indication for disk repositioning surgery is irreversible TMJ damage associated with temporomandibular pain. We describe a surgical technique using a preauricular approach with a high condylectomy to reshape the condylar head. The disk is anchored with a bioabsorbable microanchor (Mitek Microfix QuickAnchor Plus 1.3) to the lateral aspect of the condylar head. The anchor is linked with a 3.0 Ethibond absorbable suture to fix the posterolateral side of the disk above the condyle.The aims of this surgery were to alleviate temporomandibular pain, headaches, and neck pain and to restore good jaw mobility. In the long term, we achieved these objectives through restoration of the physiological position and function of the disk and the lower articular compartment.In our opinion, the bioabsorbable anchor is the best choice for this type of surgery because it ensures the stability of the restored disk position and leaves no artifacts in the long term that might impede follow-up with magnetic resonance imaging.",
"title": ""
},
{
"docid": "ad09fcab0aac68007eac167cafdd3d3c",
"text": "We present HARP, a novel method for learning low dimensional embeddings of a graph’s nodes which preserves higherorder structural features. Our proposed method achieves this by compressing the input graph prior to embedding it, effectively avoiding troublesome embedding configurations (i.e. local minima) which can pose problems to non-convex optimization. HARP works by finding a smaller graph which approximates the global structure of its input. This simplified graph is used to learn a set of initial representations, which serve as good initializations for learning representations in the original, detailed graph. We inductively extend this idea, by decomposing a graph in a series of levels, and then embed the hierarchy of graphs from the coarsest one to the original graph. HARP is a general meta-strategy to improve all of the stateof-the-art neural algorithms for embedding graphs, including DeepWalk, LINE, and Node2vec. Indeed, we demonstrate that applying HARP’s hierarchical paradigm yields improved implementations for all three of these methods, as evaluated on classification tasks on real-world graphs such as DBLP, BlogCatalog, and CiteSeer, where we achieve a performance gain over the original implementations by up to 14% Macro F1.",
"title": ""
},
{
"docid": "57edf07b135a073e5e780eabd0fd2bf8",
"text": "Boolean tensor decomposition approximates data of multi-way binary relationships as product of interpretable low-rank binary factors, following the rules of Boolean algebra. Here, we present its first probabilistic treatment. We facilitate scalable sampling-based posterior inference by exploitation of the combinatorial structure of the factor conditionals. Maximum a posteriori decompositions feature higher accuracies than existing techniques throughout a wide range of simulated conditions. Moreover, the probabilistic approach facilitates the treatment of missing data and enables model selection with much greater accuracy. We investigate three real-world data-sets. First, temporal interaction networks in a hospital ward and behavioural data of university students demonstrate the inference of instructive latent patterns. Next, we decompose a tensor with more than 10 billion data points, indicating relations of gene expression in cancer patients. Not only does this demonstrate scalability, it also provides an entirely novel perspective on relational properties of continuous data and, in the present example, on the molecular heterogeneity of cancer. Our implementation is available on GitHub2.",
"title": ""
},
{
"docid": "47b8daaaa43535ec29461f0d1b86566d",
"text": "This article aims to improve nurses' knowledge of wound debridement through a review of different techniques and the related physiology of wound healing. Debridement has long been an established component of effective wound management. However, recent clinical developments have widened the choice of methods available. This article provides an overview of the physiology of wounds, wound bed preparation, methods of debridement and the important considerations for the practitioner in implementing effective, informed and patient-centred wound care.",
"title": ""
},
{
"docid": "d55d212b64b76c94b1b93e39907ea06c",
"text": "The machine learning community has recently shown a lot of interest in practical probabilistic programming systems that target the problem of Bayesian inference. Such systems come in different forms, but they all express probabilistic models as computational processes using syntax resembling programming languages. In the functional programming community monads are known to offer a convenient and elegant abstraction for programming with probability distributions, but their use is often limited to very simple inference problems. We show that it is possible to use the monad abstraction for constructing probabilistic models, while still offering good performance of inference in challenging models. We use a GADT as an underlying representation of a probability distribution and apply Sequential Monte Carlo-based methods to achieve efficient inference. We define a formal semantics via measure theory and check the monad laws. We demonstrate a clean and elegant implementation that achieves performance comparable with Anglican, a state-of-the-art probabilistic programming system.",
"title": ""
},
{
"docid": "7d9b919720ad38107336fdf4c5977d4b",
"text": "Automated human behaviour analysis has been, and still remains, a challenging problem. It has been dealt from different points of views: from primitive actions to human interaction recognition. This paper is focused on trajectory analysis which allows a simple high level understanding of complex human behaviour. It is proposed a novel representation method of trajectory data, called Activity Description Vector (ADV) based on the number of occurrences of a person is in a specific point of the scenario and the local movements that perform in it. The ADV is calculated for each cell of the scenario in which it is spatially sampled obtaining a cue for different clustering methods. The ADV representation has been tested as the input of several classic classifiers and compared to other approaches using CAVIAR dataset sequences obtaining great accuracy in the recognition of the behaviour of people in a Shopping Centre.",
"title": ""
},
{
"docid": "1d0874e5fdb6635e07f08d59b113e57e",
"text": "In this paper a convolutional neural network is applied to the problem of note onset detection in audio recordings. Two time-frequency representations are analysed, showing the superiority of standard spectrogram over enhanced autocorrelation (EAC) used as the input to the convolutional network. Experimental evaluation is based on a dataset containing 10,939 annotated onsets, with total duration of the audio recordings of over 45 min.",
"title": ""
},
{
"docid": "80ea6a0b24c857c02ead9d10f3de0870",
"text": "Phishing is an attempt to acquire one's information without user's knowledge by tricking him by making similar kind of website or sending emails to user which looks like legitimate site or email. Phishing is a social cyber threat attack, which is causing severe loss of economy to the user, due to phishing attacks online transaction users are declining. This paper aims to design and implement a new technique to detect phishing web sites using Google's PageRank. Google gives a PageRank value to each site in the web. This work uses the PageRank value and other features to classify phishing sites from normal sites. We have collected a dataset of 100 phishing sites and 100 legitimate sites for our use. By using this Google PageRank technique 98% of the sites are correctly classified, showing only 0.02 false positive rate and 0.02 false negative rate.",
"title": ""
},
{
"docid": "59f0aead21fc5e0619893d5b5e161ebc",
"text": "The use of plastic materials in agriculture causes serious hazards to the environment. The introduction of biodegradable materials, which can be disposed directly into the soil can be one possible solution to this problem. In the present research results of experimental tests carried out on biodegradable film fabricated from natural waste (corn husk) are presented. The film was characterized by Fourier transform infrared spectroscopy (FTIR), differential scanning calorimeter (DSC), thermal gravimetric analysis (TGA) and atomic force microscope (AFM) observation. The film is shown to be readily degraded within 7-9 months under controlled soil conditions, indicating a high biodegradability rate. The film fabricated was use to produce biodegradable pot (BioPot) for seedlings plantation. The introduction and the expanding use of biodegradable materials represent a really promising alternative for enhancing sustainable and environmentally friendly agricultural activities. Keywords—Environment, waste, plastic, biodegradable.",
"title": ""
},
{
"docid": "5baa9d48708a9be8275cd7e45a02fc5e",
"text": "The use of artificial intelligence in medicine is currently an issue of great interest, especially with regard to the diagnostic or predictive analysis of medical images. Adoption of an artificial intelligence tool in clinical practice requires careful confirmation of its clinical utility. Herein, the authors explain key methodology points involved in a clinical evaluation of artificial intelligence technology for use in medicine, especially high-dimensional or overparameterized diagnostic or predictive models in which artificial deep neural networks are used, mainly from the standpoints of clinical epidemiology and biostatistics. First, statistical methods for assessing the discrimination and calibration performances of a diagnostic or predictive model are summarized. Next, the effects of disease manifestation spectrum and disease prevalence on the performance results are explained, followed by a discussion of the difference between evaluating the performance with use of internal and external datasets, the importance of using an adequate external dataset obtained from a well-defined clinical cohort to avoid overestimating the clinical performance as a result of overfitting in high-dimensional or overparameterized classification model and spectrum bias, and the essentials for achieving a more robust clinical evaluation. Finally, the authors review the role of clinical trials and observational outcome studies for ultimate clinical verification of diagnostic or predictive artificial intelligence tools through patient outcomes, beyond performance metrics, and how to design such studies. © RSNA, 2018.",
"title": ""
}
] |
scidocsrr
|
25baeb9ac4268c900d6541af2ab06415
|
Dynamic Data-Driven Estimation of Non-Parametric Choice Models
|
[
{
"docid": "772352a86880d517bbb6c1846e220a1e",
"text": "We discuss several state-of-the-art computationally cheap, as opposed to the polynomial time Interior Point algorithms, first order methods for minimizing convex objectives over “simple” large-scale feasible sets. Our emphasis is on the general situation of a nonsmooth convex objective represented by deterministic/stochastic First Order oracle and on the methods which, under favorable circumstances, exhibit (nearly) dimension-independent convergence rate.",
"title": ""
}
] |
[
{
"docid": "7d6fe97e8ca972f9db710743a916eb1c",
"text": "This paper introduces a novel method to perform transfer learning across domains and tasks, formulating it as a problem of learning to cluster. The key insight is that, in addition to features, we can transfer similarity information and this is sufficient to learn a similarity function and clustering network to perform both domain adaptation and cross-task transfer learning. We begin by reducing categorical information to pairwise constraints, which only considers whether two instances belong to the same class or not (pairwise semantic similarity). This similarity is category-agnostic and can be learned from data in the source domain using a similarity network. We then present two novel approaches for performing transfer learning using this similarity function. First, for unsupervised domain adaptation, we design a new loss function to regularize classification with a constrained clustering loss, hence learning a clustering network with the transferred similarity metric generating the training inputs. Second, for cross-task learning (i.e., unsupervised clustering with unseen categories), we propose a framework to reconstruct and estimate the number of semantic clusters, again using the clustering network. Since the similarity network is noisy, the key is to use a robust clustering algorithm, and we show that our formulation is more robust than the alternative constrained and unconstrained clustering approaches. Using this method, we first show state of the art results for the challenging cross-task problem, applied on Omniglot and ImageNet. Our results show that we can reconstruct semantic clusters with high accuracy. We then evaluate the performance of cross-domain transfer using images from the Office-31 and SVHN-MNIST tasks and present top accuracy on both datasets. Our approach doesn’t explicitly deal with domain discrepancy. If we combine with a domain adaptation loss, it shows further improvement.",
"title": ""
},
{
"docid": "e82c0826863ccd9cd647725fc00a2137",
"text": "Particle Markov chain Monte Carlo (PMCMC) is a systematic way of combining the two main tools used for Monte Carlo statistical inference: sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). We present a new PMCMC algorithm that we refer to as particle Gibbs with ancestor sampling (PGAS). PGAS provides the data analyst with an off-the-shelf class of Markov kernels that can be used to simulate, for instance, the typically high-dimensional and highly autocorrelated state trajectory in a state-space model. The ancestor sampling procedure enables fast mixing of the PGAS kernel even when using seemingly few particles in the underlying SMC sampler. This is important as it can significantly reduce the computational burden that is typically associated with using SMC. PGAS is conceptually similar to the existing PG with backward simulation (PGBS) procedure. Instead of using separate forward and backward sweeps as in PGBS, however, we achieve the same effect in a single forward sweep. This makes PGAS well suited for addressing inference problems not only in state-space models, but also in models with more complex dependencies, such as non-Markovian, Bayesian nonparametric, and general probabilistic graphical models.",
"title": ""
},
{
"docid": "7912241009e05de6af4e41aa2f48a1ec",
"text": "CONTEXT/OBJECTIVE\nNot much is known about the implication of adipokines and different cytokines in gestational diabetes mellitus (GDM) and macrosomia. The purpose of this study was to assess the profile of these hormones and cytokines in macrosomic babies, born to gestational diabetic women.\n\n\nDESIGN/SUBJECTS\nA total of 59 women (age, 19-42 yr) suffering from GDM with their macrosomic babies (4.35 +/- 0.06 kg) and 60 healthy age-matched pregnant women and their newborns (3.22 +/- 0.08 kg) were selected.\n\n\nMETHODS\nSerum adipokines (adiponectin and leptin) were quantified using an obesity-related multiple ELISA microarray kit. The concentrations of serum cytokines were determined by ELISA.\n\n\nRESULTS\nSerum adiponectin levels were decreased, whereas the concentrations of leptin, inflammatory cytokines, such as IL-6 and TNF-alpha, were significantly increased in gestational diabetic mothers compared with control women. The levels of these adipocytokines were diminished in macrosomic babies in comparison with their age-matched control newborns. Serum concentrations of T helper type 1 (Th1) cytokines (IL-2 and interferon-gamma) were decreased, whereas IL-10 levels were significantly enhanced in gestational diabetic mothers compared with control women. Macrosomic children exhibited high levels of Th1 cytokines and low levels of IL-10 compared with control infants. Serum IL-4 levels were not altered between gestational diabetic mothers and control mothers or the macrosomic babies and newborn control babies.\n\n\nCONCLUSIONS\nGDM is linked to the down-regulation of adiponectin along with Th1 cytokines and up-regulation of leptin and inflammatory cytokines. Macrosomia was associated with the up-regulation of Th1 cytokines and the down-regulation of the obesity-related agents (IL-6 and TNF-alpha, leptin, and adiponectin).",
"title": ""
},
{
"docid": "a53ab7039d47df6ee2f0de06ab069774",
"text": "Today's handheld mobile devices with advanced multimedia capabilities and wireless broadband connectivity have emerged as potential new tools for journalists to produce news articles. It is envisioned that they could enable faster, more authentic, and more efficient news production, and many large news producing organizations, including Reuters and BBC, have recently been experimenting with them. In this paper, we present a field study on using mobile devices to produce news articles. During the study, a group of 19 M.A.-level journalism students used the Mobile Journalist Toolkit, a lightweight set of tools for mobile journalist work built around the Nokia N82 camera phone, to produce an online news blog. Our results indicate that while the mobile device cannot completely replace the traditional tools, for some types of journalist tasks they provide major benefits over the traditional tools, and are thus a useful addition to the journalist's toolbox.",
"title": ""
},
{
"docid": "34c6a8fc3fed159b3eaa5e01158d1060",
"text": "Web-based malware attacks have become one of the most serious threats that need to be addressed urgently. Several approaches that have attracted attention as promising ways of detecting such malware include employing one of several blacklists. However, these conventional approaches often fail to detect new attacks owing to the versatility of malicious websites. Thus, it is difficult to maintain up-to-date blacklists with information for new malicious websites. To tackle this problem, this paper proposes a new scheme for detecting malicious websites using the characteristics of IP addresses. Our approach leverages the empirical observation that IP addresses are more stable than other metrics such as URLs and DNS records. While the strings that form URLs or DNS records are highly variable, IP addresses are less variable, i.e., IPv4 address space is mapped onto 4-byte strings. In this paper, a lightweight and scalable detection scheme that is based on machine learning techniques is developed and evaluated. The aim of this study is not to provide a single solution that effectively detects web-based malware but to develop a technique that compensates the drawbacks of existing approaches. The effectiveness of our approach is validated by using real IP address data from existing blacklists and real traffic data on a campus network. The results demonstrate that our scheme can expand the coverage/accuracy of existing blacklists and also detect unknown malicious websites that are not covered by conventional approaches.",
"title": ""
},
{
"docid": "0f56b99bc1d2c9452786c05242c89150",
"text": "Individuals with below-knee amputation have more difficulty balancing during walking, yet few studies have explored balance enhancement through active prosthesis control. We previously used a dynamical model to show that prosthetic ankle push-off work affects both sagittal and frontal plane dynamics, and that appropriate step-by-step control of push-off work can improve stability. We hypothesized that this approach could be applied to a robotic prosthesis to partially fulfill the active balance requirements of human walking, thereby reducing balance-related activity and associated effort for the person using the device. We conducted experiments on human participants (N = 10) with simulated amputation. Prosthetic ankle push-off work was varied on each step in ways expected to either stabilize, destabilize or have no effect on balance. Average ankle push-off work, known to affect effort, was kept constant across conditions. Stabilizing controllers commanded more push-off work on steps when the mediolateral velocity of the center of mass was lower than usual at the moment of contralateral heel strike. Destabilizing controllers enforced the opposite relationship, while a neutral controller maintained constant push-off work regardless of body state. A random disturbance to landing foot angle and a cognitive distraction task were applied, further challenging participants’ balance. We measured metabolic rate, foot placement kinematics, center of pressure kinematics, distraction task performance, and user preference in each condition. We expected the stabilizing controller to reduce active control of balance and balance-related effort for the user, improving user preference. The best stabilizing controller lowered metabolic rate by 5.5% (p = 0.003) and 8.5% (p = 0.02), and step width variability by 10.0% (p = 0.009) and 10.7% (p = 0.03) compared to conditions with no control and destabilizing control, respectively. Participants tended to prefer stabilizing controllers. These effects were not due to differences in average push-off work, which was unchanged across conditions, or to average gait mechanics, which were also unchanged. Instead, benefits were derived from step-by-step adjustments to prosthesis behavior in response to variations in mediolateral velocity at heel strike. Once-per-step control of prosthetic ankle push-off work can reduce both active control of foot placement and balance-related metabolic energy use during walking.",
"title": ""
},
{
"docid": "a7f0f573b28b1fb82c3cba2d782e7d58",
"text": "This paper presents a meta-analysis of theory and research about writing and writing pedagogy, identifying six discourses – configurations of beliefs and practices in relation to the teaching of writing. It introduces and explains a framework for the analysis of educational data about writing pedagogy inwhich the connections are drawn across viewsof language, viewsofwriting, views of learning towrite,approaches to the teaching of writing, and approaches to the assessment of writing. The framework can be used for identifying discourses of writing in data such as policy documents, teaching and learning materials, recordings of pedagogic practice, interviews and focus groups with teachers and learners, and media coverage of literacy education. The paper also proposes that, while there are tensions and contradictions among these discourses, a comprehensive writing pedagogy might integrate teaching approaches from all six.",
"title": ""
},
{
"docid": "48485e967c5aa345a53b91b47cc0e6d0",
"text": "The buccinator musculomucosal flaps are actually considered the main reconstructive option for small-moderate defects of the oral mucosa. In this paper we present our experience with the posteriorly based buccinator musculomucosal flap. A retrospective review was performed of all patients who had had a Bozola flap reconstruction at the Operative Unit of Maxillo-Facial Surgery of Parma, Italy, between 2003 and 2010. The Bozola flap was used in 19 patients. In most cases they had defects of the palate (n=12). All flaps were harvested successfully and no major complications occurred. Minor complications were observed in two cases. At the end of the follow up all patients returned to a normal diet without alterations of speech and swallowing. We consider the Bozola flap the first choice for the reconstruction of defects involving the palate, the cheek and the postero-lateral tongue and floor of the mouth.",
"title": ""
},
{
"docid": "7f46fd1f61c3e0d158af401ee88a2586",
"text": "Sentiment analysis becomes a very active research area in the text mining field. It aims to extract people's opinions, sentiments, and subjectivity from the texts. Sentiment analysis can be performed at three levels: at document level, at sentence level and at aspect level. An important part of research effort focuses on document level sentiment classification, including works on opinion classification of reviews. This survey paper tackles a comprehensive overview of the last update of sentiment analysis at document level. The main target of this survey is to give nearly full image of sentiment analysis application, challenges and techniques at this level. In addition, some future research issues are also presented.",
"title": ""
},
{
"docid": "662fef280f2d03ae535bfbcc06f32810",
"text": "This paper describes a voiceless speech recognition technique that utilizes dynamic visual features to represent the facial movements during phonation. The dynamic features extracted from the mouth video are used to classify utterances without using the acoustic data. The audio signals of consonants are more confusing than vowels and the facial movements involved in pronunciation of consonants are more discernible. Thus, this paper focuses on identifying consonants using visual information. This paper adopts a visual speech model that categorizes utterances into sequences of smallest visually distinguishable units known as visemes. The viseme model used is based on the viseme model of Moving Picture Experts Group 4 (MPEG-4) standard. The facial movements are segmented from the video data using motion history images (MHI). MHI is a spatio-temporal template (grayscale image) generated from the video data using accumulative image subtraction technique. The proposed approach combines discrete stationary wavelet transform (SWT) and Zernike moments to extract rotation invariant features from the MHI. A feedforward multilayer perceptron (MLP) neural network is used to classify the features based on the patterns of visible facial movements. The preliminary experimental results indicate that the proposed technique is suitable for recognition of English consonants.",
"title": ""
},
{
"docid": "011f6529db0dc1dfed11033ed3786759",
"text": "Most modern face super-resolution methods resort to convolutional neural networks (CNN) to infer highresolution (HR) face images. When dealing with very low resolution (LR) images, the performance of these CNN based methods greatly degrades. Meanwhile, these methods tend to produce over-smoothed outputs and miss some textural details. To address these challenges, this paper presents a wavelet-based CNN approach that can ultra-resolve a very low resolution face image of 16 × 16 or smaller pixelsize to its larger version of multiple scaling factors (2×, 4×, 8× and even 16×) in a unified framework. Different from conventional CNN methods directly inferring HR images, our approach firstly learns to predict the LR’s corresponding series of HR’s wavelet coefficients before reconstructing HR images from them. To capture both global topology information and local texture details of human faces, we present a flexible and extensible convolutional neural network with three types of loss: wavelet prediction loss, texture loss and full-image loss. Extensive experiments demonstrate that the proposed approach achieves more appealing results both quantitatively and qualitatively than state-ofthe- art super-resolution methods.",
"title": ""
},
{
"docid": "b00ce7fc3de34fcc31ada0f66042ef5e",
"text": "If you get the printed book in on-line book store, you may also find the same problem. So, you must move store to store and search for the available there. But, it will not happen here. The book that we will offer right here is the soft file concept. This is what make you can easily find and get this secure broadcast communication in wired and wireless networks by reading this site. We offer you the best product, always and always.",
"title": ""
},
{
"docid": "1ebd58c4d2cf14b7a674ec64370694c7",
"text": "Tarlov cysts, which develop between the endoneurium and perineurium, are perineural cysts that are defined as cerebrospinal fluid (CSF)-filled saccular lesions commonly located in the extradural space of the sacral spinal canal. They are rare, showing up in 1.5% to 4.6% of patients receiving magnetic resonance imaging (MRI) for their lumbosacral symptoms, and only 1% or less of Tarlov cysts are considered to be symptomatic. Clinical manifestation of symptomatic Tarlov cyst is non-specific and can mimic other spinal disorders: localised pain, radiculopathy, weakness, sensory disturbance, and bladder and bowel dysfunction. Although surgical interventions are proven to be effective for treating Tarlov cyst, a conservative approach is clinically preferred to avoid invasive surgery. Some clinicians reported good results with the use of steroid therapy. To the best of my knowledge, this case report is the first of its kind to use a medical acupuncture approach to manage this condition.",
"title": ""
},
{
"docid": "00e5acdfb1e388b149bc729a7af108ee",
"text": "Sleep is a growing area of research interest in medicine and neuroscience. Actually, one major concern is to find a correlation between several physiologic variables and sleep stages. There is a scientific agreement on the characteristics of the five stages of human sleep, based on EEG analysis. Nevertheless, manual stage classification is still the most widely used approach. This work proposes a new automatic sleep classification method based on unsupervised feature classification algorithms recently developed, and on EEG entropy measures. This scheme extracts entropy metrics from EEG records to obtain a feature vector. Then, these features are optimized in terms of relevance using the Q-α algorithm. Finally, the resulting set of features is entered into a clustering procedure to obtain a final segmentation of the sleep stages. The proposed method reached up to an average of 80% correctly classified stages for each patient separately while keeping the computational cost low. Entropy 2014, 16 6574",
"title": ""
},
{
"docid": "a6e062620666a4f6e88373d746d4418c",
"text": "A method for fabricating planar implantable microelectrode arrays was demonstrated using a process that relied on ultra-thin silicon substrates, which ranged in thickness from 25 to 50 μm. The challenge of handling these fragile materials was met via a temporary substrate support mechanism. In order to compensate for putative electrical shielding of extracellular neuronal fields, separately addressable electrode arrays were defined on each side of the silicon device. Deep reactive ion etching was employed to create sharp implantable shafts with lengths of up to 5 mm. The devices were flip-chip bonded onto printed circuit boards (PCBs) by means of an anisotropic conductive adhesive film. This scalable assembly technique enabled three-dimensional (3D) integration through formation of stacks of multiple silicon and PCB layers. Simulations and measurements of microelectrode noise appear to suggest that low impedance surfaces, which could be formed by electrodeposition of gold or other materials, are required to ensure an optimal signal-to-noise ratio as well a low level of interchannel crosstalk. (Some figures in this article are in colour only in the electronic version)",
"title": ""
},
{
"docid": "e7d334dbbfba465f49a924ff39ef0e1f",
"text": "Information security is important in proportion to an organization's dependence on information technology. When an organization's information is exposed to risk, the use of information security technology is obviously appropriate. Current information security technology, however, deals with only a small fraction of the problem of information risk. In fact, the evidence increasingly suggests that information security technology does not reduce information risk very effectively.This paper argues that we must reconsider our approach to information security from the ground up if we are to deal effectively with the problem of information risk, and proposes a new model inspired by the history of medicine.",
"title": ""
},
{
"docid": "36e72fe58858b4caf4860a3bba5fced4",
"text": "When operating over extended periods of time, an autonomous system will inevitably be faced with severe changes in the appearance of its environment. Coping with such changes is more and more in the focus of current robotics research. In this paper, we foster the development of robust place recognition algorithms in changing environments by describing a new dataset that was recorded during a 728 km long journey in spring, summer, fall, and winter. Approximately 40 hours of full-HD video cover extreme seasonal changes over almost 3000 km in both natural and man-made environments. Furthermore, accurate ground truth information are provided. To our knowledge, this is by far the largest SLAM dataset available at the moment. In addition, we introduce an open source Matlab implementation of the recently published SeqSLAM algorithm and make it available to the community. We benchmark SeqSLAM using the novel dataset and analyse the influence of important parameters and algorithmic steps.",
"title": ""
},
{
"docid": "bf5f3aedb8eadc7c9b12b6d670f93c49",
"text": "Deep neural networks are facing a potential security threat from adversarial examples, inputs that look normal but cause an incorrect classification by the deep neural network. For example, the proposed threat could result in hand-written digits on a scanned check being incorrectly classified but looking normal when humans see them. This research assesses the extent to which adversarial examples pose a security threat, when one considers the normal image acquisition process. This process is mimicked by simulating the transformations that normally occur in of acquiring the image in a real world application, such as using a scanner to acquire digits for a check amount or using a camera in an autonomous car. These small transformations negate the effect of the carefully crafted perturbations of adversarial examples, resulting in a correct classification by the deep neural network. Thus just acquiring the image decreases the potential impact of the proposed security threat. We also show that the already widely used process of averaging over multiple crops neutralizes most adversarial examples. Normal preprocessing, such as text binarization, almost completely neutralizes adversarial examples. This is the first paper to show that for text driven classification, adversarial examples are an academic curiosity, not a security threat.",
"title": ""
},
{
"docid": "083989d115f6942b362c06936b2775ea",
"text": "In humans, nearly two meters of genomic material must be folded to fit inside each micrometer-scale cell nucleus while remaining accessible for gene transcription, DNA replication, and DNA repair. This fact highlights the need for mechanisms governing genome organization during any activity and to maintain the physical organization of chromosomes at all times. Insight into the functions and three-dimensional structures of genomes comes mostly from the application of visual techniques such as fluorescence in situ hybridization (FISH) and molecular approaches including chromosome conformation capture (3C) technologies. Recent developments in both types of approaches now offer the possibility of exploring the folded state of an entire genome and maybe even the identification of how complex molecular machines govern its shape. In this review, we present key methodologies used to study genome organization and discuss what they reveal about chromosome conformation as it relates to transcription regulation across genomic scales in mammals.",
"title": ""
},
{
"docid": "20e19999be17bce4ba3ae6d94400ba3c",
"text": "Due to the coarse granularity of data accesses and the heavy use of latches, indices in the B-tree family are not efficient for in-memory databases, especially in the context of today's multi-core architecture. In this paper, we study the parallelizability of skip lists for the parallel and concurrent environment, and present PSL, a Parallel in-memory Skip List that lends itself naturally to the multi-core environment, particularly with non-uniform memory access. For each query, PSL traverses the index in a Breadth-First-Search (BFS) to find the list node with the matching key, and exploits SIMD processing to speed up this process. Furthermore, PSL distributes incoming queries among multiple execution threads disjointly and uniformly to eliminate the use of latches and achieve a high parallelizability. The experimental results show that PSL is comparable to a readonly index, FAST, in terms of read performance, and outperforms ART and Masstree respectively by up to 30% and 5x for a variety of workloads.",
"title": ""
}
] |
scidocsrr
|
fad3ea6cb8575f878b0d27d6f68a1c39
|
Object Detection in Optical Remote Sensing Images Based on Weakly Supervised Learning and High-Level Feature Learning
|
[
{
"docid": "28fd803428e8f40a4627e05a9464e97b",
"text": "We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. These include an innovative cue to measure the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure, and the combined objectness measure to perform better than any cue alone. We also compare to interest point operators, a HOG detector, and three recent works aiming at automatic object segmentation. Finally, we present two applications of objectness. In the first, we sample a small numberof windows according to their objectness probability and give an algorithm to employ them as location priors for modern class-specific object detectors. As we show experimentally, this greatly reduces the number of windows evaluated by the expensive class-specific model. In the second application, we use objectness as a complementary score in addition to the class-specific model, which leads to fewer false positives. As shown in several recent papers, objectness can act as a valuable focus of attention mechanism in many other applications operating on image windows, including weakly supervised learning of object categories, unsupervised pixelwise segmentation, and object tracking in video. Computing objectness is very efficient and takes only about 4 sec. per image.",
"title": ""
}
] |
[
{
"docid": "d8bf55d8a2aaa1061310f3d976a87c57",
"text": "characterized by four distinguishable interface styles, each lasting for many years and optimized to the hardware available at the time. In the first period, the early 1950s and 1960s, computers were used in batch mode with punched-card input and line-printer output; there were essentially no user interfaces because there were no interactive users (although some of us were privileged to be able to do console debugging using switches and lights as our “user interface”). The second period in the evolution of interfaces (early 1960s through early 1980s) was the era of timesharing on mainframes and minicomputers using mechanical or “glass” teletypes (alphanumeric displays), when for the first time users could interact with the computer by typing in commands with parameters. Note that this era persisted even through the age of personal microcomputers with such operating systems as DOS and Unix with their command line shells. During the 1970s, timesharing and manual command lines remained deeply entrenched, but at Xerox PARC the third age of user interfaces dawned. Raster graphics-based networked workstations and “pointand-click” WIMP GUIs (graphical user interfaces based on windows, icons, menus, and a pointing device, typically a mouse) are the legacy of Xerox PARC that we’re still using today. WIMP GUIs were popularized by the Macintosh in 1984 and later copied by Windows on the PC and Motif on Unix workstations. Applications today have much the same look and feel as the early desktop applications (except for the increased “realism” achieved through the use of drop shadows for buttons and other UI widgets); the main advance lies in the shift from monochrome displays to color and in a large set of software-engineering tools for building WIMP interfaces. I find it rather surprising that the third generation of WIMP user interfaces has been so dominant for more than two decades; they are apparently sufficiently good for conventional desktop tasks that the field is stuck comfortably in a rut. I argue in this essay that the status quo does not suffice—that the newer forms of computing and computing devices available today necessitate new thinking t h e h u m a n c o n n e c t i o n Andries van Dam",
"title": ""
},
{
"docid": "bb5d8fe879b3b75dd7f3af56e51ff7b9",
"text": "Management of huge amount of data has always been a matter of concern. With the increase in awareness towards education, the amount of data in educational institutes is also increasing. The increasing growth of educational databases, have given rise to a new field of data mining, known as Educational Data Mining (EDM). With the help of this one can predict the academic performance of a student that can help the students, their instructors and also their guardians to take necessary actions beforehand to improve the future performance of a student. This paper deals with the implementation of ID3 decision tree algorithm to build a predictive model based on the previous performances of a student. The dataset used in this paper is the semester data of the students of a private institute of India. Rapidminer, an open source software platform is used to obtain the results.",
"title": ""
},
{
"docid": "39e550b269a66f31d467269c6389cde0",
"text": "The artificial intelligence community has seen a recent resurgence in the area of neural network study. Inspired by the workings of the brain and nervous system, neural networks have solved some persistent problems in vision and speech processing. However, the new systems may offer an alternative approach to decision-making via high level pattern recognition. This paper will describe the distinguishing features of neurally inspired systems, and present popular systems in a discrete-time, algorithmic framework. Examples of applications to decision problems will appear, and guidelines for their use in operations research will be established.",
"title": ""
},
{
"docid": "54f95cef02818cb4eb86339ee12a8b07",
"text": "The problem of discontinuities in broadband multisection coupled-stripline 3-dB directional couplers, phase shifters, high-pass tapered-line 3-dB directional couplers, and magic-T's, regarding the connections of coupled and terminating signal lines, is comprehensively investigated in this paper for the first time. The equivalent circuit of these discontinuities proposed in Part I has been used for accurate modeling of the broadband multisection and ultra-broadband high-pass coupled-stripline circuits. It has been shown that parasitic reactances, which result from the connections of signal and coupled lines, severely deteriorate the return losses and the isolation of such circuits and also-in case of tapered-line directional couplers-the coupling responses. Moreover, it has been proven theoretically and experimentally that these discontinuity effects can be substantially reduced by introducing compensating shunt capacitances in a number of cross sections of coupled and signal lines. Results of measurements carried out for various designed and manufactured coupled-line circuits have been very promising and have proven the efficiency of the proposed broadband compensation technique. The theoretical and measured data are given for the following coupled-stripline circuits: a decade-bandwidth asymmetric three-section 3-dB directional coupler, a decade-bandwidth three-section phase-shifter compensator, and a high-pass asymmetric tapered-line 3-dB coupler",
"title": ""
},
{
"docid": "a9595ea31ebfe07ac9d3f7fccf0d1c05",
"text": "The growing movement of biologically inspired design is driven in part by the need for sustainable development and in part by the recognition that nature could be a source of innovation. Biologically inspired design by definition entails cross-domain analogies from biological systems to problems in engineering and other design domains. However, the practice of biologically inspired design at present typically is ad hoc, with little systemization of either biological knowledge for the purposes of engineering design or the processes of transferring knowledge of biological designs to engineering problems. In this paper we present an intricate episode of biologically inspired engineering design that unfolded over an extended period of time. We then analyze our observations in terms of why, what, how, and when questions of analogy. This analysis contributes toward a content theory of creative analogies in the context of biologically inspired design.",
"title": ""
},
{
"docid": "615d2f03b2ff975242e90103e98d70d3",
"text": "The insurance industries consist of more than thousand companies in worldwide. And collect more than one trillions of dollars premiums in each year. When a person or entity make false insurance claims in order to obtain compensation or benefits to which they are not entitled is known as an insurance fraud. The total cost of an insurance fraud is estimated to be more than forty billions of dollars. So detection of an insurance fraud is a challenging problem for the insurance industry. The traditional approach for fraud detection is based on developing heuristics around fraud indicator. The auto\\vehicle insurance fraud is the most prominent type of insurance fraud, which can be done by fake accident claim. In this paper, focusing on detecting the auto\\vehicle fraud by using, machine learning technique. Also, the performance will be compared by calculation of confusion matrix. This can help to calculate accuracy, precision, and recall.",
"title": ""
},
{
"docid": "21d1dedce1395899170c6cd573a9a154",
"text": "This paper explores the ability of theoretically-based asset pricing models such as the CAPM and the consumption CAPM referred to jointly as the (C)CAPM to explain the cross-section of average stock returns. Unlike many previous empirical tests of the (C)CAPM, we specify the pricing kernel as a conditional linear factor model, as would be expected if risk premia vary over time. Central to our approach is the use of a conditioning variable which proxies for fluctuations in the log consumption-aggregate wealth ratio and is likely to be important for summarizing conditional expectations of excess returns. We demonstrate that such conditional factor models are able to explain a substantial fraction of the cross-sectional variation in portfolio returns. These models perform much better than unconditional (C)CAPM specifications, and about as well as the three-factor Fama-French model on portfolios sorted by size and book-to-market ratios. This specification of the linear conditional consumption CAPM, using aggregate consumption data, is able to account for the difference in returns between low book-to-market and high book-to-market firms and exhibits little evidence of residual size or book-to-market effects. (JEL G10, E21)",
"title": ""
},
{
"docid": "89a7eea800f411107993ea85987a64c5",
"text": "Multi-aspect data appear frequently in many web-related applications. For example, product reviews are quadruplets of (user, product, keyword, timestamp). How can we analyze such web-scale multi-aspect data? Can we analyze them on an off-the-shelf workstation with limited amount of memory?\n Tucker decomposition has been widely used for discovering patterns in relationships among entities in multi-aspect data, naturally expressed as high-order tensors. However, existing algorithms for Tucker decomposition have limited scalability, and especially, fail to decompose high-order tensors since they explicitly materialize intermediate data, whose size rapidly grows as the order increases (≥ 4). We call this problem M-Bottleneck (\"Materialization Bottleneck\").\n To avoid M-Bottleneck, we propose S-HOT, a scalable high-order tucker decomposition method that employs the on-the-fly computation to minimize the materialized intermediate data. Moreover, S-HOT is designed for handling disk-resident tensors, too large to fit in memory, without loading them all in memory at once. We provide theoretical analysis on the amount of memory space and the number of scans of data required by S-HOT. In our experiments, S-HOT showed better scalability not only with the order but also with the dimensionality and the rank than baseline methods. In particular, S-HOT decomposed tensors 1000× larger than baseline methods in terms dimensionality. S- HOT also successfully analyzed real-world tensors that are both large-scale and high-order on an off-the-shelf workstation with limited amount of memory, while baseline methods failed. The source code of S-HOT is publicly available at http://dm.postech.ac.kr/shot to encourage reproducibility.",
"title": ""
},
{
"docid": "413112cc78df9fac45a254c74049f724",
"text": "We are developing compact, high-power chargers for rapid charging of energy storage capacitors. The main application is presently rapid charging of the capacitors inside of compact Marx generators for reprated operation. Compact Marx generators produce output pulses with amplitudes above 300 kV with ns or subns rise-times. A typical application is the generation of high power microwaves. Initially all energy storage capacitors in a Marx generator are charged in parallel. During the so-called erection cycle, the capacitors are connected in series. The charging voltage in the parallel configuration is around 40-50 kV. The input voltage of our charger is in the range of several hundred volts. Rapid charging of the capacitors in the parallel configuration will enable a high pulse repetition-rate of the compact Marx generator. The high power charger uses state-of-the-art IGBTs (isolated gate bipolar transistors) in an H-bridge topology and a compact, high frequency transformer. The IGBTs and the associated controls are packaged for minimum weight and maximum power density. The packaging and device selection makes use of burst mode operation (thermal inertia) of the charger. The present charger is considerably smaller than the one presented in Giesselmann, M et al., (2001).",
"title": ""
},
{
"docid": "cf26ade7932ba0c5deb01e4b3d2463bb",
"text": "Researchers are often confused about what can be inferred from significance tests. One problem occurs when people apply Bayesian intuitions to significance testing-two approaches that must be firmly separated. This article presents some common situations in which the approaches come to different conclusions; you can see where your intuitions initially lie. The situations include multiple testing, deciding when to stop running participants, and when a theory was thought of relative to finding out results. The interpretation of nonsignificant results has also been persistently problematic in a way that Bayesian inference can clarify. The Bayesian and orthodox approaches are placed in the context of different notions of rationality, and I accuse myself and others as having been irrational in the way we have been using statistics on a key notion of rationality. The reader is shown how to apply Bayesian inference in practice, using free online software, to allow more coherent inferences from data.",
"title": ""
},
{
"docid": "5a11ab9ece5295d4d1d16401625ab3d4",
"text": "The hardware implementation of deep neural networks (DNNs) has recently received tremendous attention since many applications require high-speed operations. However, numerous processing elements and complex interconnections are usually required, leading to a large area occupation and a high power consumption. Stochastic computing has shown promising results for area-efficient hardware implementations, even though existing stochastic algorithms require long streams that exhibit long latency. In this paper, we propose an integer form of stochastic computation and introduce some elementary circuits. We then propose an efficient implementation of a DNN based on integral stochastic computing. The proposed architecture uses integer stochastic streams and a modified Finite State Machine-based tanh function to improve the performance and reduce the latency compared to existing stochastic architectures for DNN. The simulation results show the negligible performance loss of the proposed integer stochastic DNN for different network sizes compared to their floating point versions.",
"title": ""
},
{
"docid": "fb0875ee874dc0ada51d0097993e16c8",
"text": "The literature on testing effects is vast but supports surprisingly few prescriptive conclusions for how to schedule practice to achieve both durable and efficient learning. Key limitations are that few studies have examined the effects of initial learning criterion or the effects of relearning, and no prior research has examined the combined effects of these 2 factors. Across 3 experiments, 533 students learned conceptual material via retrieval practice with restudy. Items were practiced until they were correctly recalled from 1 to 4 times during an initial learning session and were then practiced again to 1 correct recall in 1-5 subsequent relearning sessions (across experiments, more than 100,000 short-answer recall responses were collected and hand-scored). Durability was measured by cued recall and rate of relearning 1-4 months after practice, and efficiency was measured by total practice trials across sessions. A consistent qualitative pattern emerged: The effects of initial learning criterion and relearning were subadditive, such that the effects of initial learning criterion were strong prior to relearning but then diminished as relearning increased. Relearning had pronounced effects on long-term retention with a relatively minimal cost in terms of additional practice trials. On the basis of the overall patterns of durability and efficiency, our prescriptive conclusion for students is to practice recalling concepts to an initial criterion of 3 correct recalls and then to relearn them 3 times at widely spaced intervals.",
"title": ""
},
{
"docid": "e065cabd0cc5e95493a3ede4e3d1eeee",
"text": "In this paper we introduced an alternative view of text mining and we review several alternative views proposed by different authors. We propose a classification of text mining techniques into two main groups: techniques based on inductive inference, that we call text data mining (TDM, comprising most of the existing proposals in the literature), and techniques based on deductive or abductive inference, that we call text knowledge mining (TKM). To our knowledge, the TKM view of text mining is new though, as we shall show, several existing techniques could be considered in this group. We discuss about the possibilities and challenges of TKM techniques. We also discuss about the application of existing theories in possible future research in this field.",
"title": ""
},
{
"docid": "e3a4a8470fe3fdbd8f49386ee39de8d4",
"text": "This paper studies the problem of categorical data clustering, especially for transactional data characterized by high dimensionality and large volume. Starting from a heuristic method of increasing the height-to-width ratio of the cluster histogram, we develop a novel algorithm -- CLOPE, which is very fast and scalable, while being quite effective. We demonstrate the performance of our algorithm on two real world datasets, and compare CLOPE with the state-of-art algorithms.",
"title": ""
},
{
"docid": "57390f3fdf19f09d127a53e74337fe06",
"text": "As a competitor for Li4Ti5O12 with a higher capacity and extreme safety, monoclinic TiNb2O7 has been considered as a promising anode material for next-generation high power lithium ion batteries. However, TiNb2O7 suffers from low electronic conductivity and ionic conductivity, which restricts the electrochemical kinetics. Herein, a facile and advanced architecture design of hierarchical TiNb2O7 microspheres is successfully developed for large-scale preparation without any surfactant assistance. To the best of our knowledge, this is the first report on the one step solvothermal synthesis of TiNb2O7 microspheres with micro- and nano-scale composite structures. When evaluated as an anode material for lithium ion batteries, the electrode exhibits excellent high rate capacities and ultra-long cyclability, such as 258 mA h g(-1) at 1 C, 175 mA h g(-1) at 5 C, and 138 mA h g(-1) at 10 C, extending to more than 500 cycles.",
"title": ""
},
{
"docid": "73577f4be1e148387ce747546c31b161",
"text": "Previous models for the assessment of commitment towards a predicate in a sentence (also known as factuality prediction) were trained and tested against a specific annotated dataset, subsequently limiting the generality of their results. In this work we propose an intuitive method for mapping three previously annotated corpora onto a single factuality scale, thereby enabling models to be tested across these corpora. In addition, we design a novel model for factuality prediction by first extending a previous rule-based factuality prediction system and applying it over an abstraction of dependency trees, and then using the output of this system in a supervised classifier. We show that this model outperforms previous methods on all three datasets. We make both the unified factuality corpus and our new model publicly available.",
"title": ""
},
{
"docid": "c993d3a77bcd272e8eadc66155ee15e1",
"text": "This paper presents animated pose templates (APTs) for detecting short-term, long-term, and contextual actions from cluttered scenes in videos. Each pose template consists of two components: 1) a shape template with deformable parts represented in an And-node whose appearances are represented by the Histogram of Oriented Gradient (HOG) features, and 2) a motion template specifying the motion of the parts by the Histogram of Optical-Flows (HOF) features. A shape template may have more than one motion template represented by an Or-node. Therefore, each action is defined as a mixture (Or-node) of pose templates in an And-Or tree structure. While this pose template is suitable for detecting short-term action snippets in two to five frames, we extend it in two ways: 1) For long-term actions, we animate the pose templates by adding temporal constraints in a Hidden Markov Model (HMM), and 2) for contextual actions, we treat contextual objects as additional parts of the pose templates and add constraints that encode spatial correlations between parts. To train the model, we manually annotate part locations on several keyframes of each video and cluster them into pose templates using EM. This leaves the unknown parameters for our learning algorithm in two groups: 1) latent variables for the unannotated frames including pose-IDs and part locations, 2) model parameters shared by all training samples such as weights for HOG and HOF features, canonical part locations of each pose, coefficients penalizing pose-transition and part-deformation. To learn these parameters, we introduce a semi-supervised structural SVM algorithm that iterates between two steps: 1) learning (updating) model parameters using labeled data by solving a structural SVM optimization, and 2) imputing missing variables (i.e., detecting actions on unlabeled frames) with parameters learned from the previous step and progressively accepting high-score frames as newly labeled examples. This algorithm belongs to a family of optimization methods known as the Concave-Convex Procedure (CCCP) that converge to a local optimal solution. The inference algorithm consists of two components: 1) Detecting top candidates for the pose templates, and 2) computing the sequence of pose templates. Both are done by dynamic programming or, more precisely, beam search. In experiments, we demonstrate that this method is capable of discovering salient poses of actions as well as interactions with contextual objects. We test our method on several public action data sets and a challenging outdoor contextual action data set collected by ourselves. The results show that our model achieves comparable or better performance compared to state-of-the-art methods.",
"title": ""
},
{
"docid": "cc220d8ae1fa77b9e045022bef4a6621",
"text": "Cuneiform tablets appertain to the oldest textual artifacts and are in extent comparable to texts written in Latin or ancient Greek. The Cuneiform Commentaries Project (CPP) from Yale University provides tracings of cuneiform tablets with annotated transliterations and translations. As a part of our work analyzing cuneiform script computationally with 3D-acquisition and word-spotting, we present a first approach for automatized learning of transliterations of cuneiform tablets based on a corpus of parallel lines. These consist of manually drawn cuneiform characters and their transliteration into an alphanumeric code. Since the Cuneiform script is only available as raster-data, we segment lines with a projection profile, extract Histogram of oriented Gradients (HoG) features, detect outliers caused by tablet damage, and align those features with the transliteration. We apply methods from part-of-speech tagging to learn a correspondence between features and transliteration tokens. We evaluate point-wise classification with K-Nearest Neighbors (KNN) and a Support Vector Machine (SVM); sequence classification with a Hidden Markov Model (HMM) and a Structured Support Vector Machine (SVM-HMM). Analyzing our findings, we reach the conclusion that the sparsity of data, inconsistent labeling and the variety of tracing styles do currently not allow for fully automatized transliterations with the presented approach. However, the pursuit of automated learning of transliterations is of great relevance as manual annotation in larger quantities is not viable, given the few experts capable of transcribing cuneiform tablets.",
"title": ""
},
{
"docid": "7ebbc954715513958c88741b7c195403",
"text": "This paper deals with the application of OCR methods to historical printings of Latin texts. Whereas the problem of recognizing historical printings of modern languages has been the subject of the IMPACT program, Latin has not yet been given any serious consideration despite the fact that it dominated literature production in Europe up to the 17th century. Using finite state tools and methods developed during the IMPACT program we show that efficent batch-oriented post-correction can work for Latin as well, and that a lexicon of historical Latin spelling variants can be constructed to aid in the correction phase. Initial experiments for the OCR engines Tesseract and OCRopus show that some training on historical fonts and the application of lexical resources raise character accuracies beyond those of Finereader and that accuracies above 90% may be expected even for 16th century material.",
"title": ""
},
{
"docid": "caf2fa85a302c289decab3a2a5b56566",
"text": "Cross-domain research topic mining can help users find relationships among related research domains and obtain a quick overview of these domains. This study investigates the evolution of crossdomain topics of three interdisciplinary research domains and uses a visual analytic approach to determine unique topics for each domain. This study also focuses on topic evolution over 10 years and on individual topics of cross domains. A hierarchical topic model is adopted to extract topics of three different domains and to correlate the extracted topics. A simple yet effective visualization interface is then designed, and certain interaction operations are provided to help users more deeply understand the visualization development trend and the correlation among the three domains. Finally, a case study is conducted to demonstrate the effectiveness of the proposed method.",
"title": ""
}
] |
scidocsrr
|
7fce391cdd755ae26b58fa40465b2870
|
A Nonlinear Control Law for Hover to Level Flight for the Quad Tiltrotor UAV
|
[
{
"docid": "19640a92b42bf5c6972ba14f2d0821bb",
"text": "This paper describes the modeling, control and hardware implementation of an experimental tilt-rotor aircraft. This vehicle combines the high-speed cruise capabilities of a conventional airplane with the hovering capabilities of a helicopter by tilting their four rotors. Changing between cruise and hover flight modes in midair is referred to transition. Dynamic model of the vehicle is derived both for vertical and horizontal flight modes using Newtonian approach. Two nonlinear control strategies are presented and evaluated at simulation level to control, the vertical and horizontal flight dynamics of the vehicle in the longitudinal plane. An experimental prototype named Quad-plane was developed to perform the vertical flight. A low-cost DSP-based Embedded Flight Control System (EFCS) was deThis work was partially supported by the Institute for Science & Technology of Mexico City (ICyTDF) and the French National Centre for Scientific Research (CNRS). G. R. Flores (B) · J. Escareño · R. Lozano Heudiasyc Laboratory, University of Technology of Compiègne, CNRS 6599, 60205 Compiègne, France e-mail: gfloresc@hds.utc.fr S. Salazar French-Mexican Laboratory on Computer Science and Control, CINVESTAV, Mexico City, Mexico signed and built to achieve autonomous attitudestabilized flight.",
"title": ""
}
] |
[
{
"docid": "4b75c7158f6c20542385d08eca9bddb3",
"text": "PURPOSE\nExtraarticular manifestations of the joint hypermobility syndrome may include the peripheral nervous system. The purpose of this study was to investigate autonomic function in patients with this syndrome.\n\n\nMETHODS\nForty-eight patients with the joint hypermobility syndrome who fulfilled the 1998 Brighton criteria and 30 healthy control subjects answered a clinical questionnaire designed to evaluate the frequency of complaints related to the autonomic nervous system. Next, 27 patients and 21 controls underwent autonomic evaluation: orthostatic testing, cardiovascular vagal and sympathetic functions, catecholamine levels, and adrenoreceptor responsiveness.\n\n\nRESULTS\nSymptoms related to the autonomic nervous system, such as syncope and presyncope, palpitations, chest discomfort, fatigue, and heat intolerance, were significantly more common among patients. Orthostatic hypotension, postural orthostatic tachycardia syndrome, and uncategorized orthostatic intolerance were found in 78% (21/27) of patients compared with in 10% (2/21) of controls. Patients with the syndrome had a greater mean (+/- SD) drop in systolic blood pressure during hyperventilation than did controls (-11 +/- 7 mm Hg vs. -5 +/- 5 mm Hg, P = 0.02) and a greater increase in systolic blood pressure after a cold pressor test (19 +/- 10 mm Hg vs. 11 +/- 13 mm Hg, P = 0.06). Patients with the syndrome also had evidence of alpha-adrenergic (as assessed by administration of phenylephrine) and beta-adrenergic hyperresponsiveness (as assessed by administration of isoproterenol).\n\n\nCONCLUSION\nThe autonomic nervous system-related symptoms of the patients have a pathophysiological basis, which suggests that dysautonomia is an extraarticular manifestation in the joint hypermobility syndrome.",
"title": ""
},
{
"docid": "8108c37cc3f3160c78252fcfbeb8d2f2",
"text": "It is well understood that the pancreas has two distinct roles: the endocrine and exocrine functions, that are functionally and anatomically closely related. As specialists in diabetes care, we are adept at managing pancreatic endocrine failure and its associated complications. However, there is frequent overlap and many patients with diabetes also suffer from exocrine insufficiency. Here we outline the different causes of exocrine failure, and in particular that associated with type 1 and type 2 diabetes and how this differs from diabetes that is caused by pancreatic exocrine disease: type 3c diabetes. Copyright © 2017 John Wiley & Sons. Practical Diabetes 2017; 34(6): 200–204",
"title": ""
},
{
"docid": "d311bfc22c30e860c529b2aeb16b6d40",
"text": "We study the emergence of communication in multiagent adversarial settings inspired by the classic Imitation game. A class of three player games is used to explore how agents based on sequence to sequence (Seq2Seq) models can learn to communicate information in adversarial settings. We propose a modeling approach, an initial set of experiments and use signaling theory to support our analysis. In addition, we describe how we operationalize the learning process of actor-critic Seq2Seq based agents in these communicational games.",
"title": ""
},
{
"docid": "18cf88b01ff2b20d17590d7b703a41cb",
"text": "Human age provides key demographic information. It is also considered as an important soft biometric trait for human identification or search. Compared to other pattern recognition problems (e.g., object classification, scene categorization), age estimation is much more challenging since the difference between facial images with age variations can be more subtle and the process of aging varies greatly among different individuals. In this work, we investigate deep learning techniques for age estimation based on the convolutional neural network (CNN). A new framework for age feature extraction based on the deep learning model is built. Compared to previous models based on CNN, we use feature maps obtained in different layers for our estimation work instead of using the feature obtained at the top layer. Additionally, a manifold learning algorithm is incorporated in the proposed scheme and this improves the performance significantly. Furthermore, we also evaluate different classification and regression schemes in estimating age using the deep learned aging pattern (DLA). To the best of our knowledge, this is the first time that deep learning technique is introduced and applied to solve the age estimation problem. Experimental results on two datasets show that the proposed approach is significantly better than the state-of-the-art.",
"title": ""
},
{
"docid": "8f88620de9b4a4d8702eaf3d962e7326",
"text": "To have automatic conversations between human and computer is regarded as one of the most hardcore problems in computer science. Conversational systems are of growing importance due to their promising potentials and commercial values as virtual assistants and chatbots. To build such systems with adequate intelligence is challenging, and requires abundant resources including an acquisition of big conversational data and interdisciplinary techniques, such as content analysis, text mining, and retrieval. The arrival of big data era reveals the feasibility to create a conversational system empowered by data-driven approaches. Now we are able to collect an extremely large number of human-human conversations on Web, and organize them to launch human-computer conversational systems. Given a human issued utterance, i.e., a query, a conversational system will search for appropriate responses, conduct relevance ranking using contexts information, and then output the highly relevant result. In this paper, we propose a novel context modeling framework with end-to-end neural networks for human-computer conversational systems. The proposed model is general and unified. In the experiments, we demonstrate the effectiveness of the proposed model for human-computer conversations using p@1, MAP, nDCG, and MRR metrics.",
"title": ""
},
{
"docid": "f033c98f752c8484dc616425ebb7ce5b",
"text": "Ethnography is the study of social interactions, behaviours, and perceptions that occur within groups, teams, organisations, and communities. Its roots canbe traced back to anthropological studies of small, rural (andoften remote) societies thatwereundertaken in the early 1900s, when researchers such as Bronislaw Malinowski and Alfred Radcliffe-Brown participated in these societies over long periods and documented their social arrangements and belief systems. This approach was later adopted by members of the Chicago School of Sociology (for example, Everett Hughes, Robert Park, Louis Wirth) and applied to a variety of urban settings in their studies of social life. The central aim of ethnography is to provide rich, holistic insights into people’s views and actions, as well as the nature (that is, sights, sounds) of the location they inhabit, through the collection of detailed observations and interviews. As Hammersley states, “The task [of ethnographers] is to document the culture, the perspectives and practices, of the people in these settings.The aim is to ‘get inside’ theway each groupof people sees theworld.” Box 1 outlines the key features of ethnographic research. Examples of ethnographic researchwithin thehealth services literature include Strauss’s study of achieving and maintaining order between managers, clinicians, and patients within psychiatric hospital settings; Taxis and Barber’s exploration of intravenous medication errors in acute care hospitals; Costello’s examination of death and dying in elderly care wards; and Østerlund’s work on doctors’ and nurses’ use of traditional and digital information systems in their clinical communications. Becker and colleagues’ Boys in White, an ethnographic study of medical education in the late 1950s, remains a classic in this field. Newer developments in ethnographic inquiry include auto-ethnography, in which researchers’ own thoughts andperspectives fromtheir social interactions form the central element of a study; meta-ethnography, in which qualitative research texts are analysed and synthesised to empirically create new insights and knowledge; and online (or virtual) ethnography, which extends traditional notions of ethnographic study from situated observation and face to face researcher-participant interaction to technologically mediated interactions in online networks and communities.",
"title": ""
},
{
"docid": "f5bea5413ad33191278d7630a7e18e39",
"text": "Speech activity detection (SAD) on channel transmissions is a critical preprocessing task for speech, speaker and language recognition or for further human analysis. This paper presents a feature combination approach to improve SAD on highly channel degraded speech as part of the Defense Advanced Research Projects Agency’s (DARPA) Robust Automatic Transcription of Speech (RATS) program. The key contribution is the feature combination exploration of different novel SAD features based on pitch and spectro-temporal processing and the standard Mel Frequency Cepstral Coefficients (MFCC) acoustic feature. The SAD features are: (1) a GABOR feature representation, followed by a multilayer perceptron (MLP); (2) a feature that combines multiple voicing features and spectral flux measures (Combo); (3) a feature based on subband autocorrelation (SAcC) and MLP postprocessing and (4) a multiband comb-filter F0 (MBCombF0) voicing measure. We present single, pairwise and all feature combinations, show high error reductions from pairwise feature level combination over the MFCC baseline and show that the best performance is achieved by the combination of all features.",
"title": ""
},
{
"docid": "d17fa50cbcc0858d99e8833f78a5d96b",
"text": "Cryptographic hash functions are a useful building block for several cryptographic applications. The most important are certainly the protection of information authentication and digital signatures. This overview paper will discuss the definitions, describe some attacks on hash functions, and will give an overview of the existing practical constructions.",
"title": ""
},
{
"docid": "1e474d00718f2937f9600b64dd84b642",
"text": "Abstract Background: Allergies manifest in various forms, which can be mild and unnoticeable to life threatening anaphylaxis. To pin point the exact agent causing the allergy is a challenge. Case Description: A 6 year old boy presented with pain and burning sensation in the right side of the mouth since 10 days. His mother gave a history of visiting a dentist 1 month back where he was prescribed fluoridated pediatric toothpaste. On using the toothpaste the patient noticed occurrence of small fluid filled boils in the right cheek region and the right side of the tongue which slowly increased in size and turned flat. Since 10 days the pain is severe and causing difficulty to open the mouth, talk and eat. Clinical implications: Dentists should exercise care when prescribing any new products to patients and be aware of their allergic potential. Also any allergic manifestations should be recognized early and managed appropriately.",
"title": ""
},
{
"docid": "627b14801c8728adf02b75e8eb62896f",
"text": "In the 45 years since Cattell used English trait terms to begin the formulation of his \"description of personality,\" a number of investigators have proposed an alternative structure based on 5 orthogonal factors. The generality of this 5-factor model is here demonstrated across unusually comprehensive sets of trait terms. In the first of 3 studies, 1,431 trait adjectives grouped into 75 clusters were analyzed; virtually identical structures emerged in 10 replications, each based on a different factor-analytic procedure. A 2nd study of 479 common terms grouped into 133 synonym clusters revealed the same structure in 2 samples of self-ratings and in 2 samples of peer ratings. None of the factors beyond the 5th generalized across the samples. In the 3rd study, analyses of 100 clusters derived from 339 trait terms suggest their potential utility as Big-Five markers in future studies.",
"title": ""
},
{
"docid": "c86c10428bfca028611a5e989ca31d3f",
"text": "In the study, we discussed the ARCH/GARCH family models and enhanced them with artificial neural networks to evaluate the volatility of daily returns for 23.10.1987–22.02.2008 period in Istanbul Stock Exchange. We proposed ANN-APGARCH model to increase the forecasting performance of APGARCH model. The ANN-extended versions of the obtained GARCH models improved forecast results. It is noteworthy that daily returns in the ISE show strong volatility clustering, asymmetry and nonlinearity characteristics. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e9b7b72ec1feac5f164b4d64d103738d",
"text": "The success of the supervised classification of remotely sensed images acquired over large geographical areas or at short time intervals strongly depends on the representativity of the samples used to train the classification algorithm and to define the model. When training samples are collected from an image or a spatial region that is different from the one used for mapping, spectral shifts between the two distributions are likely to make the model fail. Such shifts are generally due to differences in acquisition and atmospheric conditions or to changes in the nature of the object observed. To design classification methods that are robust to data set shifts, recent remote sensing literature has considered solutions based on domain adaptation (DA) approaches. Inspired by machine-learning literature, several DA methods have been proposed to solve specific problems in remote sensing data classification. This article provides a critical review of the recent advances in DA approaches for remote sensing and presents an overview of DA methods divided into four categories: 1) invariant feature selection, 2) representation matching, 3) adaptation of classifiers, and 4) selective sampling. We provide an overview of recent methodologies, examples of applications of the considered techniques to real remote sensing images characterized by very high spatial and spectral resolution as well as possible guidelines for the selection of the method to use in real application scenarios.",
"title": ""
},
{
"docid": "e1a18dfd191c0708565481b2c9decd6e",
"text": "The emergence of co-processors such as Intel Many Integrated Cores (MICs) is changing the landscape of supercomputing. The MIC is a memory constrained environment and its processors also operate at slower clock rates. Furthermore, the communication characteristics between MIC processes are also different compared to communication between host processes. Communication libraries that do not consider these architectural subtleties cannot deliver good communication performance. The performance of MPI collective operations strongly affect the performance of parallel applications. Owing to the challenges introduced by the emerging heterogeneous systems, it is critical to fundamentally re-design collective algorithms to ensure that applications can fully leverage the MIC architecture. In this paper, we propose a generic framework to optimize the performance of important collective operations, such as, MPI Bcast, MPI Reduce and MPI Allreduce, on Intel MIC clusters. We also present a detailed analysis of the compute phases in reduce operations for MIC clusters. To the best of our knowledge, this is the first paper to propose novel designs to improve the performance of collectives on MIC clusters. Our designs improve the latency of the MPI Bcast operation with 4,864 MPI processes by up to 76%. We also observe up to 52.4% improvements in the communication latency of the MPI Allreduce operation with 2K MPI processes on heterogeneous MIC clusters. Our designs also improve the execution time of the WindJammer application by up to 16%.",
"title": ""
},
{
"docid": "f117503bf48ea9ddf575dedf196d3bcd",
"text": "In recent years, prison officials have increasingly turned to solitary confinement as a way to manage difficult or dangerous prisoners. Many of the prisoners subjected to isolation, which can extend for years, have serious mental illness, and the conditions of solitary confinement can exacerbate their symptoms or provoke recurrence. Prison rules for isolated prisoners, however, greatly restrict the nature and quantity of mental health services that they can receive. In this article, we describe the use of isolation (called segregation by prison officials) to confine prisoners with serious mental illness, the psychological consequences of such confinement, and the response of U.S. courts and human rights experts. We then address the challenges and human rights responsibilities of physicians confronting this prison practice. We conclude by urging professional organizations to adopt formal positions against the prolonged isolation of prisoners with serious mental illness.",
"title": ""
},
{
"docid": "3832812ee527c811a504c10619c59ee3",
"text": "The growing need of the driving public for accurate traffic information has spurred the deployment of large scale dedicated monitoring infrastructure systems, which mainly consist in the use of inductive loop detectors and video cameras. On-board electronic devices have been proposed as an alternative traffic sensing infrastructure, as they usually provide a cost-effective way to collect traffic data, leveraging existing communication infrastructure such as the cellular phone network. A traffic monitoring system based on GPS-enabled smartphones exploits the extensive coverage provided by the cellular network, the high accuracy in position and velocity measurements provided by GPS devices, and the existing infrastructure of the communication network. This article presents a field experiment nicknamed Mobile Century, which was conceived as a proof of concept of such a system. Mobile Century included 100 vehicles carrying a GPS-enabled Nokia N95 phone driving loops on a 10-mile stretch of I-880 near Union City, California, for 8 hours. Data were collected using virtual trip lines, which are geographical markers stored in the handset that probabilistically trigger position and speed updates when the handset crosses them. The proposed prototype system provided sufficient data for traffic monitoring purposes while managing the privacy of participants. The data obtained in the experiment were processed in real-time and successfully broadcast on the internet, demonstrating the feasibility of the proposed system for real-time traffic monitoring. Results suggest that a 2-3% penetration of cell phones in the driver population is enough to provide accurate measurements of the velocity of the traffic flow.",
"title": ""
},
{
"docid": "a7e8c3a64f6ba977e142de9b3dae7e57",
"text": "Craniofacial superimposition is a process that aims to identify a person by overlaying a photograph and a model of the skull. This process is usually carried out manually by forensic anthropologists; thus being very time consuming and presenting several difficulties in finding a good fit between the 3D model of the skull and the 2D photo of the face. In this paper we present a fast and automatic procedure to tackle the superimposition problem. The proposed method is based on real-coded genetic algorithms. Synthetic data are used to validate the method. Results on a real case from our Physical Anthropology lab of the University of Granada are also presented.",
"title": ""
},
{
"docid": "4175a43d90c597a9c875a8bfafe05977",
"text": "Exploitable software vulnerabilities pose severe threats to its information security and privacy. Although a great amount of efforts have been dedicated to improving software security, research on quantifying software exploitability is still in its infancy. In this work, we propose ExploitMeter, a fuzzing-based framework of quantifying software exploitability that facilitates decision-making for software assurance and cyber insurance. Designed to be dynamic, efficient and rigorous, ExploitMeter integrates machine learning-based prediction and dynamic fuzzing tests in a Bayesian manner. Using 100 Linux applications, we conduct extensive experiments to evaluate the performance of ExploitMeter in a dynamic environment.",
"title": ""
},
{
"docid": "a75d3395a1d4859b465ccbed8647fbfe",
"text": "PURPOSE\nThe influence of a core-strengthening program on low back pain (LBP) occurrence and hip strength differences were studied in NCAA Division I collegiate athletes.\n\n\nMETHODS\nIn 1998, 1999, and 2000, hip strength was measured during preparticipation physical examinations and occurrence of LBP was monitored throughout the year. Following the 1999-2000 preparticipation physicals, all athletes began participation in a structured core-strengthening program, which emphasized abdominal, paraspinal, and hip extensor strengthening. Incidence of LBP and the relationship with hip muscle imbalance were compared between consecutive academic years.\n\n\nRESULTS\nAfter incorporation of core strengthening, there was no statistically significant change in LBP occurrence. Side-to-side extensor strength between athletes participating in both the 1998-1999 and 1999-2000 physicals were no different. After core strengthening, the right hip extensor was, on average, stronger than that of the left hip extensor (P = 0.0001). More specific gender differences were noted after core strengthening. Using logistic regression, female athletes with weaker left hip abductors had a more significant probability of requiring treatment for LBP (P = 0.009)\n\n\nCONCLUSION\nThe impact of core strengthening on collegiate athletes has not been previously examined. These results indicated no significant advantage of core strengthening in reducing LBP occurrence, though this may be more a reflection of the small numbers of subjects who actually required treatment. The core program, however, seems to have had a role in modifying hip extensor strength balance. The association between hip strength and future LBP occurrence, observed only in females, may indicate the need for more gender-specific core programs. The need for a larger scale study to examine the impact of core strengthening in collegiate athletes is demonstrated.",
"title": ""
},
{
"docid": "2113655d3467fbdbf7769e36952d2a6f",
"text": "The goal of privacy metrics is to measure the degree of privacy enjoyed by users in a system and the amount of protection offered by privacy-enhancing technologies. In this way, privacy metrics contribute to improving user privacy in the digital world. The diversity and complexity of privacy metrics in the literature make an informed choice of metrics challenging. As a result, instead of using existing metrics, new metrics are proposed frequently, and privacy studies are often incomparable. In this survey, we alleviate these problems by structuring the landscape of privacy metrics. To this end, we explain and discuss a selection of over 80 privacy metrics and introduce categorizations based on the aspect of privacy they measure, their required inputs, and the type of data that needs protection. In addition, we present a method on how to choose privacy metrics based on nine questions that help identify the right privacy metrics for a given scenario, and highlight topics where additional work on privacy metrics is needed. Our survey spans multiple privacy domains and can be understood as a general framework for privacy measurement.",
"title": ""
},
{
"docid": "7ae332505306f94f8f2b4e3903188126",
"text": "Clustering Web services would greatly boost the ability of Web service search engine to retrieve relevant services. The performance of traditional Web service description language (WSDL)-based Web service clustering is not satisfied, due to the singleness of data source. Recently, Web service search engines such as Seekda! allow users to manually annotate Web services using tags, which describe functions of Web services or provide additional contextual and semantical information. In this paper, we cluster Web services by utilizing both WSDL documents and tags. To handle the clustering performance limitation caused by uneven tag distribution and noisy tags, we propose a hybrid Web service tag recommendation strategy, named WSTRec, which employs tag co-occurrence, tag mining, and semantic relevance measurement for tag recommendation. Extensive experiments are conducted based on our real-world dataset, which consists of 15,968 Web services. The experimental results demonstrate the effectiveness of our proposed service clustering and tag recommendation strategies. Specifically, compared with traditional WSDL-based Web service clustering approaches, the proposed approach produces gains in both precision and recall for up to 14 % in most cases.",
"title": ""
}
] |
scidocsrr
|
640cf5fecf7f28e08f56e1bec62dd61c
|
MgNet: A Unified Framework of Multigrid and Convolutional Neural Network
|
[
{
"docid": "e459bd355ea9a009e0d69c11e96d1173",
"text": "Based on a natural connection between ResNet and transport equation or its characteristic equation, we propose a continuous flow model for both ResNet and plain net. Through this continuous model, a ResNet can be explicitly constructed as a refinement of a plain net. The flow model provides an alternative perspective to understand phenomena in deep neural networks, such as why it is necessary and sufficient to use 2-layer blocks in ResNets, why deeper is better, and why ResNets are even deeper, and so on. It also opens a gate to bring in more tools from the huge area of differential equations.",
"title": ""
},
{
"docid": "9e11005f60aa3f53481ac3543a18f32f",
"text": "Deep residual networks (ResNets) have significantly pushed forward the state-ofthe-art on image classification, increasing in performance as networks grow both deeper and wider. However, memory consumption becomes a bottleneck, as one needs to store the activations in order to calculate gradients using backpropagation. We present the Reversible Residual Network (RevNet), a variant of ResNets where each layer’s activations can be reconstructed exactly from the next layer’s. Therefore, the activations for most layers need not be stored in memory during backpropagation. We demonstrate the effectiveness of RevNets on CIFAR-10, CIFAR-100, and ImageNet, establishing nearly identical classification accuracy to equally-sized ResNets, even though the activation storage requirements are independent of depth.",
"title": ""
},
{
"docid": "1d2485f8a4e2a5a9f983bfee3e036b92",
"text": "Partial differential equations (PDEs) are commonly derived based on empirical observations. However, recent advances of technology enable us to collect and store massive amount of data, which offers new opportunities for data-driven discovery of PDEs. In this paper, we propose a new deep neural network, called PDE-Net 2.0, to discover (time-dependent) PDEs from observed dynamic data with minor prior knowledge on the underlying mechanism that drives the dynamics. The design of PDE-Net 2.0 is based on our earlier work [1] where the original version of PDE-Net was proposed. PDE-Net 2.0 is a combination of numerical approximation of differential operators by convolutions and a symbolic multi-layer neural network for model recovery. Comparing with existing approaches, PDE-Net 2.0 has the most flexibility and expressive power by learning both differential operators and the nonlinear response function of the underlying PDE model. Numerical experiments show that the PDE-Net 2.0 has the potential to uncover the hidden PDE of the observed dynamics, and predict the dynamical behavior for a relatively long time, even in a noisy environment.",
"title": ""
}
] |
[
{
"docid": "1635b235c59cc57682735202c0bb2e0d",
"text": "The introduction of structural imaging of the brain by computed tomography (CT) scans and magnetic resonance imaging (MRI) has further refined classification of head injury for prognostic, diagnosis, and treatment purposes. We describe a new classification scheme to be used both as a research and a clinical tool in association with other predictors of neurologic status.",
"title": ""
},
{
"docid": "db6a91e0216440a4573aee6c78c78cbf",
"text": "ObjectiveHeart rate monitoring using wrist type Photoplethysmographic (PPG) signals is getting popularity because of construction simplicity and low cost of wearable devices. The task becomes very difficult due to the presence of various motion artifacts. The objective is to develop algorithms to reduce the effect of motion artifacts and thus obtain accurate heart rate estimation. MethodsProposed heart rate estimation scheme utilizes both time and frequency domain analyses. Unlike conventional single stage adaptive filter, multi-stage cascaded adaptive filtering is introduced by using three channel accelerometer data to reduce the effect of motion artifacts. Both recursive least squares (RLS) and least mean squares (LMS) adaptive filters are tested. Moreover, singular spectrum analysis (SSA) is employed to obtain improved spectral peak tracking. The outputs from the filter block and SSA operation are logically combined and used for spectral domain heart rate estimation. Finally, a tracking algorithm is incorporated considering neighbouring estimates. ResultsThe proposed method provides an average absolute error of 1.16 beat per minute (BPM) with a standard deviation of 1.74 BPM while tested on publicly available database consisting of recordings from 12 subjects during physical activities. ConclusionIt is found that the proposed method provides consistently better heart rate estimation performance in comparison to that recently reported by TROIKA, JOSS and SPECTRAP methods. SignificanceThe proposed method offers very low estimation error and a smooth heart rate tracking with simple algorithmic approach and thus feasible for implementing in wearable devices to monitor heart rate for fitness and clinical purpose.",
"title": ""
},
{
"docid": "6bcc65065f9e1f52bbe0276b4a5d8a45",
"text": "Urban mobility impacts urban life to a great extent. To enhance urban mobility, much research was invested in traveling time prediction: given an origin and destination, provide a passenger with an accurate estimation of how long a journey lasts. In this work, we investigate a novel combination of methods from Queueing Theory and Machine Learning in the prediction process. We propose a prediction engine that, given a scheduled bus journey (route) and a ‘source/destination’ pair, provides an estimate for the traveling time, while considering both historical data and real-time streams of information that are transmitted by buses. We propose a model that uses natural segmentation of the data according to bus stops and a set of predictors, some use learning while others are learning-free, to compute traveling time. Our empirical evaluation, using bus data that comes from the bus network in the city of Dublin, demonstrates that the snapshot principle, taken from Queueing Theory works well yet suffers from outliers. To overcome the outliers problem, we use machine learning techniques as a regulator that assists in identifying outliers and propose prediction based on historical data.",
"title": ""
},
{
"docid": "ea6392b6a49ed40cb5e3779e0d1f3ea2",
"text": "We see the world in scenes, where visual objects occur in rich surroundings, often embedded in a typical context with other related objects. How does the human brain analyse and use these common associations? This article reviews the knowledge that is available, proposes specific mechanisms for the contextual facilitation of object recognition, and highlights important open questions. Although much has already been revealed about the cognitive and cortical mechanisms that subserve recognition of individual objects, surprisingly little is known about the neural underpinnings of contextual analysis and scene perception. Building on previous findings, we now have the means to address the question of how the brain integrates individual elements to construct the visual experience.",
"title": ""
},
{
"docid": "32acba3e072e0113759278c57ee2aee2",
"text": "Software product lines (SPL) relying on UML technology have been a breakthrough in software reuse in the IT domain. In the industrial automation domain, SPL are not yet established in industrial practice. One reason for this is that conventional function block programming techniques do not adequately support SPL architecture definition and product configuration, while UML tools are not industrially accepted for control software development. In this paper, the use of object oriented (OO) extensions of IEC 61131–3 are used to bridge this gap. The SPL architecture and product specifications are expressed as UML class diagrams, which serve as straightforward specifications for configuring the IEC 61131–3 control application with OO extensions. A product configurator tool has been developed using PLCopen XML technology to support the generation of an executable IEC 61131–3 application according to chosen product options. The approach is demonstrated using a mobile elevating working platform as a case study.",
"title": ""
},
{
"docid": "c7c63f08639660f935744309350ab1e0",
"text": "A composite of graphene oxide supported by needle-like MnO(2) nanocrystals (GO-MnO(2) nanocomposites) has been fabricated through a simple soft chemical route in a water-isopropyl alcohol system. The formation mechanism of these intriguing nanocomposites investigated by transmission electron microscopy and Raman and ultraviolet-visible absorption spectroscopy is proposed as intercalation and adsorption of manganese ions onto the GO sheets, followed by the nucleation and growth of the crystal species in a double solvent system via dissolution-crystallization and oriented attachment mechanisms, which in turn results in the exfoliation of GO sheets. Interestingly, it was found that the electrochemical performance of as-prepared nanocomposites could be enhanced by the chemical interaction between GO and MnO(2). This method provides a facile and straightforward approach to deposit MnO(2) nanoparticles onto the graphene oxide sheets (single layer of graphite oxide) and may be readily extended to the preparation of other classes of hybrids based on GO sheets for technological applications.",
"title": ""
},
{
"docid": "3fa5544e35e021dcf64f882d79cf25fd",
"text": "This article reviews methodological issues that arise in the application of exploratory factor analysis (EFA) to scale revision and refinement. The authors begin by discussing how the appropriate use of EFA in scale revision is influenced by both the hierarchical nature of psychological constructs and the motivations underlying the revision. Then they specifically address (a) important issues that arise prior to data collection (e.g., selecting an appropriate sample), (b) technical aspects of factor analysis (e.g., determining the number of factors to retain), and (c) procedures used to evaluate the outcome of the scale revision (e.g., determining whether the new measure functions equivalently for different populations).",
"title": ""
},
{
"docid": "a71c53aed6a6805a5ebf0f69377411c0",
"text": "We here illustrate a new indoor navigation system. It is an outcome of creativity, which merges an imaginative scenario and new technologies. The system intends to guide a person in unknown building by relying on technologies which do not depend on infrastructures. The system includes two key components, namely positioning and path planning. Positioning is based on geomagnetic fields, and it overcomes the several limits of WIFI and Bluetooth, etc. Path planning is based on a new and optimized Ant Colony algorithm, called Ant Colony Optimization (ACO), which offers better performances than the classic A* algorithms. The paper illustrates the logic and the architecture of the system, and also presents experimental results.",
"title": ""
},
{
"docid": "1203822bf82dcd890e7a7a60fb282ce5",
"text": "Individuals with psychosocial problems such as social phobia or feelings of loneliness might be vulnerable to excessive use of cyber-technological devices, such as smartphones. We aimed to determine the relationship of smartphone addiction with social phobia and loneliness in a sample of university students in Istanbul, Turkey. Three hundred and sixty-seven students who owned smartphones were given the Smartphone Addiction Scale (SAS), UCLA Loneliness Scale (UCLA-LS), and Brief Social Phobia Scale (BSPS). A significant difference was found in the mean SAS scores (p < .001) between users who declared that their main purpose for smartphone use was to access social networking sites. The BSPS scores showed positive correlations with all six subscales and with the total SAS scores. The total UCLA-LS scores were positively correlated with daily life disturbance, positive anticipation, cyber-oriented relationship, and total scores on the SAS. In regression analyses, total BSPS scores were significant predictors for SAS total scores (β = 0.313, t = 5.992, p < .001). In addition, BSPS scores were significant predictors for all six SAS subscales, whereas UCLA-LS scores were significant predictors for only cyber-oriented relationship subscale scores on the SAS (β = 0.130, t = 2.416, p < .05). The results of this study indicate that social phobia was associated with the risk for smartphone addiction in young people. Younger individuals who primarily use their smartphones to access social networking sites also have an excessive pattern of smartphone use. ARTICLE HISTORY Received 12 January 2016 Accepted 19 February 2016",
"title": ""
},
{
"docid": "0bdb1d537011582c599a68f70881b274",
"text": "This article examines the acquisition of vocational skills through apprenticeship-type situated learning. Findings from a studies of skilled workers revealed that learning processes that were consonant with the apprenticeship model of learning were highly valued as a means of acquiring and maintaining vocational skills. Supported by current research and theorising, this article, describes some conditions by which situated learning through apprenticeship can be utilised to develop vocational skills. These conditions include the nature of the activities learners engage in, the agency of the learning environment and mentoring role of experts. Conditions which may inhibit the effectiveness of an apprenticeship approach to learning are also addressed. The article concludes by suggesting that situated approaches to learning, such as the apprenticeship model may address problems of access to effective vocational skill development within the workforce.",
"title": ""
},
{
"docid": "7b8fc04274ac8c01fd1619185ebe42c9",
"text": "There are a few types of step-climbing wheelchairs in the world, but most of them are large and heavy because they are power-assisted. Therefore, they require large space to maneuver, which is not always feasible with existing house architectures. This study proposes a novel step-climbing wheelchair based on lever propulsion control using human upper limbs. The developed step-climbing wheelchair device consists of manual wheels with casters for moving around and a rotary-legs mechanism that is capable of climbing steps. The wheelchair also has a passive mechanism for posture transition to shift the center of gravity of the person between the desired positions for planar locomotion and step-climbing. The proposed design consists of passive parts, and this leads the wheelchair being compact and lightweight. In this paper, we present the design of this step-climbing wheelchair and some preliminary experiments to test its usability.",
"title": ""
},
{
"docid": "50dd728b4157aefb7df35366f5822d0d",
"text": "This paper describes iDriver, an iPhone software to remote control “Spirit of Berlin”. “Spirit of Berlin” is a completely autonomous car developed by the Free University of Berlin which is capable of unmanned driving in urban areas. iDriver is an iPhone application sending control packets to the car in order to remote control its steering wheel, gas and brake pedal, gear shift and turn signals. Additionally, a video stream from two top-mounted cameras is broadcasted back to the iPhone.",
"title": ""
},
{
"docid": "67d704317471c71842a1dfe74ddd324a",
"text": "Agile software development methods have caught the attention of software engineers and researchers worldwide. Scientific research is yet scarce. This paper reports results from a study, which aims to organize, analyze and make sense out of the dispersed field of agile software development methods. The comparative analysis is performed using the method's life-cycle coverage, project management support, type of practical guidance, fitness-for-use and empirical evidence as the analytical lenses. The results show that agile software development methods, without rationalization, cover certain/different phases of the software development life-cycle and most of the them do not offer adequate support for project management. Yet, many methods still attempt to strive for universal solutions (as opposed to situation appropriate) and the empirical evidence is still very limited Based on the results, new directions are suggested In principal it is suggested to place emphasis on methodological quality -- not method quantity.",
"title": ""
},
{
"docid": "2afeb302ce217ead9d2d66d02460f9ff",
"text": "The development of IoT technologies and the massive admiration and acceptance of social media tools and applications, new doors of opportunity have been opened for using data analytics in gaining meaningful insights from unstructured information. The application of opinion mining and sentiment analysis (OMSA) in the era of big data have been used a useful way in categorizing the opinion into different sentiment and in general evaluating the mood of the public. Moreover, different techniques of OMSA have been developed over the years in different data sets and applied to various experimental settings. In this regard, this paper presents a comprehensive systematic literature review, aims to discuss both technical aspect of OMSA (techniques and types) and non-technical aspect in the form of application areas are discussed. Furthermore, this paper also highlighted both technical aspects of OMSA in the form of challenges in the development of its technique and non-technical challenges mainly based on its application. These challenges are presented as a future direction for research.",
"title": ""
},
{
"docid": "9cf9145a802c2093f7c6f5986aabb352",
"text": "Although researchers have long studied using statistical modeling techniques to detect anomaly intrusion and profile user behavior, the feasibility of applying multinomial logistic regression modeling to predict multi-attack types has not been addressed, and the risk factors associated with individual major attacks remain unclear. To address the gaps, this study used the KDD-cup 1999 data and bootstrap simulation method to fit 3000 multinomial logistic regression models with the most frequent attack types (probe, DoS, U2R, and R2L) as an unordered independent variable, and identified 13 risk factors that are statistically significantly associated with these attacks. These risk factors were then used to construct a final multinomial model that had an ROC area of 0.99 for detecting abnormal events. Compared with the top KDD-cup 1999 winning results that were based on a rule-based decision tree algorithm, the multinomial logistic model-based classification results had similar sensitivity values in detecting normal and a significantly lower overall misclassification rate (18.9% vs. 35.7%). The study emphasizes that the multinomial logistic regression modeling technique with the 13 risk factors provides a robust approach to detect anomaly intrusion.",
"title": ""
},
{
"docid": "7e8a161ba96ef2f36818479023ad0551",
"text": "Computational thinking (CT) is being located at the focus of educational innovation, as a set of problemsolving skills that must be acquired by the new generations of students to thrive in a digital world full of objects driven by software. However, there is still no consensus on a CT definition or how to measure it. In response, we attempt to address both issues from a psychometric approach. On the one hand, a Computational Thinking Test (CTt) is administered on a sample of 1,251 Spanish students from 5th to 10th grade, so its descriptive statistics and reliability are reported in this paper. On the second hand, the criterion validity of the CTt is studied with respect to other standardized psychological tests: the Primary Mental Abilities (PMA) battery, and the RP30 problem-solving test. Thus, it is intended to provide a new instrument for CT measurement and additionally give evidence of the nature of CT through its associations with key related psychological constructs. Results show statistically significant correlations at least moderately intense between CT and: spatial ability (r 1⁄4 0.44), reasoning ability (r 1⁄4 0.44), and problemsolving ability (r 1⁄4 0.67). These results are consistent with recent theoretical proposals linking CT to some components of the Cattel-Horn-Carroll (CHC) model of intelligence, and corroborate the conceptualization of CT as a problem-solving ability. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f70b6d0a0b315a1ca87ccf5184c43da4",
"text": "Transmitting secret information through internet requires more security because of interception and improper manipulation by eavesdropper. One of the most desirable explications of this is “Steganography”. This paper proposes a technique of steganography using Advanced Encryption Standard (AES) with secured hash function in the blue channel of image. The embedding system is done by dynamic bit adjusting system in blue channel of RGB images. It embeds message bits to deeper into the image intensity which is very difficult for any type improper manipulation of hackers. Before embedding text is encrypted using AES with a hash function. For extraction the cipher text bit is found from image intensity using the bit adjusting extraction algorithm and then it is decrypted by AES with same hash function to get the real secret text. The proposed approach is better in Pick Signal to Noise Ratio (PSNR) value and less in histogram error between stego images and cover images than some existing systems. KeywordsAES-128, SHA-512, Cover Image, Stego image, Bit Adjusting, Blue Channel",
"title": ""
},
{
"docid": "0ce4a0dfe5ea87fb87f5d39b13196e94",
"text": "Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector QuantisedVariational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of “posterior collapse” -— where the latents are ignored when they are paired with a powerful autoregressive decoder -— typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.",
"title": ""
},
{
"docid": "4229e2db880628ea2f0922a94c30efe0",
"text": "Since the end of the 20th century, it has become clear that web browsers will play a crucial role in accessing Internet resources such as the World Wide Web. They evolved into complex software suites that are able to process a multitude of data formats. Just-In-Time (JIT) compilation was incorporated to speed up the execution of script code, but is also used besides web browsers for performance reasons. Attackers happily welcomed JIT in their own way, and until today, JIT compilers are an important target of various attacks. This includes for example JIT-Spray, JIT-based code-reuse attacks and JIT-specific flaws to circumvent mitigation techniques in order to simplify the exploitation of memory-corruption vulnerabilities. Furthermore, JIT compilers are complex and provide a large attack surface, which is visible in the steady stream of critical bugs appearing in them. In this paper, we survey and systematize the jungle of JIT compilers of major (client-side) programs, and provide a categorization of offensive techniques for abusing JIT compilation. Thereby, we present techniques used in academic as well as in non-academic works which try to break various defenses against memory-corruption vulnerabilities. Additionally, we discuss what mitigations arouse to harden JIT compilers to impede exploitation by skilled attackers wanting to abuse Just-In-Time compilers.",
"title": ""
}
] |
scidocsrr
|
330a99a69c0bd378d87d788665503dac
|
PropFuzz — An IT-security fuzzing framework for proprietary ICS protocols
|
[
{
"docid": "049c9e3abf58bfd504fa0645bb4d1fdc",
"text": "The following section describes the tools we built to test the utilities. These tools include the fuzz (random character) generator, ptyjig (to test interactive utilities), and scripts to automate the testing process. Next, we will describe the tests we performed, giving the types of input we presented to the utilities. Results from the tests will follow along with an analysis of the results, including identification and classification of the program bugs that caused the crashes. The final section presents concluding remarks, including suggestions for avoiding the types of problems detected by our study and some commentary on the bugs we found. We include an Appendix with the user manual pages for fuzz and ptyjig.",
"title": ""
},
{
"docid": "0ecb00d99dc497a0e902cda198219dff",
"text": "Security vulnerabilities typically arise from bugs in input validation and in the application logic. Fuzz-testing is a popular security evaluation technique in which hostile inputs are crafted and passed to the target software in order to reveal bugs. However, in the case of SCADA systems, the use of proprietary protocols makes it difficult to apply existing fuzz-testing techniques as they work best when the protocol semantics are known, targets can be instrumented and large network traces are available. This paper describes a fuzz-testing solution involving LZFuzz, an inline tool that provides a domain expert with the ability to effectively fuzz SCADA devices.",
"title": ""
}
] |
[
{
"docid": "9c4c13c38e2b96aa3141b1300ca356c6",
"text": "Awareness plays a major role in human cognition and adaptive behaviour, though mechanisms involved remain unknown. Awareness is not an objectively established fact, therefore, despite extensive research, scientists have not been able to fully interpret its contribution in multisensory integration and precise neural firing, hence, questions remain: (1) How the biological neuron integrates the incoming multisensory signals with respect to different situations? (2) How are the roles of incoming multisensory signals defined (selective amplification or attenuation) that help neuron(s) to originate a precise neural firing complying with the anticipated behavioural-constraint of the environment? (3) How are the external environment and anticipated behaviour integrated? Recently, scientists have exploited deep learning architectures to integrate multimodal cues and capture context-dependent meanings. Yet, these methods suffer from imprecise behavioural representation and a limited understanding of neural circuitry or underlying information processing mechanisms with respect to the outside world. In this research, we introduce a new theory on the role of awareness and universal context that can help answering the aforementioned crucial neuroscience questions. Specifically, we propose a class of spiking conscious neuron in which the output depends on three functionally distinctive integrated input variables: receptive field (RF), local contextual field (LCF), and universal contextual field (UCF) a newly proposed dimension. The RF defines the incoming ambiguous sensory signal, LCF defines the modulatory sensory signal coming from other parts of the brain, and UCF defines the awareness. It is believed that the conscious neuron inherently contains enough knowledge about the situation in which the problem is to be solved based on past learning and reasoning and it defines the precise role of incoming multisensory signals (amplification or attenuation) to originate a precise neural firing (exhibiting switch-like behaviour). It is shown, when implemented within an SCNN, the conscious neuron helps modelling a more precise human behaviour e.g., when exploited to model human audiovisual speech processing, the SCNN performed comparably to deep long-short-term memory (LSTM) network. We believe that the proposed theory could be applied to address a range of real-world problems including elusive neural disruptions, explainable artificial intelligence, human-like computing, low-power neuromorphic chips etc.",
"title": ""
},
{
"docid": "149d231199df20bc61f28ddcafdf79c4",
"text": "Hypersexual behavior is often misunderstood and minimized, and we continue to lack an understanding of what underlies this behavior. Without an understanding of the function of hypersexual behavior, we cannot ascertain the most effective treatment. This study was designed to examine the underlying function of such behavior by exploring whether insecure attachment in men relates to the development of hypersexual behavior. A total of 45 men who were assessed as having Out-of-Control Sexual Behavior (OCSB), utilizing the recently proposed Hypersexual Disorder (HD) diagnosis were compared to 32 men who did not present with OCSB. Participants were directed to an online survey where they completed assessments for hypersexual behavior (The HBI) and attachment style (The ECR-S). Multivariate analysis indicated that high ECR-S scores predicted high HBI scores, high HBI scores tended to show high levels of attachment avoidant behavior and that high ECR-S scores were predictive of the clinical determination of HD. High scores on attachment avoidance, rather than attachment anxiety, were most predictive of the clinical determination of HD. Overall, the avoidant behavior score was a better predictor of OCSB than were attachment anxiety scores. Hypersexual behavior may be a particular manifestation of avoidant attachment and it is this underlying issue that must be addressed to effectively treat HD. Degree Type Dissertation Degree Name Doctor of Social Work (DSW) First Advisor Phyllis Solomon Second Advisor Andrea Doyle Third Advisor John Giugliano",
"title": ""
},
{
"docid": "66363a46aa21f982d5934ff7a88efa6f",
"text": "Ensuring that organizational IT is in alignment with and provides support for an organization’s business strategy is critical to business success. Despite this, business strategy and strategic alignment issues are all but ignored in the requirements engineering research literature. We present B-SCP, a requirements engineering framework for organizational IT that directly addresses an organization’s business strategy and the alignment of IT requirements with that strategy. B-SCP integrates the three themes of strategy, context, and process using a requirements engineering notation for each theme. We demonstrate a means of cross-referencing and integrating the notations with each other, enabling explicit traceability between business processes and business strategy. In addition, we show a means of defining requirements problem scope as a Jackson problem diagram by applying a business modeling framework. Our approach is illustrated via application to an exemplar. The case example demonstrates the feasibility of B-SCP, and we present a comparison with other approaches. q 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d3984f8562288fabf0627b15af4dd64a",
"text": "Volumetric representation has been widely used for 3D deep learning in shape analysis due to its generalization ability and regular data format. However, for fine-grained tasks like part segmentation, volumetric data has not been widely adopted compared to other representations. Aiming at delivering an effective volumetric method for 3D shape part segmentation, this paper proposes a novel volumetric convolutional neural network. Our method can extract discriminative features encoding detailed information from voxelized 3D data under limited resolution. To this purpose, a spatial dense extraction (SDE) module is designed to preserve spatial resolution during feature extraction procedure, alleviating the loss of details caused by sub-sampling operations such as max pooling. An attention feature aggregation (AFA) module is also introduced to adaptively select informative features from different abstraction levels, leading to segmentation with both semantic consistency and high accuracy of details. Experimental results demonstrate that promising results can be achieved by using volumetric data, with part segmentation accuracy comparable or superior to state-of-the-art non-volumetric methods.",
"title": ""
},
{
"docid": "33e03ac5663f72166e17d76861fb69c7",
"text": "The critical-period hypothesis for second-language acquisition was tested on data from the 1990 U.S. Census using responses from 2.3 million immigrants with Spanish or Chinese language backgrounds. The analyses tested a key prediction of the hypothesis, namely, that the line regressing second-language attainment on age of immigration would be markedly different on either side of the critical-age point. Predictions tested were that there would be a difference in slope, a difference in the mean while controlling for slope, or both. The results showed large linear effects for level of education and for age of immigration, but a negligible amount of additional variance was accounted for when the parameters for difference in slope and difference in means were estimated. Thus, the pattern of decline in second-language acquisition failed to produce the discontinuity that is an essential hallmark of a critical period.",
"title": ""
},
{
"docid": "f9510400fbd2376748cf636674820b64",
"text": "In this paper, we propose four new variants of the backpropagation algorithm to improve the generalization ability for feedforward neural networks. The basic idea of these methods stems from the Group Lasso concept which deals with the variable selection problem at the group level. There are two main drawbacks when the Group Lasso penalty has been directly employed during network training. They are numerical oscillations and theoretical challenges in computing the gradients at the origin. To overcome these obstacles, smoothing functions have then been introduced by approximating the Group Lasso penalty. Numerical experiments for classification and regression problems demonstrate that the proposed algorithms perform better than the other three classical penalization methods, Weight Decay, Weight Elimination, and Approximate Smoother, on both generalization and pruning efficiency. In addition, detailed simulations based on a specific data set have been performed to compare with some other common pruning strategies, which verify the advantages of the proposed algorithm. The pruning abilities of the proposed strategy have been investigated in detail for a relatively large data set, MNIST, in terms of various smoothing approximation cases.",
"title": ""
},
{
"docid": "3785c5a330da1324834590c8ed92f743",
"text": "We address the problem of joint detection and segmentation of multiple object instances in an image, a key step towards scene understanding. Inspired by data-driven methods, we propose an exemplar-based approach to the task of instance segmentation, in which a set of reference image/shape masks is used to find multiple objects. We design a novel CRF framework that jointly models object appearance, shape deformation, and object occlusion. To tackle the challenging MAP inference problem, we derive an alternating procedure that interleaves object segmentation and shape/appearance adaptation. We evaluate our method on two datasets with instance labels and show promising results.",
"title": ""
},
{
"docid": "30672a5e329d9ed61a65b07f24731c91",
"text": "Combined star-delta windings in electrical machines result in a higher fundamental winding factor and cause a smaller spatial harmonic content. This leads to lower I2R losses in the stator and the rotor winding and thus to an increased efficiency. However, compared to an equivalent six-phase winding, additional spatial harmonics are generated due to the different magnetomotive force in the star and delta part of the winding. In this paper, a complete theory and analysis method for the analytical calculation of the efficiency of induction motors equipped with combined star-delta windings is developed. The method takes into account the additional harmonic content due to the different magnetomotive force in the star and delta part. To check the analysis' validity, an experimental test is reported both on a cage induction motor equipped with a combined star-delta winding in the stator and on a reference motor with the same core but with a classical three-phase winding.",
"title": ""
},
{
"docid": "ec2eb33d3bf01df406409a31cc0a0e1f",
"text": "Brain graphs provide a relatively simple and increasingly popular way of modeling the human brain connectome, using graph theory to abstractly define a nervous system as a set of nodes (denoting anatomical regions or recording electrodes) and interconnecting edges (denoting structural or functional connections). Topological and geometrical properties of these graphs can be measured and compared to random graphs and to graphs derived from other neuroscience data or other (nonneural) complex systems. Both structural and functional human brain graphs have consistently demonstrated key topological properties such as small-worldness, modularity, and heterogeneous degree distributions. Brain graphs are also physically embedded so as to nearly minimize wiring cost, a key geometric property. Here we offer a conceptual review and methodological guide to graphical analysis of human neuroimaging data, with an emphasis on some of the key assumptions, issues, and trade-offs facing the investigator.",
"title": ""
},
{
"docid": "7ec225f2fd4993feddcf996b576d140f",
"text": "Conventional network representation learning (NRL) models learn low-dimensional vertex representations by simply regarding each edge as a binary or continuous value. However, there exists rich semantic information on edges and the interactions between vertices usually preserve distinct meanings, which are largely neglected by most existing NRL models. In this work, we present a novel Translation-based NRL model, TransNet, by regarding the interactions between vertices as a translation operation. Moreover, we formalize the task of Social Relation Extraction (SRE) to evaluate the capability of NRL methods on modeling the relations between vertices. Experimental results on SRE demonstrate that TransNet significantly outperforms other baseline methods by 10% to 20% on hits@1. The source code and datasets can be obtained from https: //github.com/thunlp/TransNet.",
"title": ""
},
{
"docid": "1adacc7dc452e27024756c36eecb8cae",
"text": "The techniques of using neural networks to learn distributed word representations (i.e., word embeddings) have been used to solve a variety of natural language processing tasks. The recently proposed methods, such as CBOW and Skip-gram, have demonstrated their effectiveness in learning word embeddings based on context information such that the obtained word embeddings can capture both semantic and syntactic relationships between words. However, it is quite challenging to produce high-quality word representations for rare or unknown words due to their insufficient context information. In this paper, we propose to leverage morphological knowledge to address this problem. Particularly, we introduce the morphological knowledge as both additional input representation and auxiliary supervision to the neural network framework. As a result, beyond word representations, the proposed neural network model will produce morpheme representations, which can be further employed to infer the representations of rare or unknown words based on their morphological structure. Experiments on an analogical reasoning task and several word similarity tasks have demonstrated the effectiveness of our method in producing high-quality words embeddings compared with the state-of-the-art methods.",
"title": ""
},
{
"docid": "dc83550afd690e371283428647ed806e",
"text": "Recently, convolutional neural networks have demonstrated excellent performance on various visual tasks, including the classification of common two-dimensional images. In this paper, deep convolutional neural networks are employed to classify hyperspectral images directly in spectral domain. More specifically, the architecture of the proposed classifier contains five layers with weights which are the input layer, the convolutional layer, the max pooling layer, the full connection layer, and the output layer. These five layers are implemented on each spectral signature to discriminate against others. Experimental results based on several hyperspectral image data sets demonstrate that the proposed method can achieve better classification performance than some traditional methods, such as support vector machines and the conventional deep learning-based methods.",
"title": ""
},
{
"docid": "17c65d64f360572a6009c5179457fd19",
"text": "This paper presents an unified convolutional neural network (CNN), named AUMPNet, to perform both Action Units (AUs) detection and intensity estimation on facial images with multiple poses. Although there are a variety of methods in the literature designed for facial expression analysis, only few of them can handle head pose variations. Therefore, it is essential to develop new models to work on non-frontal face images, for instance, those obtained from unconstrained environments. In order to cope with problems raised by pose variations, an unique CNN, based on region and multitask learning, is proposed for both AU detection and intensity estimation tasks. Also, the available head pose information was added to the multitask loss as a constraint to the network optimization, pushing the network towards learning better representations. As opposed to current approaches that require ad hoc models for every single AU in each task, the proposed network simultaneously learns AU occurrence and intensity levels for all AUs. The AUMPNet was evaluated on an extended version of the BP4D-Spontaneous database, which was synthesized into nine different head poses and made available to FG 2017 Facial Expression Recognition and Analysis Challenge (FERA 2017) participants. The achieved results surpass the FERA 2017 baseline, using the challenge metrics, for AU detection by 0.054 in F1-score and 0.182 in ICC(3, 1) for intensity estimation.",
"title": ""
},
{
"docid": "44b1c6e9c63313c16b40d191144a7f13",
"text": "This document focuses on Bidirectional Reflectance Distribution Functions (BRDFs) in the context of computer based image synthesis. An introduction to basic radiometry and BRDF theory is given. Several well known BRDF models are reviewed and compared based on their physical plausibility and the types of surfaces that they are suited for.",
"title": ""
},
{
"docid": "b638e384285bbb03bdc71f2eb2b27ff8",
"text": "In this paper, we present two win predictors for the popular online game Dota 2. The first predictor uses full post-match data and the second predictor uses only hero selection data. We will explore and build upon existing work on the topic as well as detail the specifics of both algorithms including data collection, exploratory analysis, feature selection, modeling, and results.",
"title": ""
},
{
"docid": "5dec9852efc32d0a9b93cd173573abf0",
"text": "Magnitudes and timings of kinematic variables have often been used to investigate technique. Where large inter-participant differences exist, as in basketball, analysis of intra-participant variability may provide an alternative indicator of good technique. The aim of the present study was to investigate the joint kinematics and coordination-variability between missed and successful (swishes) free throw attempts. Collegiate level basketball players performed 20 free throws, during which ball release parameters and player kinematics were recorded. For each participant, three misses and three swishes were randomly selected and analysed. Margins of error were calculated based on the optimal-minimum-speed principle. Differences in outcome were distinguished by ball release speeds statistically lower than the optimal speed (misses -0.12 +/- 0.10m s(-1); swishes -0.02 +/- 0.07m s(-1); P < 0.05). No differences in wrist linear velocity were detected, but as the elbow influences the wrist through velocity-dependent-torques, elbow-wrist angle-angle coordination-variability was quantified using vector-coding and found to increase in misses during the last 0.01 s before ball release (P < 0.05). As the margin of error on release parameters is small, the coordination-variability is small, but the increased coordination-variability just before ball release for misses is proposed to arise from players perceiving the technique to be inappropriate and trying to correct the shot. The synergy or coupling relationship between the elbow and wrist angles to generate the appropriate ball speed is proposed as the mechanism determining success of free-throw shots in experienced players.",
"title": ""
},
{
"docid": "ee6bcb714c118361a51db8f1f8f0e985",
"text": "BACKGROUND\nWe propose the use of serious games to screen for abnormal cognitive status in situations where it may be too costly or impractical to use standard cognitive assessments (eg, emergency departments). If validated, serious games in health care could enable broader availability of efficient and engaging cognitive screening.\n\n\nOBJECTIVE\nThe objective of this work is to demonstrate the feasibility of a game-based cognitive assessment delivered on tablet technology to a clinical sample and to conduct preliminary validation against standard mental status tools commonly used in elderly populations.\n\n\nMETHODS\nWe carried out a feasibility study in a hospital emergency department to evaluate the use of a serious game by elderly adults (N=146; age: mean 80.59, SD 6.00, range 70-94 years). We correlated game performance against a number of standard assessments, including the Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), and the Confusion Assessment Method (CAM).\n\n\nRESULTS\nAfter a series of modifications, the game could be used by a wide range of elderly patients in the emergency department demonstrating its feasibility for use with these users. Of 146 patients, 141 (96.6%) consented to participate and played our serious game. Refusals to play the game were typically due to concerns of family members rather than unwillingness of the patient to play the game. Performance on the serious game correlated significantly with the MoCA (r=-.339, P <.001) and MMSE (r=-.558, P <.001), and correlated (point-biserial correlation) with the CAM (r=.565, P <.001) and with other cognitive assessments.\n\n\nCONCLUSIONS\nThis research demonstrates the feasibility of using serious games in a clinical setting. Further research is required to demonstrate the validity and reliability of game-based assessments for clinical decision making.",
"title": ""
},
{
"docid": "8da9e8193d4fead65bd38d62a22998a1",
"text": "Cloud computing has been considered as a solution for solving enterprise application distribution and configuration challenges in the traditional software sales model. Migrating from traditional software to Cloud enables on-going revenue for software providers. However, in order to deliver hosted services to customers, SaaS companies have to either maintain their own hardware or rent it from infrastructure providers. This requirement means that SaaS providers will incur extra costs. In order to minimize the cost of resources, it is also important to satisfy a minimum service level to customers. Therefore, this paper proposes resource allocation algorithms for SaaS providers who want to minimize infrastructure cost and SLA violations. Our proposed algorithms are designed in a way to ensure that Saas providers are able to manage the dynamic change of customers, mapping customer requests to infrastructure level parameters and handling heterogeneity of Virtual Machines. We take into account the customers' Quality of Service parameters such as response time, and infrastructure level parameters such as service initiation time. This paper also presents an extensive evaluation study to analyze and demonstrate that our proposed algorithms minimize the SaaS provider's cost and the number of SLA violations in a dynamic resource sharing Cloud environment.",
"title": ""
},
{
"docid": "84f496674fa8c3436f06d4663de3da84",
"text": "The growth of E-Banking has led to an ease of access and 24-hour banking facility for one and all. However, this has led to a rise in e-banking fraud which is a growing problem affecting users around the world. As card is becoming the most prevailing mode of payment for online as well as regular purchase, fraud related with it is also increasing. The drastic upsurge of online banking fraud can be seen as an integrative misuse of social, cyber and physical resources [1]. Thus, the proposed system uses cryptography and steganography technology along with various data mining techniques in order to effectively secure the e-banking process and prevent online fraud.",
"title": ""
}
] |
scidocsrr
|
c059ed9f23dddabec985bcf744ae2e24
|
A statistical framework for fair predictive algorithms
|
[
{
"docid": "eceb513e5d67d66986597555cf16c814",
"text": "This study examines the statistical validation of a recently developed, fourth-generation (4G) risk–need assessment system (Correctional Offender Management Profiling for Alternative Sanctions; COMPAS) that incorporates a range of theoretically relevant criminogenic factors and key factors emerging from meta-analytic studies of recidivism. COMPAS’s automated scoring provides decision support for correctional agencies for placement decisions, offender management, and treatment planning. The article describes the basic features of COMPAS and then examines the predictive validity of the COMPAS risk scales by fitting Cox proportional hazards models to recidivism outcomes in a sample of presentence investigation and probation intake cases (N = 2,328). Results indicate that the predictive validities for the COMPAS recidivism risk model, as assessed by the area under the receiver operating characteristic curve (AUC), equal or exceed similar 4G instruments. The AUCs ranged from .66 to .80 for diverse offender subpopulations across three outcome criteria, with a majority of these exceeding .70.",
"title": ""
}
] |
[
{
"docid": "7d285ca842be3d85d218dd70f851194a",
"text": "CONTEXT\nThe Atkins diet books have sold more than 45 million copies over 40 years, and in the obesity epidemic this diet and accompanying Atkins food products are popular. The diet claims to be effective at producing weight loss despite ad-libitum consumption of fatty meat, butter, and other high-fat dairy products, restricting only the intake of carbohydrates to under 30 g a day. Low-carbohydrate diets have been regarded as fad diets, but recent research questions this view.\n\n\nSTARTING POINT\nA systematic review of low-carbohydrate diets found that the weight loss achieved is associated with the duration of the diet and restriction of energy intake, but not with restriction of carbohydrates. Two groups have reported longer-term randomised studies that compared instruction in the low-carbohydrate diet with a low-fat calorie-reduced diet in obese patients (N Engl J Med 2003; 348: 2082-90; Ann Intern Med 2004; 140: 778-85). Both trials showed better weight loss on the low-carbohydrate diet after 6 months, but no difference after 12 months. WHERE NEXT?: The apparent paradox that ad-libitum intake of high-fat foods produces weight loss might be due to severe restriction of carbohydrate depleting glycogen stores, leading to excretion of bound water, the ketogenic nature of the diet being appetite suppressing, the high protein-content being highly satiating and reducing spontaneous food intake, or limited food choices leading to decreased energy intake. Long-term studies are needed to measure changes in nutritional status and body composition during the low-carbohydrate diet, and to assess fasting and postprandial cardiovascular risk factors and adverse effects. Without that information, low-carbohydrate diets cannot be recommended.",
"title": ""
},
{
"docid": "cf4509b8d2b458f608a7e72165cdf22b",
"text": "Nowadays, blockchain is becoming a synonym for distributed ledger technology. However, blockchain is only one of the specializations in the field and is currently well-covered in existing literature, but mostly from a cryptographic point of view. Besides blockchain technology, a new paradigm is gaining momentum: directed acyclic graphs. The contribution presented in this paper is twofold. Firstly, the paper analyzes distributed ledger technology with an emphasis on the features relevant to distributed systems. Secondly, the paper analyses the usage of directed acyclic graph paradigm in the context of distributed ledgers, and compares it with the blockchain-based solutions. The two paradigms are compared using representative implementations: Bitcoin, Ethereum and Nano. We examine representative solutions in terms of the applied data structures for maintaining the ledger, consensus mechanisms, transaction confirmation confidence, ledger size, and scalability.",
"title": ""
},
{
"docid": "4eb937f806ca01268b5ed1348d0cc40c",
"text": "The paradigms of transformational planning, case-based planning, and plan debugging all involve a process known as plan adaptation | modifying or repairing an old plan so it solves a new problem. In this paper we provide a domain-independent algorithm for plan adaptation, demonstrate that it is sound, complete, and systematic, and compare it to other adaptation algorithms in the literature. Our approach is based on a view of planning as searching a graph of partial plans. Generative planning starts at the graph's root and moves from node to node using planre nement operators. In planning by adaptation, a library plan|an arbitrary node in the plan graph|is the starting point for the search, and the plan-adaptation algorithm can apply both the same re nement operators available to a generative planner and can also retract constraints and steps from the plan. Our algorithm's completeness ensures that the adaptation algorithm will eventually search the entire graph and its systematicity ensures that it will do so without redundantly searching any parts of the graph.",
"title": ""
},
{
"docid": "e920db4a67b32d3fd0da95eafe2ba402",
"text": "This paper describes the pitch tracking techniques using autocorrelation method and AMDF (Average Magnitude Difference Function) method involving the preprocessing and the extraction of pitch pattern. It also presents the implementation and the basic experiments and discussions.",
"title": ""
},
{
"docid": "073ea28d4922c2d9c1ef7945ce4aa9e2",
"text": "The three major solutions for increasing the nominal performance of a CPU are: multiplying the number of cores per socket, expanding the embedded cache memories and use multi-threading to reduce the impact of the deep memory hierarchy. Systems with tens or hundreds of hardware threads, all sharing a cache coherent UMA or NUMA memory space, are today the de-facto standard. While these solutions can easily provide benefits in a multi-program environment, they require recoding of applications to leverage the available parallelism. Threads must synchronize and exchange data, and the overall performance is heavily in influenced by the overhead added by these mechanisms, especially as developers try to exploit finer grain parallelism to be able to use all available resources.",
"title": ""
},
{
"docid": "4bac03c1e5c5cad93595dd38954a8a94",
"text": "This paper addresses the problem of path prediction for multiple interacting agents in a scene, which is a crucial step for many autonomous platforms such as self-driving cars and social robots. We present SoPhie; an interpretable framework based on Generative Adversarial Network (GAN), which leverages two sources of information, the path history of all the agents in a scene, and the scene context information, using images of the scene. To predict a future path for an agent, both physical and social information must be leveraged. Previous work has not been successful to jointly model physical and social interactions. Our approach blends a social attention mechanism with a physical attention that helps the model to learn where to look in a large scene and extract the most salient parts of the image relevant to the path. Whereas, the social attention component aggregates information across the different agent interactions and extracts the most important trajectory information from the surrounding neighbors. SoPhie also takes advantage of GAN to generates more realistic samples and to capture the uncertain nature of the future paths by modeling its distribution. All these mechanisms enable our approach to predict socially and physically plausible paths for the agents and to achieve state-of-the-art performance on several different trajectory forecasting benchmarks.",
"title": ""
},
{
"docid": "523983cad60a81e0e6694c8d90ab9c3d",
"text": "Cognition and comportment are subserved by interconnected neural networks that allow high-level computational architectures including parallel distributed processing. Cognitive problems are not resolved by a sequential and hierarchical progression toward predetermined goals but instead by a simultaneous and interactive consideration of multiple possibilities and constraints until a satisfactory fit is achieved. The resultant texture of mental activity is characterized by almost infinite richness and flexibility. According to this model, complex behavior is mapped at the level of multifocal neural systems rather than specific anatomical sites, giving rise to brain-behavior relationships that are both localized and distributed. Each network contains anatomically addressed channels for transferring information content and chemically addressed pathways for modulating behavioral tone. This approach provides a blueprint for reexploring the neurological foundations of attention, language, memory, and frontal lobe function.",
"title": ""
},
{
"docid": "933f8ba333e8cbef574b56348872b313",
"text": "Automatic image annotation has been an important research topic in facilitating large scale image management and retrieval. Existing methods focus on learning image-tag correlation or correlation between tags to improve annotation accuracy. However, most of these methods evaluate their performance using top-k retrieval performance, where k is fixed. Although such setting gives convenience for comparing different methods, it is not the natural way that humans annotate images. The number of annotated tags should depend on image contents. Inspired by the recent progress in machine translation and image captioning, we propose a novel Recurrent Image Annotator (RIA) model that forms image annotation task as a sequence generation problem so that RIA can natively predict the proper length of tags according to image contents. We evaluate the proposed model on various image annotation datasets. In addition to comparing our model with existing methods using the conventional top-k evaluation measures, we also provide our model as a high quality baseline for the arbitrary length image tagging task. Moreover, the results of our experiments show that the order of tags in training phase has a great impact on the final annotation performance.",
"title": ""
},
{
"docid": "0eb98d2e5d7e3c46e1ae830c73008fd4",
"text": "Twitter, the most famous micro-blogging service and online social network, collects millions of tweets every day. Due to the length limitation, users usually need to explore other ways to enrich the content of their tweets. Some studies have provided findings to suggest that users can benefit from added hyperlinks in tweets. In this paper, we focus on the hyperlinks in Twitter and propose a new application, called hyperlink recommendation in Twitter. We expect that the recommended hyperlinks can be used to enrich the information of user tweets. A three-way tensor is used to model the user-tweet-hyperlink collaborative relations. Two tensor-based clustering approaches, tensor decomposition-based clustering (TDC) and tensor approximation-based clustering (TAC) are developed to group the users, tweets and hyperlinks with similar interests, or similar contexts. Recommendation is then made based on the reconstructed tensor using cluster information. The evaluation results in terms of Mean Absolute Error (MAE) shows the advantages of both the TDC and TAC approaches over a baseline recommendation approach, i.e., memory-based collaborative filtering. Comparatively, the TAC approach achieves better performance than the TDC approach.",
"title": ""
},
{
"docid": "c5ca0bce645a6d460ca3e01e4150cce5",
"text": "The technological advancement and sophistication in cameras and gadgets prompt researchers to have focus on image analysis and text understanding. The deep learning techniques demonstrated well to assess the potential for classifying text from natural scene images as reported in recent years. There are variety of deep learning approaches that prospects the detection and recognition of text, effectively from images. In this work, we presented Arabic scene text recognition using Convolutional Neural Networks (ConvNets) as a deep learning classifier. As the scene text data is slanted and skewed, thus to deal with maximum variations, we employ five orientations with respect to single occurrence of a character. The training is formulated by keeping filter size 3 × 3 and 5 × 5 with stride value as 1 and 2. During text classification phase, we trained network with distinct learning rates. Our approach reported encouraging results on recognition of Arabic characters from segmented Arabic scene images.",
"title": ""
},
{
"docid": "c171254eae86ce30c475c4355ed8879f",
"text": "The rapid growth of connected things across the globe has been brought about by the deployment of the Internet of things (IoTs) at home, in organizations and industries. The innovation of smart things is envisioned through various protocols, but the most prevalent protocols are pub-sub protocols such as Message Queue Telemetry Transport (MQTT) and Advanced Message Queuing Protocol (AMQP). An emerging paradigm of communication architecture for IoTs support is Fog computing in which events are processed near to the place they occur for efficient and fast response time. One of the major concerns in the adoption of Fog computing based publishsubscribe protocols for the Internet of things is the lack of security mechanisms because the existing security protocols such as SSL/TSL have a large overhead of computations, storage and communications. To address these issues, we propose a secure, Fog computing based publish-subscribe lightweight protocol using Elliptic Curve Cryptography (ECC) for the Internet of Things. We present analytical proofs and results for resource efficient security, comparing to the existing protocols of traditional Internet.",
"title": ""
},
{
"docid": "b0f292ef7c926040342ff1e813dfc393",
"text": "This paper presents a comprehensive discussion of the Jaya algorithm, a novel approach for the optimization. There exist two broad categories of heuristic algorithms are: evolutionary algorithms and swarm intelligence. These algorithms' performance vastly depends on the parameters used need extensive tuning during the computational experiments to achieve the superior performance. The Jaya algorithm is a new optimization algorithm has been proposed recently is parameter less and therefore parameters tuning is not needed for it. The primary aim of this paper is to discuss the Jaya algorithm based on the rational aspects outlined as: (a) what is Jaya algorithm; (b) How it works and (c) why one should use it. The author believes that this discussion might be useful to explore the potential of the Jaya to the general audience working for the optimization.",
"title": ""
},
{
"docid": "e6bf74c3d0559c886697f06f202224c3",
"text": "n recent years there has been increasing interest in describing complicated information processing systems in terms of the knowledge they have, rather than by the details of their implementation. This requires a means of modeling the knowledge in a system. Several different approaches to knowledge modeling have been developed by researchers working in Artificial Intelligence (AI). Most of these approaches share the view that knowledge must be modeled with respect to a goal or task. In this article , we outline our modeling approach in terms of the notion of a task-structure, which recursively links a task to alternative methods and to their subtasks. Our emphasis is on the notion of modeling domain knowledge using tasks and methods as mediating concepts. We begin by tracing the development of a number of different knowledge-modeling approaches. These approaches share many features, but their differences make it difficult to compare systems that have been modeled using different approaches. We present these approaches and describe their similarities and differences. We then give a detailed description , based on the task structure, of our knowledge-modeling approach and illustrate it with task structures for diagnosis and design. Finally, we show how the task structure can be used to compare and unify the other approaches. A knowledge-based system (KBS) has explicit representations of knowledge as well as inference processes that operate on these representations to achieve a goal. An inference process consists of a number of inference steps, each step creating additional knowledge. The process of applying inference steps is repeated until the information needed to fulfill the requirements of the problem-solving goal or task is generated. Typically, both domain knowledge and possible inference steps have to be modeled and represented in some form. In one sense, knowledge is of general utility-the same piece can be utilized in different contexts and problems; so, unlike traditional procedural approaches, knowledge should not be tied to one task or goal. On the other hand, it is difficult to know what knowledge to put in a system without having an idea of the tasks the KBS will confront. In spite of claims of generality, all KBSs are designed with some task or class of tasks in mind. Similarly, they are designed to be operational across some range of domains. Thus, a clear understanding of the relationship between tasks, knowledge and inferences required to perform the task is needed before knowledge in …",
"title": ""
},
{
"docid": "46e0cfd4cb292331cb1f6a746a3ed3b7",
"text": "Indoor human tracking is fundamental to many real-world applications such as security surveillance, behavioral analysis, and elderly care. Previous solutions usually require dedicated device being carried by the human target, which is inconvenient or even infeasible in scenarios such as elderly care and break-ins. However, compared with device-based tracking, device-free tracking is particularly challenging because the much weaker reflection signals are employed for tracking. The problem becomes even more difficult with commodity Wi-Fi devices, which have limited number of antennas, small bandwidth size, and severe hardware noise.\n In this work, we propose IndoTrack, a device-free indoor human tracking system that utilizes only commodity Wi-Fi devices. IndoTrack is composed of two innovative methods: (1) Doppler-MUSIC is able to extract accurate Doppler velocity information from noisy Wi-Fi Channel State Information (CSI) samples; and (2) Doppler-AoA is able to determine the absolute trajectory of the target by jointly estimating target velocity and location via probabilistic co-modeling of spatial-temporal Doppler and AoA information. Extensive experiments demonstrate that IndoTrack can achieve a 35cm median error in human trajectory estimation, outperforming the state-of-the-art systems and provide accurate location and velocity information for indoor human mobility and behavioral analysis.",
"title": ""
},
{
"docid": "dabe8a7bff4a9d3ba910744804579b74",
"text": "Charitable giving is influenced by many social, psychological, and economic factors. One common way to encourage individuals to donate to charities is by offering to match their contribution (often by their employer or by the government). Conitzer and Sandholm introduced the idea of using auctions to allow individuals to offer to match the contribution of others. We explore this idea in a social network setting, where individuals care about the contribution of their neighbors, and are allowed to specify contributions that are conditional on the contribution of their neighbors.\n We give a mechanism for this setting that raises the largest individually rational contributions given the conditional bids, and analyze the equilibria of this mechanism in the case of linear utilities. We show that if the social network is strongly connected, the mechanism always has an equilibrium that raises the maximum total contribution (which is the contribution computed according to the true utilities); in other words, the price of stability of the game defined by this mechanism is one. Interestingly, although the mechanism is not dominant strategy truthful (and in fact, truthful reporting need not even be a Nash equilibrium of this game), this result shows that the mechanism always has a full-information equilibrium which achieves the same outcome as in the truthful scenario. Of course, there exist cases where the maximum total contribution even with true utilities is zero: we show that the existence of non-zero equilibria can be characterized exactly in terms of the largest eigenvalue of the utility matrix associated with the social network.",
"title": ""
},
{
"docid": "e31af9137176dd39efe0a9e286dd981b",
"text": "This paper presents a novel automated procedure for discovering expressive shape specifications for sophisticated functional data structures. Our approach extracts potential shape predicates based on the definition of constructors of arbitrary user-defined inductive data types, and combines these predicates within an expressive first-order specification language using a lightweight data-driven learning procedure. Notably, this technique requires no programmer annotations, and is equipped with a type-based decision procedure to verify the correctness of discovered specifications. Experimental results indicate that our implementation is both efficient and effective, capable of automatically synthesizing sophisticated shape specifications over a range of complex data types, going well beyond the scope of existing solutions.",
"title": ""
},
{
"docid": "3a2729b235884bddc05dbdcb6a1c8fc9",
"text": "The people of Tumaco-La Tolita culture inhabited the borders of present-day Colombia and Ecuador. Already extinct by the time of the Spaniards arrival, they left a huge collection of pottery artifacts depicting everyday life; among these, disease representations were frequently crafted. In this article, we present the results of the personal examination of the largest collections of Tumaco-La Tolita pottery in Colombia and Ecuador; cases of Down syndrome, achondroplasia, mucopolysaccharidosis I H, mucopolysaccharidosis IV, a tumor of the face and a benign tumor in an old woman were found. We believe these to be among the earliest artistic representations of disease.",
"title": ""
},
{
"docid": "141e3ad8619577140f02a1038981ecb2",
"text": "Sponges are sessile benthic filter-feeding animals, which harbor numerous microorganisms. The enormous diversity and abundance of sponge associated bacteria envisages sponges as hot spots of microbial diversity and dynamics. Many theories were proposed on the ecological implications and mechanism of sponge-microbial association, among these, the biosynthesis of sponge derived bioactive molecules by the symbiotic bacteria is now well-indicated. This phenomenon however, is not exhibited by all marine sponges. Based on the available reports, it has been well established that the sponge associated microbial assemblages keep on changing continuously in response to environmental pressure and/or acquisition of microbes from surrounding seawater or associated macroorganisms. In this review, we have discussed nutritional association of sponges with its symbionts, interaction of sponges with other eukaryotic organisms, dynamics of sponge microbiome and sponge-specific microbial symbionts, sponge-coral association etc.",
"title": ""
},
{
"docid": "43dfbf378a47cadf6868eb9bac22a4cd",
"text": "Maximum power point tracking (MPPT) techniques are employed in photovoltaic (PV) systems to make full utilization of the PV array output power which depends on solar irradiation and ambient temperature. Among all the MPPT strategies, perturbation and observation (P&O) and hill climbing methods are widely applied in the MPPT controllers due to their simplicity and easy implementation. In this paper, both P&O and hill climbing methods are adopted to implement a grid-connected PV system. Their performance is evaluated and compared through theoretical analysis and digital simulation. P&O MPPT method exhibits fast dynamic performance and well regulated PV output voltage, which is more suitable than hill climbing method for grid-connected PV system.",
"title": ""
},
{
"docid": "10318d39b3ad18779accbf29b2f00fcd",
"text": "Designing convolutional neural networks (CNN) models for mobile devices is challenging because mobile models need to be small and fast, yet still accurate. Although significant effort has been dedicated to design and improve mobile models on all three dimensions, it is challenging to manually balance these trade-offs when there are so many architectural possibilities to consider. In this paper, we propose an automated neural architecture search approach for designing resourceconstrained mobile CNN models. We propose to explicitly incorporate latency information into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and latency. Unlike in previous work, where mobile latency is considered via another, often inaccurate proxy (e.g., FLOPS), in our experiments, we directly measure real-world inference latency by executing the model on a particular platform, e.g., Pixel phones. To further strike the right balance between flexibility and search space size, we propose a novel factorized hierarchical search space that permits layer diversity throughout the network. Experimental results show that our approach consistently outperforms state-of-the-art mobile CNN models across multiple vision tasks. On the ImageNet classification task, our model achieves 74.0% top-1 accuracy with 76ms latency on a Pixel phone, which is 1.5× faster than MobileNetV2 (Sandler et al. 2018) and 2.4× faster than NASNet (Zoph et al. 2018) with the same top-1 accuracy. On the COCO object detection task, our model family achieves both higher mAP quality and lower latency than MobileNets.",
"title": ""
}
] |
scidocsrr
|
ceb54b73265a1e4b83291b7ebc422ee1
|
Learning to detect malicious URLs
|
[
{
"docid": "00410fcb0faa85d5423ccf0a7cc2f727",
"text": "Phishing is form of identity theft that combines social engineering techniques and sophisticated attack vectors to harvest financial information from unsuspecting consumers. Often a phisher tries to lure her victim into clicking a URL pointing to a rogue page. In this paper, we focus on studying the structure of URLs employed in various phishing attacks. We find that it is often possible to tell whether or not a URL belongs to a phishing attack without requiring any knowledge of the corresponding page data. We describe several features that can be used to distinguish a phishing URL from a benign one. These features are used to model a logistic regression filter that is efficient and has a high accuracy. We use this filter to perform thorough measurements on several million URLs and quantify the prevalence of phishing on the Internet today",
"title": ""
}
] |
[
{
"docid": "c2558388fb20454fa6f4653b1e4ab676",
"text": "Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks, recursive learning is used to control the model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms state of the art in SISR, while utilizing far fewer parameters. Code is available at https://github.com/tyshiwo/DRRN_CVPR17.",
"title": ""
},
{
"docid": "34a4643c98d0d756cc06b8bef5e560c6",
"text": "Many real-world optimization problems are large-scale in nature. In order to solve these problems, an optimization algorithm is required that is able to apply a global search regardless of the problems’ particularities. This paper proposes a self-adaptive differential evolution algorithm, called jDElscop, for solving large-scale optimization problems with continuous variables. The proposed algorithm employs three strategies and a population size reduction mechanism. The performance of the jDElscop algorithm is evaluated on a set of benchmark problems provided for the Special Issue on the Scalability of Evolutionary Algorithms and other Metaheuristics for Large Scale Continuous Optimization Problems. Non-parametric statistical procedures were performed for multiple comparisons between the proposed algorithm and three well-known algorithms from literature. The results show that the jDElscop algorithm can deal with large-scale continuous optimization effectively. It also behaves significantly better than other three algorithms used in the comparison, in most cases.",
"title": ""
},
{
"docid": "93780b7740292f368bcb52d9d2ca6ec3",
"text": "Most artworks are explicitly created to evoke a strong emotional response. During the centuries there were several art movements which employed different techniques to achieve emotional expressions conveyed by artworks. Yet people were always consistently able to read the emotional messages even from the most abstract paintings. Can a machine learn what makes an artwork emotional? In this work, we consider a set of 500 abstract paintings from Museum of Modern and Contemporary Art of Trento and Rovereto (MART), where each painting was scored as carrying a positive or negative response on a Likert scale of 1-7. We employ a state-of-the-art recognition system to learn which statistical patterns are associated with positive and negative emotions. Additionally, we dissect the classification machinery to determine which parts of an image evokes what emotions. This opens new opportunities to research why a specific painting is perceived as emotional. We also demonstrate how quantification of evidence for positive and negative emotions can be used to predict the way in which people observe paintings.",
"title": ""
},
{
"docid": "13ac8eddda312bd4ef3ba194c076a6ea",
"text": "With the Yahoo Flickr Creative Commons 100 Million (YFCC100m) dataset, a novel dataset was introduced to the computer vision and multimedia research community. To maximize the benefit for the research community and utilize its potential, this dataset has to be made accessible by tools allowing to search for target concepts within the dataset and mechanism to browse images and videos of the dataset. Following best practice from data collections, such as ImageNet and MS COCO, this paper presents means of accessibility for the YFCC100m dataset. This includes a global analysis of the dataset and an online browser to explore and investigate subsets of the dataset in real-time. Providing statistics of the queried images and videos will enable researchers to refine their query successively, such that the users desired subset of interest can be narrowed down quickly. The final set of image and video can be downloaded as URLs from the browser for further processing.",
"title": ""
},
{
"docid": "f898a6d7e3a5e9cced5b9da69ef40204",
"text": "Software readability is a property that influences how easily a given piece of code can be read and understood. Since readability can affect maintainability, quality, etc., programmers are very concerned about the readability of code. If automatic readability checkers could be built, they could be integrated into development tool-chains, and thus continually inform developers about the readability level of the code. Unfortunately, readability is a subjective code property, and not amenable to direct automated measurement. In a recently published study, Buse et al. asked 100 participants to rate code snippets by readability, yielding arguably reliable mean readability scores of each snippet; they then built a fairly complex predictive model for these mean scores using a large, diverse set of directly measurable source code properties. We build on this work: we present a simple, intuitive theory of readability, based on size and code entropy, and show how this theory leads to a much sparser, yet statistically significant, model of the mean readability scores produced in Buse's studies. Our model uses well-known size metrics and Halstead metrics, which are easily extracted using a variety of tools. We argue that this approach provides a more theoretically well-founded, practically usable, approach to readability measurement.",
"title": ""
},
{
"docid": "f65d5366115da23c8acd5bce1f4a9887",
"text": "Effective crisis management has long relied on both the formal and informal response communities. Social media platforms such as Twitter increase the participation of the informal response community in crisis response. Yet, challenges remain in realizing the formal and informal response communities as a cooperative work system. We demonstrate a supportive technology that recognizes the existing capabilities of the informal response community to identify needs (seeker behavior) and provide resources (supplier behavior), using their own terminology. To facilitate awareness and the articulation of work in the formal response community, we present a technology that can bridge the differences in terminology and understanding of the task between the formal and informal response communities. This technology includes our previous work using domain-independent features of conversation to identify indications of coordination within the informal response community. In addition, it includes a domain-dependent analysis of message content (drawing from the ontology of the formal response community and patterns of language usage concerning the transfer of property) to annotate social media messages. The resulting repository of annotated messages is accessible through our social media analysis tool, Twitris. It allows recipients in the formal response community to sort on resource needs and availability along various dimensions including geography and time. Thus, computation indexes the original social media content and enables complex querying to identify contents, players, and locations. Evaluation of the computed annotations for seeker-supplier behavior with human judgment shows fair to moderate agreement. In addition to the potential benefits to the formal emergency response community regarding awareness of the observations and activities of the informal response community, the analysis serves as a point of reference for evaluating more computationally intensive efforts and characterizing the patterns of language behavior during a crisis.",
"title": ""
},
{
"docid": "d61496b6cb9e323ff907ac51ebb7f4a6",
"text": "The reconstruction of a surface model from a point cloud is an important task in the reverse engineering of industrial parts. We aim at constructing a curve network on the point cloud that will define the border of the various surface patches. In this paper, we present an algorithm to extract closed sharp feature lines, which is necessary to create such a closed curve network. We use a first order segmentation to extract candidate feature points and process them as a graph to recover the sharp feature lines. To this end, a minimum spanning tree is constructed and afterwards a reconnection procedure closes the lines. The algorithm is fast and gives good results for real-world point sets from industrial applications. c © 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6b14bd3e01eb3f4abfc6d7456cf7fd47",
"text": "Fermented foods and beverages were among the first processed food products consumed by humans. The production of foods such as yogurt and cultured milk, wine and beer, sauerkraut and kimchi, and fermented sausage were initially valued because of their improved shelf life, safety, and organoleptic properties. It is increasingly understood that fermented foods can also have enhanced nutritional and functional properties due to transformation of substrates and formation of bioactive or bioavailable end-products. Many fermented foods also contain living microorganisms of which some are genetically similar to strains used as probiotics. Although only a limited number of clinical studies on fermented foods have been performed, there is evidence that these foods provide health benefits well-beyond the starting food materials.",
"title": ""
},
{
"docid": "9bbc3e426c7602afaa857db85e754229",
"text": "Knowledge bases of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge bases are typically incomplete, it is useful to be able to perform link prediction, i.e., predict whether a relationship not in the knowledge base is likely to be true. This paper combines insights from several previous link prediction models into a new embedding model STransE that represents each entity as a lowdimensional vector, and each relation by two matrices and a translation vector. STransE is a simple combination of the SE and TransE models, but it obtains better link prediction performance on two benchmark datasets than previous embedding models. Thus, STransE can serve as a new baseline for the more complex models in the link prediction task.",
"title": ""
},
{
"docid": "e337e9d2b9d4d5d047bb1809d214ee61",
"text": "This low-cost indoor navigation system runs on off-the-shelf camera phones. More than 2,000 users at four different large-scale events have already used it. The system uses built-in cameras to determine user location in real time by detecting unobtrusive fiduciary markers. The required infrastructure is limited to paper markers and static digital maps, and common devices are used, facilitating quick deployment in new environments. The authors have studied the application quantitatively in a controlled environment and qualitatively during deployment at four large international events. According to test users, marker-based navigation is easier to use than conventional mobile digital maps. Moreover, the users' location awareness in navigation tasks improved. Experiences drawn from questionnaires, usage log data, and user interviews further highlight the benefits of this approach.",
"title": ""
},
{
"docid": "2b086723a443020118b7df7f4021b4d9",
"text": "Random undersampling and oversampling are simple but well-known resampling methods applied to solve the problem of class imbalance. In this paper we show that the random oversampling method can produce better classification results than the random undersampling method, since the oversampling can increase the minority class recognition rate by sacrificing less amount of majority class recognition rate than the undersampling method. However, the random oversampling method would increase the computational cost associated with the SVM training largely due to the addition of new training examples. In this paper we present an investigation carried out to develop efficient resampling methods that can produce comparable classification results to the random oversampling results, but with the use of less amount of data. The main idea of the proposed methods is to first select the most informative data examples located closer to the class boundary region by using the separating hyperplane found by training an SVM model on the original imbalanced dataset, and then use only those examples in resampling. We demonstrate that it would be possible to obtain comparable classification results to the random oversampling results through two sets of efficient resampling methods which use 50% less amount of data and 75% less amount of data, respectively, compared to the sizes of the datasets generated by the random oversampling method.",
"title": ""
},
{
"docid": "f9af6cca7d9ac18ace9bc6169b4393cc",
"text": "Metric learning has become a widespreadly used tool in machine learning. To reduce expensive costs brought in by increasing dimensionality, low-rank metric learning arises as it can be more economical in storage and computation. However, existing low-rank metric learning algorithms usually adopt nonconvex objectives, and are hence sensitive to the choice of a heuristic low-rank basis. In this paper, we propose a novel low-rank metric learning algorithm to yield bilinear similarity functions. This algorithm scales linearly with input dimensionality in both space and time, therefore applicable to highdimensional data domains. A convex objective free of heuristics is formulated by leveraging trace norm regularization to promote low-rankness. Crucially, we prove that all globally optimal metric solutions must retain a certain low-rank structure, which enables our algorithm to decompose the high-dimensional learning task into two steps: an SVD-based projection and a metric learning problem with reduced dimensionality. The latter step can be tackled efficiently through employing a linearized Alternating Direction Method of Multipliers. The efficacy of the proposed algorithm is demonstrated through experiments performed on four benchmark datasets with tens of thousands of dimensions.",
"title": ""
},
{
"docid": "178d4712ef7dfa7a770ce1ebb702b24c",
"text": "In this article we present an overview on the state of the art in games solved in the domain of twoperson zero-sum games with perfect information. The results are summarized and some predictions for the near future are given. The aim of the article is to determine which game characteristics are predominant when the solution of a game is the main target. First, it is concluded that decision complexity is more important than state-space complexity as a determining factor. Second, we conclude that there is a trade-off between knowledge-based methods and brute-force methods. It is shown that knowledge-based methods are more appropriate for solving games with a low decision complexity, while brute-force methods are more appropriate for solving games with a low state-space complexity. Third, we found that there is a clear correlation between the first-player’s initiative and the necessary effort to solve a game. In particular, threat-space-based search methods are sometimes able to exploit the initiative to prove a win. Finally, the most important results of the research involved, the development of new intelligent search methods, are described. 2001 Published by Elsevier Science B.V.",
"title": ""
},
{
"docid": "a03c836f972bd8d91864d939735ed9f6",
"text": "A bow-tie antenna based on a slot configuration in a single metal sheet on top of a very thin flexible substrate is introduced. The antenna is constructed from two slotted right-angle triangles fed by a coplanar waveguide transmission line. The topology is very simple and extremely easy to tune in order to reach the proper characteristics after mounting on a supporting structure. A prototype is designed, fabricated, and characterized experimentally. The tunability is proven by considering both the version in free space and the version for use on a brick wall. Measurements demonstrate good agreement with simulations. Both versions cover the wireless local area network (2.4 and 3.65 GHz) and WiMax (2.3, 2.5, and 3.5 GHz) spectra, with an overall impedance bandwidth of 1.79 GHz (57.7%) and 1.46 GHz (49.7%), respectively. The radiation of the antenna is bidirectional with maximum gains of 6.30 and 5.09 dBi for the free space and brick wall versions, respectively.",
"title": ""
},
{
"docid": "e4694f9cdbc8756398e5996b9cd78989",
"text": "In this paper, a 3D computer vision system for cognitive assessment and rehabilitation based on the Kinect device is presented. It is intended for individuals with body scheme dysfunctions and left-right confusion. The system processes depth information to overcome the shortcomings of a previously presented 2D vision system for the same application. It achieves left and right-hand tracking, and face and facial feature detection (eye, nose, and ears) detection. The system is easily implemented with a consumer-grade computer and an affordable Kinect device and is robust to drastic background and illumination changes. The system was tested and achieved a successful monitoring percentage of 96.28%. The automation of the human body parts motion monitoring, its analysis in relation to the psychomotor exercise indicated to the patient, and the storage of the result of the realization of a set of exercises free the rehabilitation experts of doing such demanding tasks. The vision-based system is potentially applicable to other tasks with minor changes.",
"title": ""
},
{
"docid": "eda6795cb79e912a7818d9970e8ca165",
"text": "This study aimed to examine the relationship between maximum leg extension strength and sprinting performance in youth elite male soccer players. Sixty-three youth players (12.5 ± 1.3 years) performed 5 m, flying 15 m and 20 m sprint tests and a zigzag agility test on a grass field using timing gates. Two days later, subjects performed a one-repetition maximum leg extension test (79.3 ± 26.9 kg). Weak to strong correlations were found between leg extension strength and the time to perform 5 m (r = -0.39, p = 0.001), flying 15 m (r = -0.72, p < 0.001) and 20 m (r = -0.67, p < 0.001) sprints; between body mass and 5 m (r = -0.43, p < 0.001), flying 15 m (r = -0.75, p < 0.001), 20 m (r = -0.65, p < 0.001) sprints and agility (r =-0.29, p < 0.001); and between height and 5 m (r = -0.33, p < 0.01) and flying 15 m (r = -0.74, p < 0.001) sprints. Our results show that leg muscle strength and anthropometric variables strongly correlate with sprinting ability. This suggests that anthropometric characteristics should be considered to compare among youth players, and that youth players should undergo strength training to improve running speed.",
"title": ""
},
{
"docid": "b85ca4a4b564fcb61001fd13332ddc65",
"text": "Although the archaeological site of Edzná is one of the more accessible Mayan ruins, being located scarcely 60 km to the southeast of the port-city of Campeche, it has until recently escaped the notice which its true significance would seem to merit. Not only does it appear to have been the earliest major Mayan urban center, dating to the middle of the second century before the Christian era and having served as the focus of perhaps as many as 20, 000 inhabitants, but there is also a growing body of evidence to suggest that it played a key role in the development of Mayan astronomy and calendrics. Among the innovations that seemingly had their origin in Edzná are the Maya's fixing of their New Year's Day, the concept of \"year bearers\", and what is probably the oldest lunar observatory in the New World.",
"title": ""
},
{
"docid": "1be8fa2ade3d8547044d06bd07b6fc1e",
"text": "Gastric rupture with necrosis following acute gastric dilatation (AGD) is a rare and potentially fatal event; usually seen in patients with eating disorders such as anorexia nervosa or bulimia. A 12-year-old lean boy with no remarkable medical history was brought to our Emergency Department suffering acute abdominal symptoms. Emergency laparotomy revealed massive gastric dilatation and partial necrosis, with rupture of the anterior wall of the fundus of the stomach. We performed partial gastrectomy and the patient recovered uneventfully. We report this case to demonstrate that AGD and subsequent gastric rupture can occur in patients without any underlying disorders and that just a low body mass index is a risk factor for this potentially fatal condition.",
"title": ""
},
{
"docid": "211037c38a50ff4169f3538c3b6af224",
"text": "In this paper we present a method to obtain a depth map from a single image of a scene by exploiting both image content and user interaction. Assuming that regions with low gradients will have similar depth values, we formulate the problem as an optimization process across a graph, where pixels are considered as nodes and edges between neighbouring pixels are assigned weights based on the image gradient. Starting from a number of userdefined constraints, depth values are propagated between highly connected nodes i.e. with small gradients. Such constraints include, for example, depth equalities and inequalities between pairs of pixels, and may include some information about perspective. This framework provides a depth map of the scene, which is useful for a number of applications.",
"title": ""
},
{
"docid": "c9968b5dbe66ad96605c88df9d92a2fb",
"text": "We present an analysis of the population dynamics and demographics of Amazon Mechanical Turk workers based on the results of the survey that we conducted over a period of 28 months, with more than 85K responses from 40K unique participants. The demographics survey is ongoing (as of November 2017), and the results are available at http://demographics.mturk-tracker.com: we provide an API for researchers to download the survey data. We use techniques from the field of ecology, in particular, the capture-recapture technique, to understand the size and dynamics of the underlying population. We also demonstrate how to model and account for the inherent selection biases in such surveys. Our results indicate that there are more than 100K workers available in Amazon»s crowdsourcing platform, the participation of the workers in the platform follows a heavy-tailed distribution, and at any given time there are more than 2K active workers. We also show that the half-life of a worker on the platform is around 12-18 months and that the rate of arrival of new workers balances the rate of departures, keeping the overall worker population relatively stable. Finally, we demonstrate how we can estimate the biases of different demographics to participate in the survey tasks, and show how to correct such biases. Our methodology is generic and can be applied to any platform where we are interested in understanding the dynamics and demographics of the underlying user population.",
"title": ""
}
] |
scidocsrr
|
e84c8e4b16672d8baa4e370a4dead84d
|
Seq-NMS for Video Object Detection
|
[
{
"docid": "5300e9938a545895c8b97fe6c9d06aa5",
"text": "Background subtraction is a common computer vision task. We analyze the usual pixel-level approach. We develop an efficient adaptive algorithm using Gaussian mixture probability density. Recursive equations are used to constantly update the parameters and but also to simultaneously select the appropriate number of components for each pixel.",
"title": ""
}
] |
[
{
"docid": "4a741431c708cd92a250bcb91e4f1638",
"text": "PURPOSE\nIn today's workplace, nurses are highly skilled professionals possessing expertise in both information technology and nursing. Nursing informatics competencies are recognized as an important capability of nurses. No established guidelines existed for nurses in Asia. This study focused on identifying the nursing informatics competencies required of nurses in Taiwan.\n\n\nMETHODS\nA modified Web-based Delphi method was used for two expert groups in nursing, educators and administrators. Experts responded to 323 items on the Nursing Informatics Competencies Questionnaire, modified from the initial work of Staggers, Gassert and Curran to include 45 additional items. Three Web-based Delphi rounds were conducted. Analysis included detailed item analysis. Competencies that met 60% or greater agreement of item importance and appropriate level of nursing practice were included.\n\n\nRESULTS\nN=32 experts agreed to participate in Round 1, 23 nursing educators and 9 administrators. The participation rates for Rounds 2 and 3=68.8%. By Round 3, 318 of 323 nursing informatics competencies achieved required consensus levels. Of the new competencies, 42 of 45 were validated. A high degree of agreement existed for specific nursing informatics competencies required for nurses in Taiwan (97.8%).\n\n\nCONCLUSIONS\nThis study provides a current master list of nursing informatics competency requirements for nurses at four levels in the U.S. and Taiwan. The results are very similar to the original work of Staggers et al. The results have international relevance because of the global importance of information technology for the nursing profession.",
"title": ""
},
{
"docid": "9973dab94e708f3b87d52c24b8e18672",
"text": "We show that two popular discounted reward natural actor-critics, NAC-LSTD and eNAC, follow biased estimates of the natural policy gradient. We derive the first unbiased discounted reward natural actor-critics using batch and iterative approaches to gradient estimation and prove their convergence to globally optimal policies for discrete problems and locally optimal policies for continuous problems. Finally, we argue that the bias makes the existing algorithms more appropriate for the average reward setting.",
"title": ""
},
{
"docid": "62da9a85945652f195086be0ef780827",
"text": "Fingerprint biometric is one of the most successful biometrics applied in both forensic law enforcement and security applications. Recent developments in fingerprint acquisition technology have resulted in touchless live scan devices that generate 3D representation of fingerprints, and thus can overcome the deformation and smearing problems caused by conventional contact-based acquisition techniques. However, there are yet no 3D full fingerprint databases with their corresponding 2D prints for fingerprint biometric research. This paper presents a 3D fingerprint database we have established in order to investigate the 3D fingerprint biometric comprehensively. It consists of 3D fingerprints as well as their corresponding 2D fingerprints captured by two commercial fingerprint scanners from 150 subjects in Australia. Besides, we have tested the performance of 2D fingerprint verification, 3D fingerprint verification, and 2D to 3D fingerprint verification. The results show that more work is needed to improve the performance of 2D to 3D fingerprint verification. In addition, the database is expected to be released publicly in late 2014.",
"title": ""
},
{
"docid": "23866b968903087ae9b2b18444a0720b",
"text": "This paper presents a monocular vision based 3D bicycle tracking framework for intelligent vehicles based on a detection method exploiting a deformable part model and a tracking method using an Interacting Multiple Model (IMM) algorithm. Bicycle tracking is important because bicycles share the road with vehicles and can move at comparable speeds in urban environments. From a computer vision standpoint, bicycle detection is challenging as bicycle's appearance can change dramatically between viewpoints and a person riding on the bicycle is a non-rigid object. To this end, we present a tracking-by-detection method to detect and track bicycles that takes into account these difficult issues. First, a mixture model of multiple viewpoints is defined and trained via a Latent Support Vector Machine (LSVM) to detect bicycles under a variety of circumstances. Each model uses a part-based representation. This robust bicycle detector provides a series of measurements (i.e., bounding boxes) in the context of the Kalman filter. Second, to exploit the unique characteristics of bicycle tracking, two motion models based on bicycle's kinematics are fused using an IMM algorithm. For each motion model, an extended Kalman filter (EKF) is used to estimate the position and velocity of a bicycle in the vehicle coordinates. Finally, a single bicycle tracking method using an IMM algorithm is extended to that of multiple bicycle tracking by incorporating a Rao-Blackwellized Particle Filter which runs a particle filter for a data association and an IMM filter for each bicycle tracking. We demonstrate the effectiveness of this approach through a series of experiments run on a new bicycle dataset captured from a vehicle-mounted camera.",
"title": ""
},
{
"docid": "4e7106a78dcf6995090669b9a25c9551",
"text": "In this paper partial discharges (PD) in disc-shaped cavities in polycarbonate are measured at variable frequency (0.01-100 Hz) of the applied voltage. The advantage of PD measurements at variable frequency is that more information about the insulation system may be extracted than from traditional PD measurements at a single frequency (usually 50/60 Hz). The PD activity in the cavity is seen to depend on the applied frequency. Moreover, the PD frequency dependence changes with the applied voltage amplitude, the cavity diameter, and the cavity location (insulated or electrode bounded). It is suggested that the PD frequency dependence is governed by the statistical time lag of PD and the surface charge decay in the cavity. This is the first of two papers addressing the frequency dependence of PD in a cavity. In the second paper a physical model of PD in a cavity at variable applied frequency is presented.",
"title": ""
},
{
"docid": "96b4e076448b9db96eae08620fdac98c",
"text": "Incident Response has always been an important aspect of Information Security but it is often overlooked by security administrators. Responding to an incident is not solely a technical issue but has many management, legal, technical and social aspects that are presented in this paper. We propose a detailed management framework along with a complete structured methodology that contains best practices and recommendations for appropriately handling a security incident. We also present the state-of-the art technology in computer, network and software forensics as well as automated trace-back artifacts, schemas and protocols. Finally, we propose a generic Incident Response process within a corporate environment. © 2005 Elsevier Science. All rights reserved",
"title": ""
},
{
"docid": "08bf0d5065ce44e4b15cd2a982f440d2",
"text": "In this paper we present a hybrid approach for automatic composition of web services that generates semantic input-output based compositions with optimal end-to-end QoS, minimizing the number of services of the resulting composition. The proposed approach has four main steps: 1) generation of the composition graph for a request; 2) computation of the optimal composition that minimizes a single objective QoS function; 3) multi-step optimizations to reduce the search space by identifying equivalent and dominated services; and 4) hybrid local-global search to extract the optimal QoS with the minimum number of services. An extensive validation with the datasets of the Web Service Challenge 2009-2010 and randomly generated datasets shows that: 1) the combination of local and global optimization is a general and powerful technique to extract optimal compositions in diverse scenarios; and 2) the hybrid strategy performs better than the state-of-the-art, obtaining solutions with less services and optimal QoS.",
"title": ""
},
{
"docid": "d6a6cadd782762e4591447b7dd2c870a",
"text": "OBJECTIVE\nThe objective of this study was to assess the effects of participation in a mindfulness meditation-based stress reduction program on mood disturbance and symptoms of stress in cancer outpatients.\n\n\nMETHODS\nA randomized, wait-list controlled design was used. A convenience sample of eligible cancer patients enrolled after giving informed consent and were randomly assigned to either an immediate treatment condition or a wait-list control condition. Patients completed the Profile of Mood States and the Symptoms of Stress Inventory both before and after the intervention. The intervention consisted of a weekly meditation group lasting 1.5 hours for 7 weeks plus home meditation practice.\n\n\nRESULTS\nNinety patients (mean age, 51 years) completed the study. The group was heterogeneous in type and stage of cancer. Patients' mean preintervention scores on dependent measures were equivalent between groups. After the intervention, patients in the treatment group had significantly lower scores on Total Mood Disturbance and subscales of Depression, Anxiety, Anger, and Confusion and more Vigor than control subjects. The treatment group also had fewer overall Symptoms of Stress; fewer Cardiopulmonary and Gastrointestinal symptoms; less Emotional Irritability, Depression, and Cognitive Disorganization; and fewer Habitual Patterns of stress. Overall reduction in Total Mood Disturbance was 65%, with a 31% reduction in Symptoms of Stress.\n\n\nCONCLUSIONS\nThis program was effective in decreasing mood disturbance and stress symptoms in both male and female patients with a wide variety of cancer diagnoses, stages of illness, and ages. cancer, stress, mood, intervention, mindfulness.",
"title": ""
},
{
"docid": "457f2508c59daaae9af818f8a6a963d1",
"text": "Robotic systems hold great promise to assist with household, educational, and research tasks, but the difficulties of designing and building such robots often are an inhibitive barrier preventing their development. This paper presents a framework in which simple robots can be easily designed and then rapidly fabricated and tested, paving the way for greater proliferation of robot designs. The Python package presented in this work allows for the scripted generation of mechanical elements, using the principles of hierarchical structure and modular reuse to simplify the design process. These structures are then manufactured using an origami-inspired method in which precision cut sheets of plastic film are folded to achieve desired geometries. Using these processes, lightweight, low cost, rapidly built quadrotors were designed and fabricated. Flight tests compared the resulting robots against similar micro air vehicles (MAVs) generated using other processes. Despite lower tolerance and precision, robots generated using the process presented in this work took significantly less time and cost to design and build, and yielded lighter, lower power MAVs.",
"title": ""
},
{
"docid": "39fc7b710a6d8b0fdbc568b48221de5d",
"text": "The framework of cognitive wireless networks is expected to endow the wireless devices with the cognition-intelligence ability with which they can efficiently learn and respond to the dynamic wireless environment. In many practical scenarios, the complexity of network dynamics makes it difficult to determine the network evolution model in advance. Thus, the wireless decision-making entities may face a black-box network control problem and the model-based network management mechanisms will be no longer applicable. In contrast, model-free learning enables the decision-making entities to adapt their behaviors based on the reinforcement from their interaction with the environment and (implicitly) build their understanding of the system from scratch through trial-and-error. Such characteristics are highly in accordance with the requirement of cognition-based intelligence for devices in cognitive wireless networks. Therefore, model-free learning has been considered as one key implementation approach to adaptive, self-organized network control in cognitive wireless networks. In this paper, we provide a comprehensive survey on the applications of the state-of-the-art model-free learning mechanisms in cognitive wireless networks. According to the system models on which those applications are based, a systematic overview of the learning algorithms in the domains of single-agent system, multiagent systems, and multiplayer games is provided. The applications of model-free learning to various problems in cognitive wireless networks are discussed with the focus on how the learning mechanisms help to provide the solutions to these problems and improve the network performance over the model-based, non-adaptive methods. Finally, a broad spectrum of challenges and open issues is discussed to offer a guideline for the future research directions.",
"title": ""
},
{
"docid": "9a5f5e43ac46255445268d4298af0a4c",
"text": "Object removal is a topic highly involved in a wide range of image reconstruction applications such as restoration of corrupted or defected images, scene reconstruction, and film post-production. In recent years, there have been many efforts in the industry and academia to develop better algorithms for this subject. This paper discusses some of the recent work and various techniques currently adopted in this field and presents our algorithmic design that enhance the existing pixel-filling framework, and our post-inpaint refinement steps. This paper will further layout the implementation details and experimental results of our algorithm, on a mixture of images from both the standard image processing study papers and our own photo library. Results from our proposed methods will be evaluated and compared to the previous works in academia and other state-of-the-art approaches, with elaboration on the advantages and disadvantages. This paper will conclude with discussing some of the challenges encountered during the design and experiment phases and proposing potential steps to take in the future.",
"title": ""
},
{
"docid": "76afcc3dfbb06f2796b61c8b5b424ad8",
"text": "Predicting context-dependent and non-literal utterances like sarcastic and ironic expressions still remains a challenging task in NLP, as it goes beyond linguistic patterns, encompassing common sense and shared knowledge as crucial components. To capture complex morpho-syntactic features that can usually serve as indicators for irony or sarcasm across dynamic contexts, we propose a model that uses character-level vector representations of words, based on ELMo. We test our model on 7 different datasets derived from 3 different data sources, providing state-of-the-art performance in 6 of them, and otherwise offering competitive results.",
"title": ""
},
{
"docid": "e75ec4137b0c559a1c375d97993448b0",
"text": "In recent years, consumer-class UAVs have come into public view and cyber security starts to attract the attention of researchers and hackers. The tasks of positioning, navigation and return-to-home (RTH) of UAV heavily depend on GPS. However, the signal structure of civil GPS used by UAVs is completely open and unencrypted, and the signal received by ground devices is very weak. As a result, GPS signals are vulnerable to jamming and spoofing. The development of software define radio (SDR) has made GPS-spoofing easy and costless. GPS-spoofing may cause UAVs to be out of control or even hijacked. In this paper, we propose a novel method to detect GPS-spoofing based on monocular camera and IMU sensor of UAV. Our method was demonstrated on the UAV of DJI Phantom 4.",
"title": ""
},
{
"docid": "a3ace9ac6ae3f3d2dd7e02bd158a5981",
"text": "The problem of combining preferences arises in several applications, such as combining the results of different search engines. This work describes an efficient algorithm for combining multiple preferences. We first give a formal framework for the problem. We then describe and analyze a new boosting algorithm for combining preferences called RankBoost. We also describe an efficient implementation of the algorithm for certain natural cases. We discuss two experiments we carried out to assess the performance of RankBoost. In the first experiment, we used the algorithm to combine different WWW search strategies, each of which is a query expansion for a given domain. For this task, we compare the performance of RankBoost to the individual search strategies. The second experiment is a collaborative-filtering task for making movie recommendations. Here, we present results comparing RankBoost to nearest-neighbor and regression algorithms. Thesis Supervisor: David R. Karger Title: Associate Professor",
"title": ""
},
{
"docid": "db383295c34b919b2e2e859cfdf82fc2",
"text": "Wafer level packages (WLPs) with various design configurations are rapidly gaining tremendous applications throughout semiconductor industry due to small-form factor, low-cost, and high performance. Because of the innovative production processes utilized in WLP manufacturing and the accompanying rise in the price of gold, the traditional wire bonding packages are no longer as attractive as they used to be. In addition, WLPs provide the smallest form factor to satisfy multifunctional device requirements along with improved signal integrity for today’s handheld electronics. Existing wire bonding devices can be easily converted to WLPs by adding a redistribution layer (RDL) during backend wafer level processing. Since the input/output (I/O) pads do not have to be routed to the perimeter of the die, the WLP die can be designed to have a much smaller footprint as compared to its wire bonding counterpart, which means more area-array dies can be packed onto a single wafer to reduce overall processing costs per die. Conventional (fan-in) WLPs are formed on the dies while they are still on the uncut wafer. The result is that the final packaged product is the same size as the die itself. Recently, fan-out WLPs have emerged. Fan-out WLP starts with the reconstitution or reconfiguration of individual dies to an artificial molded wafer. Fan-out WLPs eliminate the need of expensive substrate as in flip-chip packages, while expanding the WLP size with molding compound for higher I/O applications without compromising on the board level reliability. Essentially, WLP enables the next generation of portable electronics at a competitive price. Many future products using through-silicon-via (TSV) technology will be packaged as WLPs. There have been relatively few publications focused on the latest results of WLP development and research. Many design guidelines, such as material selection and geometry dimensions of under bump metallurgy (UBM), RDL, passivation and solder alloy, for optimum board level reliability performance of WLPs, are still based on technical know-how gained from flip-chip or wire bonding BGA reliability studies published in the past two decades. However, WLPs have their unique product requirements for design guidelines, process conditions, material selection, reliability tests, and failure analysis. In addition, WLP is also an enabling technology for 3D package and system-in-package (SIP), justifying significant research attention. The timing is therefore ripe for this edition to summarize the state-of-the-art research advances in wafer level packaging in various fields of interest. Integration of WLP in 3D packages with TSV or wireless proximity communication (PxC), as well as applications in Microelectromechanical Systems (MEMS) packaging and power packaging, will be highlighted in this issue. In addition, the stateof-the-art simulation is applied to design for enhanced package and board level reliability of WLPs, including thermal cycling test,",
"title": ""
},
{
"docid": "9fe531efea8a42f4fff1fe0465493223",
"text": "Time series classification has been around for decades in the data-mining and machine learning communities. In this paper, we investigate the use of convolutional neural networks (CNN) for time series classification. Such networks have been widely used in many domains like computer vision and speech recognition, but only a little for time series classification. We design a convolutional neural network that consists of two convolutional layers. One drawback with CNN is that they need a lot of training data to be efficient. We propose two ways to circumvent this problem: designing data-augmentation techniques and learning the network in a semi-supervised way using training time series from different datasets. These techniques are experimentally evaluated on a benchmark of time series datasets.",
"title": ""
},
{
"docid": "e07198de4fe8ea55f2c04ba5b6e9423a",
"text": "Query expansion (QE) is a well known technique to improve retrieval effectiveness, which expands original queries with extra terms that are predicted to be relevant. A recent trend in the literature is Supervised Query Expansion (SQE), where supervised learning is introduced to better select expansion terms. However, an important but neglected issue for SQE is its efficiency, as applying SQE in retrieval can be much more time-consuming than applying Unsupervised Query Expansion (UQE) algorithms. In this paper, we point out that the cost of SQE mainly comes from term feature extraction, and propose a Two-stage Feature Selection framework (TFS) to address this problem. The first stage is adaptive expansion decision, which determines if a query is suitable for SQE or not. For unsuitable queries, SQE is skipped and no term features are extracted at all, which reduces the most time cost. For those suitable queries, the second stage is cost constrained feature selection, which chooses a subset of effective yet inexpensive features for supervised learning. Extensive experiments on four corpora (including three academic and one industry corpus) show that our TFS framework can substantially reduce the time cost for SQE, while maintaining its effectiveness.",
"title": ""
},
{
"docid": "dbcef163643232313207cd45402158de",
"text": "Every industry has significant data output as a product of their working process, and with the recent advent of big data mining and integrated data warehousing it is the case for a robust methodology for assessing the quality for sustainable and consistent processing. In this paper a review is conducted on Data Quality (DQ) in multiple domains in order to propose connections between their methodologies. This critical review suggests that within the process of DQ assessment of heterogeneous data sets, not often are they treated as separate types of data in need of an alternate data quality assessment framework. We discuss the need for such a directed DQ framework and the opportunities that are foreseen in this research area and propose to address it through degrees of heterogeneity.",
"title": ""
},
{
"docid": "fe687739626916780ff22d95cf89f758",
"text": "In this paper, we address the problem of jointly summarizing large sets of Flickr images and YouTube videos. Starting from the intuition that the characteristics of the two media types are different yet complementary, we develop a fast and easily-parallelizable approach for creating not only high-quality video summaries but also novel structural summaries of online images as storyline graphs. The storyline graphs can illustrate various events or activities associated with the topic in a form of a branching network. The video summarization is achieved by diversity ranking on the similarity graphs between images and video frames. The reconstruction of storyline graphs is formulated as the inference of sparse time-varying directed graphs from a set of photo streams with assistance of videos. For evaluation, we collect the datasets of 20 outdoor activities, consisting of 2.7M Flickr images and 16K YouTube videos. Due to the large-scale nature of our problem, we evaluate our algorithm via crowdsourcing using Amazon Mechanical Turk. In our experiments, we demonstrate that the proposed joint summarization approach outperforms other baselines and our own methods using videos or images only.",
"title": ""
},
{
"docid": "6ae289d7da3e923c1288f39fd7a162f6",
"text": "The usage of digital evidence from electronic devices has been rapidly expanding within litigation, and along with this increased usage, the reliance upon forensic computer examiners to acquire, analyze, and report upon this evidence is also rapidly growing. This growing demand for forensic computer examiners raises questions concerning the selection of individuals qualified to perform this work. While courts have mechanisms for qualifying witnesses that provide testimony based on scientific data, such as digital data, the qualifying criteria covers a wide variety of characteristics including, education, experience, training, professional certifications, or other special skills. In this study, we compare task performance responses from forensic computer examiners with an expert review panel and measure the relationship with the characteristics of the examiners to their quality responses. The results of this analysis provide insight into identifying forensic computer examiners that provide high-quality responses.",
"title": ""
}
] |
scidocsrr
|
38de4bf2a344a3623f51453818cd0939
|
Highway Vehicle Counting in Compressed Domain
|
[
{
"docid": "b6260c8d87bdab38bbebb821def51f6b",
"text": "The understanding of crowd behaviour in semi-confined spaces is an important part of the design of new pedestrian facilities, for major layout modifications to existing areas and for the daily management of sites subject to crowd traffic. Conventional manual measurement techniques are not suitable for comprehensive data collection of patterns of site occupation and movement. Real-time monitoring is tedious and tiring, but safety-critical. This article presents some image processing techniques which, using existing closed-circuit television systems, can support both data collection and on-line monitoring of crowds. The application of these methods could lead to a better understanding of crowd behaviour, improved design of the built environment and increased pedestrian safety.",
"title": ""
},
{
"docid": "c9b6f91a7b69890db88b929140f674ec",
"text": "Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework. We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians.",
"title": ""
}
] |
[
{
"docid": "afa3fa35061b54c1ca662f0885b2e4be",
"text": "This paper discusses an analytical study that quantifies the expected earthquake-induced losses in typical office steel frame buildings designed with perimeter special moment frames in highly seismic regions. It is shown that for seismic events associated with low probabilities of occurrence, losses due to demolition and collapse may be significantly overestimated when the expected loss computations are based on analytical models that ignore the composite beam effects and the interior gravity framing system of a steel frame building. For frequently occurring seismic events building losses are dominated by non-structural content repairs. In this case, the choice of the analytical model representation of the steel frame building becomes less important. Losses due to demolition and collapse in steel frame buildings with special moment frames designed with strong-column/weak-beam ratio larger than 2.0 are reduced by a factor of two compared with those in the same frames designed with a strong-column/weak-beam ratio larger than 1.0 as recommended in ANSI/AISC-341-10. The expected annual losses (EALs) of steel frame buildings with SMFs vary from 0.38% to 0.74% over the building life expectancy. The EALs are dominated by repairs of accelerationsensitive non-structural content followed by repairs of drift-sensitive non-structural components. It is found that the effect of strong-column/weak-beam ratio on EALs is negligible. This is not the case when the present value of life-cycle costs is selected as a loss-metric. It is advisable to employ a combination of loss-metrics to assess the earthquake-induced losses in steel frame buildings with special moment frames depending on the seismic performance level of interest. Copyright c © 2017 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "32a45d3c08e24d29ad5f9693253c0e9e",
"text": "This paper presents comparative study of high-speed, low-power and low voltage full adder circuits. Our approach is based on XOR-XNOR design full adder circuits in a single unit. A low power and high performance 9T full adder cell using a design style called “XOR (3T)” is discussed. The designed circuit commands a high degree of regularity and symmetric higher density than the conventional CMOS design style as well as it lowers power consumption by using XOR (3T) logic circuits. Gate Diffusion Input (GDI) technique of low-power digital combinatorial circuit design is also described. This technique helps in reducing the power consumption and the area of digital circuits while maintaining low complexity of logic design. This paper analyses, evaluates and compares the performance of various adder circuits. Several simulations conducted using different voltage supplies, load capacitors and temperature variation demonstrate the superiority of the XOR (3T) based full adder designs in term of delay, power and power delay product (PDP) compared to the other full adder circuits. Simulation results illustrate the superiority of the designed adder circuits against the conventional CMOS, TG and Hybrid full adder circuits in terms of power, delay and power delay product (PDP). .",
"title": ""
},
{
"docid": "a1507122f15eed60ee77b0829599827b",
"text": "In this letter, we present a substrate-integrated waveguide (SIW) to air-filled rectangular waveguide (WG) transition in multilayer liquid crystal polymer substrates at W-band. The proposed transition is achieved by using a SIW-fed linearly flared antipodal slot line inserted into a WG. To minimize packaging leakage, no guided vias are utilized in the slot line region. Full-wave simulation reveals a low insertion loss of 0.8 dB for the proposed transition at W-band. To experimentally demonstrate this design, a WG-to-SIW-to-WG transition was fabricated and characterized. The measured insertion and reflection loss of the back-to-back transition is less than 1.8 and −11 dB, respectively, at W-band.",
"title": ""
},
{
"docid": "19075b16bbae94d024e4cdeaa7f6427e",
"text": "Nutrient timing is a popular nutritional strategy that involves the consumption of combinations of nutrients--primarily protein and carbohydrate--in and around an exercise session. Some have claimed that this approach can produce dramatic improvements in body composition. It has even been postulated that the timing of nutritional consumption may be more important than the absolute daily intake of nutrients. The post-exercise period is widely considered the most critical part of nutrient timing. Theoretically, consuming the proper ratio of nutrients during this time not only initiates the rebuilding of damaged muscle tissue and restoration of energy reserves, but it Does So in a supercompensated fashion that enhances both body composition and exercise performance. Several researchers have made reference to an anabolic \"window of opportunity\" whereby a limited time exists after training to optimize training-related muscular adaptations. However, the importance - and even the existence - of a post-exercise 'window' can vary according to a number of factors. Not only is nutrient timing research open to question in terms of applicability, but recent evidence has directly challenged the classical view of the relevance of post-exercise nutritional intake with respect to anabolism. Therefore, the purpose of this paper will be twofold: 1) to review the existing literature on the effects of nutrient timing with respect to post-exercise muscular adaptations, and; 2) to draw relevant conclusions that allow practical, evidence-based nutritional recommendations to be made for maximizing the anabolic response to exercise.",
"title": ""
},
{
"docid": "f95f65ae362ceeaaa924c33e02899553",
"text": "The massive proliferation of affordable computers, Internet broadband connectivity and rich education content has created a global phenomenon in which information an d communication technology (ICT) is being used to tra nsform education. Therefore, there is a need to redesign t he educational system to meet the needs better. The advent of comp uters with sophisticated software has made it possible to solv e many complex problems very fast and at a lower cost. This paper introduces the characteristics of the current E-Learning and then analyses the concept of cloud computing and describes the archit e ture of cloud computing platform by combining the features of E-L earning. The authors have tried to introduce cloud computing to e-learning, build an e-learning cloud, and make an active research an d exploration for it from the following aspects: architecture, const ruc ion method and external interface with the model. Keywords—Architecture, Cloud Computing, E-learning, Information Technology",
"title": ""
},
{
"docid": "6e5115960e61b9e85bb9a171e4860cad",
"text": "Extracting semantic relationships between entities is challenging because of a paucity of annotated data and the errors induced by entity detection modules. We employ Maximum Entropy models to combine diverse lexical, syntactic and semantic features derived from the text. Our system obtained competitive results in the Automatic Content Extraction (ACE) evaluation. Here we present our general approach and describe our ACE results.",
"title": ""
},
{
"docid": "659bf57d758481a4920f1ba203012895",
"text": "A photovoltaic (PV) panel model is at the heart of an accurate performance model for a large PV farm. This paper presents an algorithm to calculate the parameters of the one-diode model of PV modules based solely on the manufacturer's datasheets. The important feature of this algorithm is that through reformulation of the characteristic equations at various points of the current-voltage (I-V) curve, the unknown model parameters can be determined analytically. This is in contrast to many existing models which choose a value for one parameter and then calculate the other parameters through simultaneous solution of the system of equations. The calculated I-V curve is then compared with the manufacturer's curve to validate the proposed algorithm and quantify the modeling error.",
"title": ""
},
{
"docid": "b6f04270b265cd5a0bb7d0f9542168fb",
"text": "This paper presents design and manufacturing procedure of a tele-operative rescue robot. First, the general task to be performed by such a robot is defined, and variant kinematic mechanisms to form the basic structure of the robot are discussed. Choosing an appropriate mechanism, geometric dimensions, and mass properties are detailed to develop a dynamics model for the system. Next, the strength of each component is analyzed to finalize its shape. To complete the design procedure, Patran/Nastran was used to apply the finite element method for strength analysis of complicated parts. Also, ADAMS was used to model the mechanisms, where 3D sketch of each component of the robot was generated by means of Solidworks, and several sets of equations governing the dimensions of system were solved using Matlab. Finally, the components are fabricated and assembled together with controlling hardware. Two main processors are used within the control system of the robot. The operator's PC as the master processor and the laptop installed on the robot as the slave processor. The performance of the system was demonstrated in Rescue robot league of RoboCup 2005 in Osaka (Japan) and achieved the 2nd best design award",
"title": ""
},
{
"docid": "c5e0ba5e8ceb8c684366b4aae1a43dc2",
"text": "This document proposes to make a contribution to the conceptualization and implementation of data recovery techniques through the abstraction of recovery methodologies and aspects that influence the process, relating human motivation to research needs, whether these are for the Auditing or computer science, allowing to generate classification of recovery techniques in the absence of the metadata provided by the filesystem, in this sense have been proposed to file carving techniques as a solution option. Finally, it is revealed that while many file carving techniques are being implemented in other tools, they are still in the research phase.",
"title": ""
},
{
"docid": "ae7fe70c3774a9496522afe37ddd011c",
"text": "Forum threads are lengthy and rich in content. Concise thread summaries will benefit both newcomers seeking information and those who participate in the discussion. Few studies, however, have examined the task of forum thread summarization. In this work we make the first attempt to adapt the hierarchical attention networks for thread summarization. The model draws on the recent development of neural attention mechanisms to build sentence and thread representations and use them for summarization. Our results indicate that the proposed approach can outperform a range of competitive baselines. Further, a redundancy removal step is crucial for achieving outstanding results.",
"title": ""
},
{
"docid": "5b8a6629c10a06e6b4e79399960d8b03",
"text": "Results of an experimental investigation of structural response of Shear failure is generally brittle in concrete structures. shear beams made of a special class of cementitious composites, Examples of concrete structural failure related to shear referred to as engineered cementitious composites (ECCs), are reloading include bridge deck punching failure [4], corported. ECCs are designed with tailored material structure and have bel failure [5], anchor bolt pull-out [6], and segmental been shoum to exhibit pseudo strain-hardening tensile behavior. The bridge shear key failure [7]. A goal of the work preimproved performance in shear over conventional plain, fibersented here is to modify the brittle failure mode by reinforced, and wire mesh reinforced concrete is demonstrated. It is taking advantage of the unique material behavior of suggested that ECCs can be utilized for structural applications ECCs. There has been a lot of research into the use of where superior ductility and durability performance are desired, fibers as a replacement for shear reinforcement to enADVANCED CEMENT BASED MATERIALS 1993, 1, 142--149 hance shear capacity of concrete beams to ensure a",
"title": ""
},
{
"docid": "b8b75ba5d3bddf88869fe1fc3f5d5076",
"text": "This paper presents a LabVIEW-aided PID designed controller to monitor DC motor speed and uses the software simulation of VisSim to analysis its response. First, design the drive circuit of DC motor and as the feedback signal through the photo sensor and 8051 chip modules to produce rotational speed signal, and show from SEG7 displays. To take out the analog signal through D/A converter at the same time, acquire the signal via the NIDAQ USB-6008 card. By the LabVIEW-aided PID controller, the parameters are adjusted to control the motor speed. The front panel will display the speed of DC motor on the screen. The simulation results are quite match with the theoretical prediction for the behavior of the PID controller. In this proposed paper, it demonstrates the humanized operation interface that not only can replace the traditional instrument, but also facilitate the amateur engineer's operation under the remote control and monitor.",
"title": ""
},
{
"docid": "ac48f87cee17829e3403a080dc077fd9",
"text": "Common Public Radio Interface (CPRI) is a successful industry cooperation defining the publicly available specification for the key internal interface of radio base stations between the radio equipment control (REC) and the radio equipment (RE) in the fronthaul of mobile networks. However, CPRI is expensive to deploy, consumes large bandwidth, and currently is statically configured. On the other hand, an Ethernet-based mobile fronthaul will be cost-efficient and more easily reconfigurable. Encapsulating CPRI over Ethernet (CoE) is an attractive solution, but stringent CPRI requirements such as delay and jitter are major challenges that need to be met to make CoE a reality. This study investigates whether CoE can meet delay and jitter requirements by performing FPGA-based Verilog experiments and simulations. Verilog experiments show that CoE encapsulation with fixed Ethernet frame size requires about tens of microseconds. Numerical experiments show that the proposed scheduling policy of CoE flows on Ethernet can reduce jitter when redundant Ethernet capacity is provided. The reduction in jitter can be as large as 1 μs, hence making Ethernet-based mobile fronthaul a credible technology.",
"title": ""
},
{
"docid": "1c9c93d1eff3904941516516a6390cdf",
"text": "BACKGROUND\nSyndesmosis sprains can contribute to chronic pain and instability, which are often indications for surgical intervention. The literature lacks sufficient objective data detailing the complex anatomy and localized osseous landmarks essential for current surgical techniques.\n\n\nPURPOSE\nTo qualitatively and quantitatively analyze the anatomy of the 3 syndesmotic ligaments with respect to surgically identifiable bony landmarks.\n\n\nSTUDY DESIGN\nDescriptive laboratory study.\n\n\nMETHODS\nSixteen ankle specimens were dissected to identify the anterior inferior tibiofibular ligament (AITFL), posterior inferior tibiofibular ligament (PITFL), interosseous tibiofibular ligament (ITFL), and bony anatomy. Ligament lengths, footprints, and orientations were measured in reference to bony landmarks by use of an anatomically based coordinate system and a 3-dimensional coordinate measuring device.\n\n\nRESULTS\nThe syndesmotic ligaments were identified in all specimens. The pyramidal-shaped ITFL was the broadest, originating from the distal interosseous membrane expansion, extending distally, and terminating 9.3 mm (95% CI, 8.3-10.2 mm) proximal to the central plafond. The tibial cartilage extended 3.6 mm (95% CI, 2.8-4.4 mm) above the plafond, a subset of which articulated directly with the fibular cartilage located 5.2 mm (95% CI, 4.6-5.8 mm) posterior to the anterolateral corner of the tibial plafond. The primary AITFL band(s) originated from the tibia 9.3 mm (95% CI, 8.6-10.0 mm) superior and medial to the anterolateral corner of the tibial plafond and inserted on the fibula 30.5 mm (95% CI, 28.5-32.4 mm) proximal and anterior to the inferior tip of the lateral malleolus. Superficial fibers of the PITFL originated along the distolateral border of the posterolateral tubercle of the tibia 8.0 mm (95% CI, 7.5-8.4 mm) proximal and medial to the posterolateral corner of the plafond and inserted along the medial border of the peroneal groove 26.3 mm (95% CI, 24.5-28.1 mm) superior and posterior to the inferior tip of the lateral malleolus.\n\n\nCONCLUSION\nThe qualitative and quantitative anatomy of the syndesmotic ligaments was reproducibly described and defined with respect to surgically identifiable bony prominences.\n\n\nCLINICAL RELEVANCE\nData regarding anatomic attachment sites and distances to bony prominences can optimize current surgical fixation techniques, improve anatomic restoration, and reduce the risk of iatrogenic injury from malreduction or misplaced implants. Quantitative data also provide the consistency required for the development of anatomic reconstructions.",
"title": ""
},
{
"docid": "c0f68f1b8b6fee87203f62baf133b793",
"text": "Modern PWM inverter output voltage has high dv/dt, which causes problems such as voltage doubling that can lead to insulation failure, ground currents that results in electromagnetic interference concerns. The IGBT switching device used in such inverter are becoming faster, exacerbating these problems. This paper proposes a new procedure for designing the LC clamp filter. The filter increases the rise time of the output voltage of inverter, resulting in smaller dv/dt. In addition suitable selection of resonance frequency gives LCL filter configuration with improved attenuation. By adding this filter at output terminal of inverter which uses long cable, voltage doubling effect is reduced at the motor terminal. The design procedure is carried out in terms of the power converter based per unit scheme. This generalizes the design procedure to a wide range of power level and to study optimum designs. The effectiveness of the design is verified by computer simulation and experimental measurements.",
"title": ""
},
{
"docid": "f001f2933b3c96fe6954e086488776e0",
"text": "Pd coated copper (PCC) wire and Au-Pd coated copper (APC) wire have been widely used in the field of LSI device. Recently, higher bond reliability at high temperature becomes increasingly important for on-vehicle devices. However, it has been reported that conventional PCC wire caused a bond failure at elevated temperatures. On the other hand, new-APC wire had higher reliability at higher temperature than conventional APC wire. New-APC wire has higher concentration of added element than conventional APC wire. In this paper, failure mechanism of conventional APC wire and improved mechanism of new-APC wire at high temperature were shown. New-APC wire is suitable for onvehicle devices.",
"title": ""
},
{
"docid": "8ec5b8ed868f7f413e50cfa18c5510f3",
"text": "In recent years, we have seen the emergence of multi-GS/s medium-to-high-resolution ADCs. Presently, SAR ADCs dominate low-speed applications and time-interleaved SARs are becoming increasingly popular for high-speed ADCs [1,2]. However the SAR architecture faces two key problems in simultaneously achieving multi-GS/s sample rates and high resolution: (1) the fundamental trade-off of comparator noise and speed is limiting the speed of single-channel SARs, and (2) highly time-interleaved ADCs introduce complex lane-to-lane mismatches that are difficult to calibrate with high accuracy. Therefore, pipelined [3] and pipelined-SAR [4] remain the most common architectural choices for high-speed high-resolution ADCs. In this work, a pipelined ADC achieves 4GS/s sample rate, using a 4-step capacitor and amplifier-sharing front-end MDAC architecture with 4-way sampling to reduce noise, distortion and power, while overcoming common issues for SHA-less ADCs.",
"title": ""
},
{
"docid": "d3f35e91d5d022de5fe816cf1234e415",
"text": "Rock mass description and characterisation is a basic task for exploration, mining work-flows and ground-water studies. Rock analysis can be performed using borehole logs that are created using a televiewer. Planar discontinuities in the rock appear as sinusoidal curves in borehole logs. The aim of this project is to develop a fast algorithm to analyse borehole imagery using image processing techniques, to identify and trace the discontinuities, and to perform quantitative analysis on their distribution.",
"title": ""
},
{
"docid": "2f742514ffec09ea1abf2d846ba630e1",
"text": "A number of high-level query languages, such as Hive, Pig, Flume, and Jaql, have been developed in recent years to increase analyst productivity when processing and analyzing very large datasets. The implementation of each of these languages includes a complete, data model-dependent query compiler, yet each involves a number of similar optimizations. In this work, we describe a new query compiler architecture that separates language-specific and data model-dependent aspects from a more general query compiler backend that can generate executable data-parallel programs for shared-nothing clusters and can be used to develop multiple languages with different data models. We have built such a data model-agnostic query compiler substrate, called Algebricks, and have used it to implement three different query languages --- HiveQL, AQL, and XQuery --- to validate the efficacy of this approach. Experiments show that all three query languages benefit from the parallelization and optimization that Algebricks provides and thus have good parallel speedup and scaleup characteristics for large datasets.",
"title": ""
},
{
"docid": "488b0adfe43fc4dbd9412d57284fc856",
"text": "We describe the results of an experiment in which several conventional programming languages, together with the functional language Haskell, were used to prototype a Naval Surface Warfare Center (NSWC) requirement for a Geometric Region Server. The resulting programs and development metrics were reviewed by a committee chosen by the Navy. The results indicate that the Haskell prototype took significantly less time to develop and was considerably more concise and easier to understand than the corresponding prototypes written in several different imperative languages, including Ada and C++. ∗This work was supported by the Advanced Research Project Agency and the Office of Naval Research under Arpa Order 8888, Contract N00014-92-C-0153.",
"title": ""
}
] |
scidocsrr
|
8e32a3c6a2dd9c5738dc92599bd954f1
|
"Facebook depression?" social networking site use and depression in older adolescents.
|
[
{
"docid": "dde075f427d729d028d6d382670f8346",
"text": "Using social media Web sites is among the most common activity of today's children and adolescents. Any Web site that allows social interaction is considered a social media site, including social networking sites such as Facebook, MySpace, and Twitter; gaming sites and virtual worlds such as Club Penguin, Second Life, and the Sims; video sites such as YouTube; and blogs. Such sites offer today's youth a portal for entertainment and communication and have grown exponentially in recent years. For this reason, it is important that parents become aware of the nature of social media sites, given that not all of them are healthy environments for children and adolescents. Pediatricians are in a unique position to help families understand these sites and to encourage healthy use and urge parents to monitor for potential problems with cyberbullying, \"Facebook depression,\" sexting, and exposure to inappropriate content.",
"title": ""
}
] |
[
{
"docid": "12c7d6a47de9c56b852881b04eb3f32f",
"text": "Gaussian functions are suitable for describing many processes in mathematics, science, and engineering, making them very useful in the fields of signal and image processing. For example, the random noise in a signal, induced by complicated physical factors, can be simply modeled with the Gaussian distribution according to the central limit theorem from the probability theory.",
"title": ""
},
{
"docid": "3d3f5b45b939f926d1083bab9015e548",
"text": "Industry is facing an era characterised by unpredictable market changes and by a turbulent competitive environment. The key to compete in such a context is to achieve high degrees of responsiveness by means of high flexibility and rapid reconfiguration capabilities. The deployment of modular solutions seems to be part of the answer to face these challenges. Semantic modelling and ontologies may represent the needed knowledge representation to support flexibility and modularity of production systems, when designing a new system or when reconfiguring an existing one. Although numerous ontologies for production systems have been developed in the past years, they mainly focus on discrete manufacturing, while logistics aspects, such as those related to internal logistics and warehousing, have not received the same attention. The paper aims at offering a representation of logistics aspects, reflecting what has become a de-facto standard terminology in industry and among researchers in the field. Such representation is to be used as an extension to the already-existing production systems ontologies that are more focused on manufacturing processes. The paper presents the structure of the hierarchical relations within the examined internal logistics elements, namely Storage and Transporters, structuring them in a series of classes and sub-classes, suggesting also the relationships and the attributes to be considered to complete the modelling. Finally, the paper proposes an industrial example with a miniload system to show how such a modelling of internal logistics elements could be instanced in the real world. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "aa253406afd52c172885d9bd01e6451d",
"text": "Crop yield forecasting during the growing season is useful for farming planning and management practices as well as for planning humanitarian aid in developing countries. Common approaches to yield forecast include the use of expensive manual surveys or accessible remote sensing data. Traditional remote sensing based approaches to predict crop yield consist of classical Machine Learning techniques such as Support Vector Machines and Decision Trees. More recent approaches include using deep neural network models, such as CNN and LSTM. We identify the additional gaps in the literature of existing machine learning methods as lacking of (1) standardized training protocol that specifies the optimal time frame, both in terms of years and months of each year, to be considered in the training set, (2) verified applicability to developing countries under the condition of scarce data, and (3) effective utilization of spatial features in remote sensing images. In this thesis, we first replicate the state-of-the-art approach of You et al. [1], in particular their CNN model for crop yield prediction. To tackle the first identified gap, we then perform control experiments to determine the best temporal training settings for soybean yield prediction. To probe the second gap, we further investigate whether this CNN model could be trained on source locations and then be transfered to new target locations and conclude that it is necessary to use source regions that have a similar or generalizable ecosystem to the target regions. This allows us to assess the transferability of CNN-based regression models to developing countries, where little training data is available. Additionally, we propose a novel 3D CNN model for crop yield prediction task that leverages the spatiotemporal features. We demonstrate that our 3D CNN outperforms all competing machine learning methods, shedding light on promising future directions in utilizing deep learning tools for crop yield prediction.",
"title": ""
},
{
"docid": "177c86301b4dec3a8d86119520a0cb70",
"text": "This paper considers city-wide air quality estimation with limited available monitoring stations which are geographically sparse. Since air pollution is highly spatio-temporal (S-T) dependent and considerably influenced by urban dynamics (e.g., meteorology and traffic), we can infer the air quality not covered by monitoring stations with S-T heterogeneous urban big data. However, estimating air quality using S-T heterogeneous big data poses two challenges. The first challenge is due to with the data diversity, i.e., there are different categories of urban dynamics and some may be useless and even detrimental for the estimation. To overcome this, we first propose an S-T extended Granger causality model to analyze all the causalities among urban dynamics in a consistent manner. Then by implementing non-causality test, we rule out the urban dynamics that do not “Granger” cause air pollution. The second challenge is due to the time complexity when processing the massive volume of data. We propose to discover the region of influence (ROI) by selecting data with the highest causality levels spatially and temporally. Results show that we achieve higher accuracy using “part” of the data than “all” of the data. This may be explained by the most influential data eliminating errors induced by redundant or noisy data. The causality model observation and the city-wide air quality map are illustrated and visualized using data from Shenzhen, China.",
"title": ""
},
{
"docid": "9f18fbdbf3ae3f33702a60895cbcc22b",
"text": "Existing studies indicate that there exists strong correlation between personality and personal preference, thus personality could potentially be used to build more personalized recommender system. Personality traits are mainly measured by psychological questionnaires, and it is hard to obtain personality traits of large amount of users in real-world scenes.In this paper, we propose a new approach to automatically identify personality traits with Social Media contents in Chinese language environments. Social Media content features were extracted from 1766 Sina micro blog users, and the predicting model is trained with machine learning algorithms.The experimental results demonstrate that users' personality traits could be predicted from Social Media contents with acceptable Pearson Correlation, which makes it possible to develop user profiles for recommender system. In future, user profiles with predicted personality traits would be used to enhance the performance of existing personalized recommendation systems.",
"title": ""
},
{
"docid": "2c69729c72935eae8889843f9aee5f6b",
"text": "Some students, for a variety of factors, struggle to complete high school on time. To address this problem, school districts across the U.S. use intervention programs to help struggling students get back on track academically. Yet in order to best apply those programs, schools need to identify off-track students as early as possible and enroll them in the most appropriate intervention. Unfortunately, identifying and prioritizing students in need of intervention remains a challenging task. This paper describes work that builds on current systems by using advanced data science methods to produce an extensible and scalable predictive framework for providing partner U.S. public school districts with individual early warning indicator systems. Our framework employs machine learning techniques to identify struggling students and describe features that are useful for this task, evaluating these techniques using metrics important to school administrators. By doing so, our framework, developed with the common need of several school districts in mind, provides a common set of tools for identifying struggling students and the factors associated with their struggles. Further, by integrating data from disparate districts into a common system, our framework enables cross-district analyses to investigate common early warning indicators not just within a single school or district, but across the U.S. and beyond.",
"title": ""
},
{
"docid": "c6bd4cd6f90abf20f2619b1d1af33680",
"text": "General human action recognition requires understanding of various visual cues. In this paper, we propose a network architecture that computes and integrates the most important visual cues for action recognition: pose, motion, and the raw images. For the integration, we introduce a Markov chain model which adds cues successively. The resulting approach is efficient and applicable to action classification as well as to spatial and temporal action localization. The two contributions clearly improve the performance over respective baselines. The overall approach achieves state-of-the-art action classification performance on HMDB51, J-HMDB and NTU RGB+D datasets. Moreover, it yields state-of-the-art spatio-temporal action localization results on UCF101 and J-HMDB.",
"title": ""
},
{
"docid": "82592f60e0039089e3c16d9534780ad5",
"text": "A model for grey-tone image enhancement using the concept of fuzzy sets is suggested. It involves primary enhancement, smoothing, and then final enhancement. The algorithm for both the primary and final enhancements includes the extraction of fuzzy properties corresponding to pixels and then successive applications of the fuzzy operator \"contrast intensifier\" on the property plane. The three different smoothing techniques considered in the experiment are defocussing, averaging, and max-min rule over the neighbors of a pixel. The reduction of the \"index of fuzziness\" and \"entropy\" for different enhanced outputs (corresponding to different values of fuzzifiers) is demonstrated for an English script input. Enhanced output as obtained by histogram modification technique is also presented for comparison.",
"title": ""
},
{
"docid": "c351576cfab0b06a9e34beb14c895601",
"text": "In this paper the authors mainly aim at describing some organizational features of a particular kind of social enterprises that have emerged since the development of web 2.0: peer to peer charities and e-social banking. They will define first the traditional social enterprise and how this phenomenon has evolved in recent years. Then they will explain how the philosophy of Web 2.0 offers new opportunities for the development and growth of these social initiatives. Thirdly, they will detail their main features obtained from the study of twelve inititatives – the most relevant at present – which they have called 2.0 social enterprises (peer to peer charities and e-social banking). The authors will finally offer some reflection on main dilemmas and challenges that could be faced in a short term future. DOI:10.4018/978-1-61520-597-4.ch007 International Journal of E-Entrepreneurship and Innovation, 1(3), 32-47, July-September 2010 33 Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. entities that perform their work in developing countries, either directly or through local or international non-governmental organizations (NGOs). It is in this context where we can be locate peer to peer Charities, entities undoubtedly characterized by a social sense -their main targets are people with scarce resources, but very different from social enterprises of the end of the centurywhich are sources of funding for entrepreneurship, among other aspects. In third place we will mention the main characteristics of peer to peer Charities and social banking, using data obtained from various sources: primary and secondary. We will attempt to describe its founders are, their organizational design, processes and growth strategies, among others. Finally, as a result of obtained data and exposed features, we will propose a series of questions or dilemmas that can influence the development of such organizations in a short-term future because of their special characteristics. THE TRADITIONAL sOCIAL COMPANY As A sTARTING POINT Main Features of the “Traditional” social Entrepreneurship The term social entrepreneurship began to appear routinely both in the general-interest and specialist press in the early 1990s. First descriptions of social entrepreneurs ranged from “anyone who starts a not-forprofit” or “not-for-profit organizations starting for-profit or earned-income ventures” (Wolk, 2007) to “business owners who integrate social responsibility into their operations” (Dees, 2001). Famous are those social entreprises specialized in recycling such as Green Works in England (Clifford, & Dixon, 2005) or in biodynamic agriculture as Sekem (Seeclos, & Mahir, 2003). Although there is no universally agreed definition of a social enterprise, there appears to be a general consensus that it is a business with primarily social and/or environmental objectives, whose surpluses are principally reinvested for that purpose either in the business and/or a community rather than being driven by the need to maximize profits for shareholders and owners (DTI, 2002). In this definition, the following features are highlighted: double bottom line, commercial and autonomy orientation. Double and Triple Bottom Line Social enterprises can be distinguished from other nonprofits organizations by their strategies, structure and values (Dart, 2004). 
Social enterprises have two basic objetives, social and economic, which are integrated into their business strategy. Therefore, some authors suggested that the definition of entrepreneurship should be modified to include the creation of ‘social and economic value’ and thus applied to both private, entrepreneurial ventures as well as to social enterprises (Chell, 2007). This way, it is acknowledged that contributions in the creation of companies are not only economic but also social, which in the case of social enterprises will be obviously larger. Nowadays, the concept of the bottom line has expanded to include environmental (triple bottom line) outcomes. This trend has called the attention of policy makers and practitioners who are interested in the potential contribution of social enterprises to economic, social and/ or environmental regeneration and renewal. Business and Autonomy Orientation The specific objetives of social enterprises need different sources of income: businesses and nonprofit organizations (Hansmann, 1980). Donative nonprofits obtain their funds from donations and philanthropy, whereas social enterprise and nonprofits enterprises generate at least some of their revenue from trading (Figure 1). If they are in competition with other nonprofit and/or for-profit organizations for resources and customers (Hansmann, 1980; Steinberg, 1993), then their tax and fiscal benefits (Glaeser, & Shleifer, 2001) and close stakeholder relationships have the potential to be exploited to generate competitive advantage. 14 more pages are available in the full version of this document, which may be purchased using the \"Add to Cart\" button on the product's webpage: www.igi-global.com/article/social-entrepreneurship-socialinnovation/51593?camid=4v1 This title is available in InfoSci-Journals, InfoSci-Journal Disciplines Business, Administration, and Management, InfoSci-Digital Marketing, E-Business, and E-Services eJournal Collection, InfoSci-Select. Recommend this product",
"title": ""
},
{
"docid": "a4a6501af9edda1f7ede81d85a0f370b",
"text": "This paper discusses the development of new winding configuration for six-phase permanent-magnet (PM) machines with 18 slots and 8 poles, which eliminates and/or reduces undesirable space harmonics in the stator magnetomotive force. The proposed configuration improves power/torque density and efficiency with a reduction in eddy-current losses in the rotor permanent magnets and copper losses in end windings. To improve drive train availability for applications in electric vehicles (EVs), this paper proposes the design of a six-phase PM machine as two independent three-phase windings. A number of possible phase shifts between two sets of three-phase windings due to their slot-pole combination and winding configuration are investigated, and the optimum phase shift is selected by analyzing the harmonic distributions and their effect on machine performance, including the rotor eddy-current losses. The machine design is optimized for a given set of specifications for EVs, under electrical, thermal and volumetric constraints, and demonstrated by the experimental measurements on a prototype machine.",
"title": ""
},
{
"docid": "6ff6f2b0c7e7308ec6d7acdf4c3e5a47",
"text": "Event-related brain potentials (ERPs) were recorded from participants listening to or reading sentences that were correct, contained a violation of the required syntactic category, or contained a syntactic-category ambiguity. When sentences were presented auditorily (Experiment 1), there was an early left anterior negativity for syntactic-category violations, but not for syntactic-category ambiguities. Both anomaly types elicited a late centroparietally distributed positivity. When sentences were presented visually word by word (Experiment 2), again an early left anterior negativity was found only for syntactic-category violations, and both types of anomalies elicited a late positivity. The combined data are taken to be consistent with a 2-stage model of parsing, including a 1st stage, during which an initial phrase structure is built and a 2nd stage, during which thematic role assignment and, if necessary, reanalysis takes place. Disruptions to the 1st stage of syntactic parsing appear to be correlated with an early left anterior negativity, whereas disruptions to the 2nd stage might be correlated with a late posterior distributed positivity.",
"title": ""
},
{
"docid": "dfe1b0e97b838655d985d976977674b8",
"text": "This paper studies 24-GHz automotive radar technology for detecting low-friction spots caused by water, ice, or snow on asphalt. The backscattering properties of asphalt in different conditions are studied in both laboratory and field experiments. In addition, the effect of water on the backscattering properties of asphalt is studied with a surface scattering model. The results suggest that low-friction spots could be detected with a radar by comparing backscattered signals at different polarizations. The requirements for the radar are considered, and a 24-GHz radar for road-condition recognition is found to be feasible.",
"title": ""
},
{
"docid": "d3142d58a777bd86c460733011d27d3b",
"text": "Recent studies of distributional semantic models have set up a competition between word embeddings obtained from predictive neural networks and word vectors obtained from count-based models. This paper is an attempt to reveal the underlying contribution of additional training data and post-processing steps on each type of model in word similarity and relatedness inference tasks. We do so by designing an artificial language, training a predictive and a count-based model on data sampled from this grammar, and evaluating the resulting word vectors in paradigmatic and syntagmatic tasks defined with respect to the grammar.",
"title": ""
},
{
"docid": "12cd45e8832650d620695d4f5680148f",
"text": "OBJECTIVE\nCurrent systems to evaluate outcomes from tissue-engineered cartilage (TEC) are sub-optimal. The main purpose of our study was to demonstrate the use of second harmonic generation (SHG) microscopy as a novel quantitative approach to assess collagen deposition in laboratory made cartilage constructs.\n\n\nMETHODS\nScaffold-free cartilage constructs were obtained by condensation of in vitro expanded Hoffa's fat pad derived stromal cells (HFPSCs), incubated in the presence or absence of chondrogenic growth factors (GF) during a period of 21 d. Cartilage-like features in constructs were assessed by Alcian blue staining, transmission electron microscopy (TEM), SHG and two-photon excited fluorescence microscopy. A new scoring system, using second harmonic generation microscopy (SHGM) index for collagen density and distribution, was adapted to the existing \"Bern score\" in order to evaluate in vitro TEC.\n\n\nRESULTS\nSpheroids with GF gave a relative high Bern score value due to appropriate cell morphology, cell density, tissue-like features and proteoglycan content, whereas spheroids without GF did not. However, both TEM and SHGM revealed striking differences between the collagen framework in the spheroids and native cartilage. Spheroids required a four-fold increase in laser power to visualize the collagen matrix by SHGM compared to native cartilage. Additionally, collagen distribution, determined as the area of tissue generating SHG signal, was higher in spheroids with GF than without GF, but lower than in native cartilage.\n\n\nCONCLUSION\nSHG represents a reliable quantitative approach to assess collagen deposition in laboratory engineered cartilage, and may be applied to improve currently established scoring systems.",
"title": ""
},
{
"docid": "f90a52c7e3b081b944b2943b5a03bfda",
"text": "We investigate video classification via a two-stream convolutional neural network (CNN) design that directly ingests information extracted from compressed video bitstreams. Our approach begins with the observation that all modern video codecs divide the input frames into macroblocks (MBs). We demonstrate that selective access to MB motion vector (MV) information within compressed video bitstreams can also provide for selective, motion-adaptive, MB pixel decoding (a.k.a., MB texture decoding). This in turn allows for the derivation of spatio-temporal video activity regions at extremely high speed in comparison to conventional full-frame decoding followed by optical flow estimation. In order to evaluate the accuracy of a video classification framework based on such activity data, we independently train two CNN architectures on MB texture and MV correspondences and then fuse their scores to derive the final classification of each test video. Evaluation on two standard data sets shows that the proposed approach is competitive with the best two-stream video classification approaches found in the literature. At the same time: 1) a CPU-based realization of our MV extraction is over 977 times faster than GPU-based optical flow methods; 2) selective decoding is up to 12 times faster than full-frame decoding; and 3) our proposed spatial and temporal CNNs perform inference at 5 to 49 times lower cloud computing cost than the fastest methods from the literature.",
"title": ""
},
{
"docid": "01cb086e62adfeafe20a9fdc91554739",
"text": "Older adults are becoming an important market segment for all internet-based services, but few studies to date have considered older adults as online shoppers and users of entertainment media. Utilising the concept of life course, this article investigates the use of mobile technologies for online shopping and entertainment among consumers aged 55 to 74. The data were collected with a web-based survey completed by a panel of respondents representing Finnish television viewers (N=322). The results reveal that consumers aged 55 to 74 use a smartphone or tablet to purchase products or services online as often as younger consumers. In contrast, listening to internet radio and watching videos or programmes online with a smartphone or tablet are most typical for younger male consumers. The results demonstrate that mobile-based online shopping is best predicted by age, higher education, and household type (children living at home), and use of entertainment media by age and gender.",
"title": ""
},
{
"docid": "4a97f2f6bcd9ea1c1cd1bd925529fa4f",
"text": "OBJECTIVE\nArousal (AR) from sleep is associated with an autonomic reflex activation raising blood pressure and heart rate (HR). Recent studies indicate that sleep deprivation may affect the autonomic system, contributing to high vascular risk. Since in sleep disorders a sleep fragmentation and a partial sleep deprivation occurs, it could be suggested that the cardiovascular effects observed at AR from sleep might be physiologically affected when associated with sleep deprivation. The aim of the study was to examine the effect of sleep deprivation on cardiac arousal response in healthy subjects.\n\n\nMETHODS\nSeven healthy male subjects participated in a 64 h sleep deprivation protocol. Arousals were classified into four groups, i.e. >3<6 s, >6<10 s, >10<15 s and >15 s, according to their duration. Pre-AR HR values were measured during 10 beats preceding the AR onset, and the event-related HR fluctuations were calculated during the 20 beats following AR onset. As an index of cardiac activation, the ratio of highest HR in the post-AR period over the lowest recorded before AR (HR ratio) was calculated.\n\n\nRESULTS\nFor AR lasting less than 10 s, the occurrence of AR induces typical HR oscillations in a bimodal pattern, tachycardia followed by bradycardia. For AR lasting more than 10 s, i.e. awakenings, the pattern was unimodal with a more marked and sustained HR rise. The HR response was consistently similar across nights, during NREM and REM sleep, without difference between conditions.\n\n\nCONCLUSIONS\nOverall, total sleep deprivation appeared to have no substantial effect on cardiac response to spontaneous arousals and awakenings from sleep in healthy subjects. Further studies are needed to clarify the role of chronic sleep deprivation on cardiovascular risk in patients with sleep disorders.\n\n\nSIGNIFICANCE\nIn healthy subjects acute prolonged sleep deprivation does not affect the cardiac response to arousal.",
"title": ""
},
{
"docid": "0a1a1d5aafa092d6503a50a0a1adc75b",
"text": "In this paper, firstly, we introduce the Ql-Generating matrix for the bi-periodic Lucas numbers. Then, by taking into account this matrix representation, we obtain some properties for the bi-periodic Fibonacci and Lucas numbers.",
"title": ""
},
{
"docid": "7b65240df2cfd987be46933baed8e412",
"text": "Security has become a primary concern in order to provide protected communication between mobile nodes in a hostile environment. Unlike the wireline networks, the unique characteristics of mobile ad hoc networks pose a number of nontrivial challenges to security design, such as open peer-to-peer network architecture, shared wireless medium, stringent resource constraints, and highly dynamic network topology. These challenges clearly make a case for building multifence security solutions that achieve both broad protection and desirable network performance. In this article we focus on the fundamental security problem of protecting the multihop network connectivity between mobile nodes in a MANET. We identify the security issues related to this problem, discuss the challenges to security design, and review the state-of-the-art security proposals that protect the MANET link- and network-layer operations of delivering packets over the multihop wireless channel. The complete security solution should span both layers, and encompass all three security components of prevention, detection, and reaction.",
"title": ""
},
{
"docid": "c3c0e14aa82b438ceb92a84bcdbed184",
"text": "Advances in technology for miniature electronic military equipment and systems have led to the emergence of unmanned aerial vehicles (UAVs) as the new weapons of war and tools used in various other areas. UAVs can easily be controlled from a remote location. They are being used for critical operations, including offensive, reconnaissance, surveillance and other civilian missions. The need to secure these channels in a UAV system is one of the most important aspects of the security of this system because all information critical to the mission is sent through wireless communication channels. It is well understood that loss of control over these systems to adversaries due to lack of security is a potential threat to national security. In this paper various security threats to a UAV system is analyzed and a cyber-security threat model showing possible attack paths has been proposed. This model will help designers and users of the UAV systems to understand the threat profile of the system so as to allow them to address various system vulnerabilities, identify high priority threats, and select mitigation techniques for these threats.",
"title": ""
}
] |
scidocsrr
|
6fcad9c0cd061051c2b8c1c991718e40
|
One-shot learning of generative speech concepts
|
[
{
"docid": "65405e7f9b510f3a15d826e9969426f2",
"text": "Human concept learning is particularly impressive in two respects: the internal structure of concepts can be representationally rich, and yet the very same concepts can also be learned from just a few examples. Several decades of research have dramatically advanced our understanding of these two aspects of concepts. While the richness and speed of concept learning are most often studied in isolation, the power of human concepts may be best explained through their synthesis. This paper presents a large-scale empirical study of one-shot concept learning, suggesting that rich generative knowledge in the form of a motor program can be induced from just a single example of a novel concept. Participants were asked to draw novel handwritten characters given a reference form, and we recorded the motor data used for production. Multiple drawers of the same character not only produced visually similar drawings, but they also showed a striking correspondence in their strokes, as measured by their number, shape, order, and direction. This suggests that participants can infer a rich motorbased concept from a single example. We also show that the motor programs induced by individual subjects provide a powerful basis for one-shot classification, yielding far higher accuracy than state-of-the-art pattern recognition methods based on just the visual form.",
"title": ""
},
{
"docid": "418a5ef9f06f8ba38e63536671d605c1",
"text": "Learning visual models of object categories notoriously requires hundreds or thousands of training examples. We show that it is possible to learn much information about a category from just one, or a handful, of images. The key insight is that, rather than learning from scratch, one can take advantage of knowledge coming from previously learned categories, no matter how different these categories might be. We explore a Bayesian implementation of this idea. Object categories are represented by probabilistic models. Prior knowledge is represented as a probability density function on the parameters of these models. The posterior model for an object category is obtained by updating the prior in the light of one or more observations. We test a simple implementation of our algorithm on a database of 101 diverse object categories. We compare category models learned by an implementation of our Bayesian approach to models learned from by maximum likelihood (ML) and maximum a posteriori (MAP) methods. We find that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.",
"title": ""
},
{
"docid": "18969bed489bb9fa7196634a8086449e",
"text": "A speech recognition model is proposed in which the transformation from an input speech signal into a sequence of phonemes is carried out largely through an active or feedback process. In this process, patterns are generated internally in the analyzer according to an adaptable sequence of instructions until a best match with the input signal is obtained. Details of the process are given, and the areas where further research is needed are indicated.",
"title": ""
}
] |
[
{
"docid": "7b215780b323aa3672d34ca243b1cf46",
"text": "In this paper, we study the problem of semantic annotation on 3D models that are represented as shape graphs. A functional view is taken to represent localized information on graphs, so that annotations such as part segment or keypoint are nothing but 0-1 indicator vertex functions. Compared with images that are 2D grids, shape graphs are irregular and non-isomorphic data structures. To enable the prediction of vertex functions on them by convolutional neural networks, we resort to spectral CNN method that enables weight sharing by parametrizing kernels in the spectral domain spanned by graph Laplacian eigenbases. Under this setting, our network, named SyncSpecCNN, strives to overcome two key challenges: how to share coefficients and conduct multi-scale analysis in different parts of the graph for a single shape, and how to share information across related but different shapes that may be represented by very different graphs. Towards these goals, we introduce a spectral parametrization of dilated convolutional kernels and a spectral transformer network. Experimentally we tested SyncSpecCNN on various tasks, including 3D shape part segmentation and keypoint prediction. State-of-the-art performance has been achieved on all benchmark datasets.",
"title": ""
},
{
"docid": "7c87ec9ac7e5170e0ddaccadf992ea3f",
"text": "Social computational systems emerge in the wild on popular social networking sites like Facebook and Twitter, but there remains confusion about the relationship between social interactions and the technical traces of interaction left behind through use. Twitter interactions and social experience are particularly challenging to make sense of because of the wide range of tools used to access Twitter (text message, website, iPhone, TweetDeck and others), and the emergent set of practices for annotating message context (hashtags, reply to's and direct messaging). Further, Twitter is used as a back channel of communication in a wide range of contexts, ranging from disaster relief to watching television. Our study examines Twitter as a transport protocol that is used differently in different socio-technical contexts, and presents an analysis of how researchers might begin to approach studies of Twitter interactions with a more reflexive stance toward the application programming interfaces (APIs) Twitter provides. We conduct a careful review of existing literature examining socio-technical phenomena on Twitter, revealing a collective inconsistency in the description of data gathering and analysis methods. In this paper, we present a candidate architecture and methodological approach for examining specific parts of the Twittersphere. Our contribution begins a discussion among social media researchers on the topic of how to systematically and consistently make sense of the social phenomena that emerge through Twitter. This work supports the comparative analysis of Twitter studies and the development of social media theories.",
"title": ""
},
{
"docid": "39430478909e5818b242e0b28db419f0",
"text": "BACKGROUND\nA modified version of the Berg Balance Scale (mBBS) was developed for individuals with intellectual and visual disabilities (IVD). However, the concurrent and predictive validity has not yet been determined.\n\n\nAIM\nThe purpose of the current study was to evaluate the concurrent and predictive validity of the mBBS for individuals with IVD.\n\n\nMETHOD\nFifty-four individuals with IVD and Gross Motor Functioning Classification System (GMFCS) Levels I and II participated in this study. The mBBS, the Centre of Gravity (COG), the Comfortable Walking Speed (CWS), and the Barthel Index (BI) were assessed during one session in order to determine the concurrent validity. The percentage of explained variance was determined by analyzing the squared multiple correlation between the mBBS and the BI, COG, CWS, GMFCS, and age, gender, level of intellectual disability, presence of epilepsy, level of visual impairment, and presence of hearing impairment. Furthermore, an overview of the degree of dependence between the mBBS, BI, CWS, and COG was obtained by graphic modelling. Predictive validity of mBBS was determined with respect to the number of falling incidents during 26 weeks and evaluated with Zero-inflated regression models using the explanatory variables of mBBS, BI, COG, CWS, and GMFCS.\n\n\nRESULTS\nThe results demonstrated that two significant explanatory variables, the GMFCS Level and the BI, and one non-significant variable, the CWS, explained approximately 60% of the mBBS variance. Graphical modelling revealed that BI was the most important explanatory variable for mBBS moreso than COG and CWS. Zero-inflated regression on the frequency of falling incidents demonstrated that the mBBS was not predictive, however, COG and CWS were.\n\n\nCONCLUSIONS\nThe results indicated that the concurrent validity as well as the predictive validity of mBBS were low for persons with IVD.",
"title": ""
},
{
"docid": "29c4156e966f2e177a71d604b1883204",
"text": "This paper discusses the use of factorization techniques in distributional semantic models. We focus on a method for redistributing the weight of latent variables, which has previously been shown to improve the performance of distributional semantic models. However, this result has not been replicated and remains poorly understood. We refine the method, and provide additional theoretical justification, as well as empirical results that demonstrate the viability of the proposed approach.",
"title": ""
},
{
"docid": "5bd2ca6168ffd48c17c1178452a230bc",
"text": "Functional imaging studies have examined which brain regions respond to emotional stimuli, but they have not determined how stable personality traits moderate such brain activation. Two personality traits, extraversion and neuroticism, are strongly associated with emotional experience and may thus moderate brain reactivity to emotional stimuli. The present study used functional magnetic resonance imaging to directly test whether individual differences in brain reactivity to emotional stimuli are correlated with extraversion and neuroticism in healthy women. Extraversion was correlated with brain reactivity to positive stimuli in localized brain regions, and neuroticism was correlated with brain reactivity to negative stimuli in localized brain regions. This study provides direct evidence that personality is associated with brain reactivity to emotional stimuli and identifies both common and distinct brain regions where such modulation takes place.",
"title": ""
},
{
"docid": "16118317af9ae39ee95765616c5506ed",
"text": "Generative Adversarial Networks (GANs) are shown to be successful at generating new and realistic samples including 3D object models. Conditional GAN, a variant of GANs, allows generating samples in given conditions. However, objects generated for each condition are different and it does not allow generation of the same object in different conditions. In this paper, we first adapt conditional GAN, which is originally designed for 2D image generation, to the problem of generating 3D models in different rotations. We then propose a new approach to guide the network to generate the same 3D sample in different and controllable rotation angles (sample pairs). Unlike previous studies, the proposed method does not require modification of the standard conditional GAN architecture and it can be integrated into the training step of any conditional GAN. Experimental results and visual comparison of 3D models show that the proposed method is successful at generating model pairs in different conditions.",
"title": ""
},
{
"docid": "0728609462de3c7bb678c61ee25aa51a",
"text": "Current GPUs are massively parallel multicore processors optimised for workloads with a large degree of SIMD parallelism. Good performance requires highly idiomatic programs, whose development is work intensive and requires expert knowledge.\n To raise the level of abstraction, we propose a domain-specific high-level language of array computations that captures appropriate idioms in the form of collective array operations. We embed this purely functional array language in Haskell with an online code generator for NVIDIA's CUDA GPGPU programming environment. We regard the embedded language's collective array operations as algorithmic skeletons; our code generator instantiates CUDA implementations of those skeletons to execute embedded array programs.\n This paper outlines our embedding in Haskell, details the design and implementation of the dynamic code generator, and reports on initial benchmark results. These results suggest that we can compete with moderately optimised native CUDA code, while enabling much simpler source programs.",
"title": ""
},
{
"docid": "ae9de9ddc0a81a3607a1cb8ceb25280c",
"text": "The major chip manufacturers have all introduced chip multiprocessing (CMP) and simultaneous multithreading (SMT) technology into their processing units. As a result, even low-end computing systems and game consoles have become shared memory multiprocessors with L1 and L2 cache sharing within a chip. Mid- and large-scale systems will have multiple processing chips and hence consist of an SMP-CMP-SMT configuration with non-uniform data sharing overheads. Current operating system schedulers are not aware of these new cache organizations, and as a result, distribute threads across processors in a way that causes many unnecessary, long-latency cross-chip cache accesses.\n In this paper we describe the design and implementation of a scheme to schedule threads based on sharing patterns detected online using features of standard performance monitoring units (PMUs) available in today's processing units. The primary advantage of using the PMU infrastructure is that it is fine-grained (down to the cache line) and has relatively low overhead. We have implemented our scheme in Linux running on an 8-way Power5 SMP-CMP-SMT multi-processor. For commercial multithreaded server workloads (VolanoMark, SPECjbb, and RUBiS), we are able to demonstrate reductions in cross-chip cache accesses of up to 70%. These reductions lead to application-reported performance improvements of up to 7%.",
"title": ""
},
{
"docid": "c9b82acf253373d8fda3958fd2f2e508",
"text": "Network based communication is more vulnerable to outsider and insider attacks in recent days due to its wide spread applications in many fields. Intrusion Detection System (IDS) a software application or a hardware is a security mechanism that is able to monitor network traffic and find abnormal activities in the network. Machine learning techniques which have an important role in detecting the attacks were mostly used in the development of IDS. Due to huge increase in network traffic and different types of attacks, monitoring each and every packet in the network traffic is time consuming and computational intensive. Deep learning acts as a powerful tool by which thorough packet inspection and attack identification is possible. The parallel computing capabilities of the neural network make the Deep Neural Network (DNN) to effectively look through the network traffic with an accelerated performance. In this paper an accelerated DNN architecture is developed to identify the abnormalities in the network data. NSL-KDD dataset is used to compute the training time and to analyze the effectiveness of the detection mechanism.",
"title": ""
},
{
"docid": "5ea65d6e878d2d6853237a74dbc5a894",
"text": "We study indexing techniques for main memory, including hash indexes, binary search trees, T-trees, B+-trees, interpolation search, and binary search on arrays. In a decision-support context, our primary concerns are the lookup time, and the space occupied by the index structure. Our goal is to provide faster lookup times than binary search by paying attention to reference locality and cache behavior, without using substantial extra space. We propose a new indexing technique called “Cache-Sensitive Search Trees” (CSS-trees). Our technique stores a directory structure on top of a sorted array. Nodes in this directory have size matching the cache-line size of the machine. We store the directory in an array and do not store internal-node pointers; child nodes can be found by performing arithmetic on array offsets. We compare the algorithms based on their time and space requirements. We have implemented all of the techniques, and present a performance study on two popular modern machines. We demonstrate that with ∗This research was supported by a David and Lucile Packard Foundation Fellowship in Science and Engineering, by an NSF Young Investigator Award, by NSF grant number IIS-98-12014, and by NSF CISE award CDA-9625374. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment. Proceedings of the 25th VLDB Conference, Edinburgh, Scotland, 1999. a small space overhead, we can reduce the cost of binary search on the array by more than a factor of two. We also show that our technique dominates B+-trees, T-trees, and binary search trees in terms of both space and time. A cache simulation verifies that the gap is due largely to cache misses.",
"title": ""
},
{
"docid": "800fd3b3b6dfd21838006e643ba92a0d",
"text": "The primary goals in use of half-bridge LLC series-resonant converter (LLC-SRC) are high efficiency, low noise, and wide-range regulation. A voltage-clamped drive circuit for simultaneously driving both primary and secondary switches is proposed to achieve synchronous rectification (SR) at switching frequency higher than the dominant resonant frequency. No high/low-side driver circuit for half-bridge switches of LLC-SRC is required and less circuit complexity is achieved. The SR mode LLC-SRC developed for reducing output rectification losses is described along with steady-state analysis, gate drive strategy, and its experiments. Design consideration is described thoroughly so as to build up a reference for design and realization. A design example of 240W SR LLC-SRC is examined and an average efficiency as high as 95% at full load is achieved. All performances verified by simulation and experiment are close to the theoretical predictions.",
"title": ""
},
{
"docid": "9dd245f75092adc8d8bb2b151275789b",
"text": "Current model free learning-based robot grasping approaches exploit human-labeled datasets for training the models. However, there are two problems with such a methodology: (a) since each object can be grasped in multiple ways, manually labeling grasp locations is not a trivial task; (b) human labeling is biased by semantics. While there have been attempts to train robots using trial-and-error experiments, the amount of data used in such experiments remains substantially low and hence makes the learner prone to over-fitting. In this paper, we take the leap of increasing the available training data to 40 times more than prior work, leading to a dataset size of 50K data points collected over 700 hours of robot grasping attempts. This allows us to train a Convolutional Neural Network (CNN) for the task of predicting grasp locations without severe overfitting. In our formulation, we recast the regression problem to an 18-way binary classification over image patches. We also present a multi-stage learning approach where a CNN trained in one stage is used to collect hard negatives in subsequent stages. Our experiments clearly show the benefit of using large-scale datasets (and multi-stage training) for the task of grasping. We also compare to several baselines and show state-of-the-art performance on generalization to unseen objects for grasping.",
"title": ""
},
{
"docid": "d9c9dde3f5e3bf280f09d6783a573357",
"text": "We present a detection method that is able to detect a learned target and is valid for both static and moving cameras. As an application, we detect pedestrians, but could be anything if there is a large set of images of it. The data set is fed into a number of deep convolutional networks, and then, two of these models are set in cascade in order to filter the cutouts of a multi-resolution window that scans the frames in a video sequence. We demonstrate that the excellent performance of deep convolutional networks is very difficult to match when dealing with real problems, and yet we obtain competitive results.",
"title": ""
},
{
"docid": "cf24e793c307a7a6af53f160012ee926",
"text": "This work presents a single- and dual-port fully integrated millimeter-wave ultra-broadband vector network analyzer. Both circuits, realized in a commercial 0.35-μm SiGe:C technology with an ft/fmax of 170/250 GHz, cover an octave frequency bandwidth between 50-100 GHz. The presented chips can be configured to measure complex scattering parameters of external devices or determine the permittivity of different materials using an integrated millimeter-wave dielectric sensor. Both devices are based on a heterodyne architecture that achieves a receiver dynamic range of 57-72.5 dB over the complete design frequency range. Two integrated frequency synthesizer modules are included in each chip that enable the generation of the required test and local-oscillator millimeter-wave signals. A measurement 3σ statistical phase error lower than 0.3 ° is achieved. Automated measurement of changes in the dielectric properties of different materials is demonstrated using the proposed systems. The single- and dual-port network analyzer chips have a current consumption of 600 and 700 mA, respectively, drawn from a single 3.3-V supply.",
"title": ""
},
{
"docid": "de24242bef4464a0126ce3806b795ac8",
"text": "Music must first be defined and distinguished from speech, and from animal and bird cries. We discuss the stages of hominid anatomy that permit music to be perceived and created, with the likelihood of both Homo neanderthalensis and Homo sapiens both being capable. The earlier hominid ability to emit sounds of variable pitch with some meaning shows that music at its simplest level must have predated speech. The possibilities of anthropoid motor impulse suggest that rhythm may have preceded melody, though full control of rhythm may well not have come any earlier than the perception of music above. There are four evident purposes for music: dance, ritual, entertainment personal, and communal, and above all social cohesion, again on both personal and communal levels. We then proceed to how instruments began, with a brief survey of the surviving examples from the Mousterian period onward, including the possible Neanderthal evidence and the extent to which they showed “artistic” potential in other fields. We warn that our performance on replicas of surviving instruments may bear little or no resemblance to that of the original players. We continue with how later instruments, strings, and skin-drums began and developed into instruments we know in worldwide cultures today. The sound of music is then discussed, scales and intervals, and the lack of any consistency of consonant tonality around the world. This is followed by iconographic evidence of the instruments of later antiquity into the European Middle Ages, and finally, the history of public performance, again from the possibilities of early humanity into more modern times. This paper draws the ethnomusicological perspective on the entire development of music, instruments, and performance, from the times of H. neanderthalensis and H. sapiens into those of modern musical history, and it is written with the deliberate intention of informing readers who are without special education in music, and providing necessary information for inquiries into the origin of music by cognitive scientists.",
"title": ""
},
{
"docid": "0f563146a4b5db032cbe52d04930e066",
"text": "Clustering problems are central to many knowledge discovery and data mining tasks. However, most existing clustering methods can only work with fixed-dimensional representations of data patterns. In this paper, we study the clustering of data patterns that are represented as sequences or time series possibly of different lengths. We propose a model-based approach to this problem using mixtures of autoregressive moving average (ARMA) models. We derive an expectation-maximization (EM) algorithm for learning the mixing coefficients as well as the parameters of the component models. The algorithm can determine the number of clusters in the data automatically. Experiments were conducted on a number of simulated and real datasets. Results from the experiments show that our method compares favorably with another method recently proposed by others for similar time series clustering problems.",
"title": ""
},
{
"docid": "dbbdfbdefccc6691bf71aef213d03b94",
"text": "Cloud environments can be simulated using the toolkit CloudSim. By employing concepts such as physical servers in datacenters, virtual machine allocation policies, or coarse-grained models of deployed software, it focuses on a cloud provider perspective. In contrast, a cloud user who wants to migrate complex systems to the cloud typically strives to find a cloud deployment option that is best suited for its sophisticated system architecture, is interested in determining the best trade-off between costs and performance, or wants to compare runtime reconfiguration plans, for instance. We present significant enhancements of CloudSim that allow to follow this cloud user perspective and enable the frictionless integration of fine-grained application models that, to a great extent, can be derived automatically from software systems. Our quantitative evaluation demonstrates the applicability and accuracy of our approach by comparing its simulation results with actual deployments that utilize the cloud environment Amazon EC2.",
"title": ""
}
] |
scidocsrr
|
784a18cf194fc46cec9f3226040773f6
|
Random Walks for Image Segmentation
|
[
{
"docid": "9edfe5895b369c0bab8d83838661ea0a",
"text": "(57) Data collected from devices and human condition may be used to forewarn of critical events such as machine/structural failure or events from brain/heart wave data stroke. By moni toring the data, and determining what values are indicative of a failure forewarning, one can provide adequate notice of the impending failure in order to take preventive measures. This disclosure teaches a computer-based method to convert dynamical numeric data representing physical objects (un structured data) into discrete-phase-space states, and hence into a graph (Structured data) for extraction of condition change. ABSTRACT",
"title": ""
},
{
"docid": "e2f69fd023cfe69432459e8a82d4c79a",
"text": "Thresholding is one of the popular and fundamental techniques for conducting image segmentation. Many thresholding techniques have been proposed in the literature. Among them, the minimum cross entropy thresholding (MCET) have been widely adopted. Although the MCET method is effective in the bilevel thresholding case, it could be very time-consuming in the multilevel thresholding scenario for more complex image analysis. This paper first presents a recursive programming technique which reduces an order of magnitude for computing the MCET objective function. Then, a particle swarm optimization (PSO) algorithm is proposed for searching the near-optimal MCET thresholds. The experimental results manifest that the proposed PSO-based algorithm can derive multiple MCET thresholds which are very close to the optimal ones examined by the exhaustive search method. The convergence of the proposed method is analyzed mathematically and the results validate that the proposed method is efficient and is suited for real-time applications. 2006 Elsevier Inc. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "ed54e85797da22ed8488cf964371dbd9",
"text": "Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), is one of the most important problems in natural language processing. It requires to infer the logical relationship between two given sentences. While current approaches mostly focus on the interaction architectures of the sentences, in this paper, we propose to transfer knowledge from some important discourse markers to augment the quality of the NLI model. We observe that people usually use some discourse markers such as “so” or “but” to represent the logical relationship between two sentences. These words potentially have deep connections with the meanings of the sentences, thus can be utilized to help improve the representations of them. Moreover, we use reinforcement learning to optimize a new objective function with a reward defined by the property of the NLI datasets to make full use of the labels information. Experiments show that our method achieves the state-of-the-art performance on several large-scale datasets.",
"title": ""
},
{
"docid": "8cf8727c31a8bc888a23b82eee1d7dfc",
"text": "Low stiffness elements have a number of applications in Soft Robotics, from Series Elastic Actuators (SEA) to torque sensors for compliant systems.",
"title": ""
},
{
"docid": "1cc586730cf0c1fd57cf6ff7548abe24",
"text": "Researchers have proposed various methods to extract 3D keypoints from the surface of 3D mesh models over the last decades, but most of them are based on geometric methods, which lack enough flexibility to meet the requirements for various applications. In this paper, we propose a new method on the basis of deep learning by formulating the 3D keypoint detection as a regression problem using deep neural network (DNN) with sparse autoencoder (SAE) as our regression model. Both local information and global information of a 3D mesh model in multi-scale space are fully utilized to detect whether a vertex is a keypoint or not. SAE can effectively extract the internal structure of these two kinds of information and formulate highlevel features for them, which is beneficial to the regression model. Three SAEs are used to formulate the hidden layers of the DNN and then a logistic regression layer is trained to process the high-level features produced in the third SAE. Numerical experiments show that the proposed DNN based 3D keypoint detection algorithm outperforms current five state-of-the-art methods for various 3D mesh models.",
"title": ""
},
{
"docid": "421f578f0b82f6c32c796c503123265c",
"text": "Research examining the nature and consequences of social exclusion indicates that such behavior is multifaceted and has deleterious effects on the intended targets. However, relatively little research has specifically assessed the impact of such behavior on employees who perceive of themselves as being excluded within their place of work. Even less has examined gender differences in relation to exclusionary behavior. The current research investigated the moderating effect of gender on the relation between perceived exclusion at work and work-related attitudes and psychological health. Participants included 223 working students (64 men and 159 women). Hierarchical moderated regression analyses on work attitudes (supervisor satisfaction, coworker satisfaction) and psychological health supported initial predictions. At higher levels of perceived exclusion men indicated lower satisfaction and psychological health compared to women. Findings are discussed in terms of potential workplace implications and limitations of the current research.",
"title": ""
},
{
"docid": "99ca3beaeece882f9a030dcf43731a3a",
"text": "In recent years, social media have been increasingly adopted in enterprises. Enterprises use social media as an additional way to get in contact with their customers and support internal communication and collaboration. However, little research is devoted to the adoption and internal usage of social media in small and medium-sized enterprises (SMEs), which are of high social and economic importance. The purpose of this paper is to examine the adoption, usage, and benefits of social media in SMEs as well as potential concerns that may prevent a wider adoption of social media in SMEs. Therefore, a survey of decision-makers in German SMEs was conducted. Findings based on 190 responses indicate that SMEs started to use internal social media (e.g., wikis, blogs) in order to support collaboration among employees and to improve knowledge management. However, SMEs still face problems to manage adoption and to identify relevant business values. Based on our results, we derive several implications for SMEs, in particular how to overcome the obstacles to a wider adoption of social media.",
"title": ""
},
{
"docid": "d21e4e55966bac19bbed84b23360b66d",
"text": "Smart growth is an approach to urban planning that provides a framework for making community development decisions. Despite its growing use, it is not known whether smart growth can impact physical activity. This review utilizes existing built environment research on factors that have been used in smart growth planning to determine whether they are associated with physical activity or body mass. Searching the MEDLINE, Psycinfo and Web-of-Knowledge databases, 204 articles were identified for descriptive review, and 44 for a more in-depth review of studies that evaluated four or more smart growth planning principles. Five smart growth factors (diverse housing types, mixed land use, housing density, compact development patterns and levels of open space) were associated with increased levels of physical activity, primarily walking. Associations with other forms of physical activity were less common. Results varied by gender and method of environmental assessment. Body mass was largely unaffected. This review suggests that several features of the built environment associated with smart growth planning may promote important forms of physical activity. Future smart growth community planning could focus more directly on health, and future research should explore whether combinations or a critical mass of smart growth features is associated with better population health outcomes.",
"title": ""
},
{
"docid": "9d700ef057eb090336d761ebe7f6acb0",
"text": "This article presents initial results on a supervised machine learning approach to determine the semantics of noun compounds in Dutch and Afrikaans. After a discussion of previous research on the topic, we present our annotation methods used to provide a training set of compounds with the appropriate semantic class. The support vector machine method used for this classification experiment utilizes a distributional lexical semantics representation of the compound’s constituents to make its classification decision. The collection of words that occur in the near context of the constituent are considered an implicit representation of the semantics of this constituent. Fscores were reached of 47.8% for Dutch and 51.1% for Afrikaans. Keywords—compound semantics; Afrikaans; Dutch; machine learning; distributional methods",
"title": ""
},
{
"docid": "a138a545a3de355757928b58ba430f5d",
"text": "Learning analytics is a research topic that is gaining increasing popularity in recent time. It analyzes the learning data available in order to make aware or improvise the process itself and/or the outcome such as student performance. In this survey paper, we look at the recent research work that has been conducted around learning analytics, framework and integrated models, and application of various models and data mining techniques to identify students at risk and to predict student performance. Keywords— Learning Analytics, Student Performance, Student Retention, Academic analytics, Course success.",
"title": ""
},
{
"docid": "42c0f8504f26d46a4cc92d3c19eb900d",
"text": "Research into suicide prevention has been hampered by methodological limitations such as low sample size and recall bias. Recently, Natural Language Processing (NLP) strategies have been used with Electronic Health Records to increase information extraction from free text notes as well as structured fields concerning suicidality and this allows access to much larger cohorts than previously possible. This paper presents two novel NLP approaches – a rule-based approach to classify the presence of suicide ideation and a hybrid machine learning and rule-based approach to identify suicide attempts in a psychiatric clinical database. Good performance of the two classifiers in the evaluation study suggest they can be used to accurately detect mentions of suicide ideation and attempt within free-text documents in this psychiatric database. The novelty of the two approaches lies in the malleability of each classifier if a need to refine performance, or meet alternate classification requirements arises. The algorithms can also be adapted to fit infrastructures of other clinical datasets given sufficient clinical recording practice knowledge, without dependency on medical codes or additional data extraction of known risk factors to predict suicidal behaviour.",
"title": ""
},
{
"docid": "bfe89b9e50e09b2b70450d540a7931e1",
"text": "Social networking websites allow users to create and share content. Big information cascades of post resharing can form as users of these sites reshare others' posts with their friends and followers. One of the central challenges in understanding such cascading behaviors is in forecasting information outbreaks, where a single post becomes widely popular by being reshared by many users. In this paper, we focus on predicting the final number of reshares of a given post. We build on the theory of self-exciting point processes to develop a statistical model that allows us to make accurate predictions. Our model requires no training or expensive feature engineering. It results in a simple and efficiently computable formula that allows us to answer questions, in real-time, such as: Given a post's resharing history so far, what is our current estimate of its final number of reshares? Is the post resharing cascade past the initial stage of explosive growth? And, which posts will be the most reshared in the future?\n We validate our model using one month of complete Twitter data and demonstrate a strong improvement in predictive accuracy over existing approaches. Our model gives only 15% relative error in predicting final size of an average information cascade after observing it for just one hour.",
"title": ""
},
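The passage above builds on self-exciting point processes, in which each reshare temporarily raises the rate of future reshares. As a rough illustration of that idea only (not the authors' estimator; the exponential kernel and every parameter value below are hypothetical), such a process can be simulated by Ogata thinning:

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, t_max, rng):
    """Simulate a univariate Hawkes process with intensity
    lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i)) via Ogata thinning."""
    events, t = [], 0.0
    while t < t_max:
        # Between events the intensity only decays, so its current value
        # is a valid upper bound for the thinning step.
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)
        if t >= t_max:
            break
        lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_bar:  # accept with prob lambda(t)/lam_bar
            events.append(t)
    return np.array(events)

rng = np.random.default_rng(0)
cascade = simulate_hawkes(mu=0.2, alpha=0.6, beta=1.0, t_max=100.0, rng=rng)
# With alpha/beta < 1 the process is subcritical; the expected count is
# roughly mu * t_max / (1 - alpha/beta), so cascades stay finite.
print(len(cascade), "simulated reshare events")
```

Keeping the branching ratio alpha/beta below one is what makes a finite final-size prediction meaningful in this kind of model.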
{
"docid": "9aef54f4d2f3b9b20cde7ae209a102d8",
"text": "Relational learning exploits relationships among instances manifested in a network to improve the predictive performance of many network mining tasks. Due to its empirical success, it has been widely applied in myriad domains. In many cases, individuals in a network are highly idiosyncratic. They not only connect to each other with a composite of factors but also are often described by some content information of high dimensionality specific to each individual. For example in social media, as user interests are quite diverse and personal; posts by different users could differ significantly. Moreover, social content of users is often of high dimensionality which may negatively degrade the learning performance. Therefore, it would be more appealing to tailor the prediction for each individual while alleviating the issue related to the curse of dimensionality. In this paper, we study a novel problem of Personalized Relational Learning and propose a principled framework PRL to personalize the prediction for each individual in a network. Specifically, we perform personalized feature selection and employ a small subset of discriminative features customized for each individual and some common features shared by all to build a predictive model. On this account, the proposed personalized model is more human interpretable. Experiments on realworld datasets show the superiority of the proposed PRL framework over traditional relational learning methods.",
"title": ""
},
{
"docid": "6961b34ae6e5043be5f777dbd7818ebf",
"text": "Sign language is the communication medium for the deaf and the mute people. It uses hand gestures along with the facial expressions and the body language to convey the intended message. This paper proposes a novel approach of interpreting the sign language using the portable smart glove. LED-LDR pair on each finger senses the signing gesture and couples the analog voltage to the microcontroller. The microcontroller MSP430G2553 converts these analog voltage values to digital samples and the ASCII code of the letter gestured is wirelessly transmitted using the ZigBee. Upon reception, the letter corresponding to the received ASCII code is displayed on the computer and the corresponding audio is played.",
"title": ""
},
{
"docid": "6c2ac0d096c1bcaac7fd70bd36a5c056",
"text": "The purpose of this review is to illustrate the ways in which molecular neurobiological investigations will contribute to an improved understanding of drug addiction and, ultimately, to the development of more effective treatments. Such molecular studies of drug addiction are needed to establish two general types of information: (1) mechanisms of pathophysiology, identification of the changes that drugs of abuse produce in the brain that lead to addiction; and (2) mechanisms of individual risk, identification of specific genetic and environmental factors that increase or decrease an individual's vulnerability for addiction. This information will one day lead to fundamentally new approaches to the treatment and prevention of addictive disorders.",
"title": ""
},
{
"docid": "163dbb128f1205f5e31bb3db5c0c17c8",
"text": "This empirical study investigates the contribution of different types of predictors to the purchasing behaviour at an online store. We use logit modelling to predict whether or not a purchase is made during the next visit to the website using both forward and backward variable-selection techniques, as well as Furnival and Wilson’s global score search algorithm to find the best subset of predictors. We contribute to the literature by using variables from four different categories in predicting online-purchasing behaviour: (1) general clickstream behaviour at the level of the visit, (2) more detailed clickstream information, (3) customer demographics, and (4) historical purchase behaviour. The results show that predictors from all four categories are retained in the final (best subset) solution indicating that clickstream behaviour is important when determining the tendency to buy. We clearly indicate the contribution in predictive power of variables that were never used before in online purchasing studies. Detailed clickstream variables are the most important ones in classifying customers according to their online purchase behaviour. In doing so, we are able to highlight the advantage of e-commerce retailers of being able to capture an elaborate list of customer information.",
"title": ""
},
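The passage above fits logit models of purchase-at-next-visit from clickstream, demographic, and purchase-history predictors. A minimal sketch of that kind of setup, assuming synthetic data, made-up feature names, and scikit-learn's logistic regression in place of the authors' exact variable-selection procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5000
# Synthetic stand-ins for the four predictor categories named in the passage.
X = np.column_stack([
    rng.poisson(8, n),          # pages viewed this visit (general clickstream)
    rng.exponential(120, n),    # seconds on product pages (detailed clickstream)
    rng.integers(18, 70, n),    # age (demographics)
    rng.poisson(2, n),          # past purchases (historical behaviour)
])
# Hypothetical data-generating process for the binary "buys next visit" label.
logit = -3.0 + 0.05 * X[:, 0] + 0.004 * X[:, 1] + 0.4 * X[:, 3]
y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
print("coefficients:", model.coef_.round(3))
```

Forward, backward, or best-subset selection could then be layered on top by refitting the model on candidate feature subsets and comparing held-out performance.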
{
"docid": "8d13a4f52c9a72a2f53b6633f7fb4053",
"text": "The hippocampal-entorhinal system encodes a map of space that guides spatial navigation. Goal-directed behaviour outside of spatial navigation similarly requires a representation of abstract forms of relational knowledge. This information relies on the same neural system, but it is not known whether the organisational principles governing continuous maps may extend to the implicit encoding of discrete, non-spatial graphs. Here, we show that the human hippocampal-entorhinal system can represent relationships between objects using a metric that depends on associative strength. We reconstruct a map-like knowledge structure directly from a hippocampal-entorhinal functional magnetic resonance imaging adaptation signal in a situation where relationships are non-spatial rather than spatial, discrete rather than continuous, and unavailable to conscious awareness. Notably, the measure that best predicted a behavioural signature of implicit knowledge and blood oxygen level-dependent adaptation was a weighted sum of future states, akin to the successor representation that has been proposed to account for place and grid-cell firing patterns.",
"title": ""
},
{
"docid": "eba80f219c9be3690f15a2c6eb6c52ce",
"text": "While the incipient internet was largely text-based, the modern digital world is becoming increasingly multi-modal. Here, we examine multi-modal classification where one modality is discrete, e.g. text, and the other is continuous, e.g. visual representations transferred from a convolutional neural network. In particular, we focus on scenarios where we have to be able to classify large quantities of data quickly. We investigate various methods for performing multi-modal fusion and analyze their trade-offs in terms of classification accuracy and computational efficiency. Our findings indicate that the inclusion of continuous information improves performance over text-only on a range of multi-modal classification tasks, even with simple fusion methods. In addition, we experiment with discretizing the continuous features in order to speed up and simplify the fusion process even further. Our results show that fusion with discretized features outperforms text-only classification, at a fraction of the computational cost of full multimodal fusion, with the additional benefit of improved interpretability. Text classification is one of the core problems in machine learning and natural language processing (Borko and Bernick 1963; Sebastiani 2002). It plays a crucial role in important tasks ranging from document retrieval and categorization to sentiment and topic classification (Deerwester et al. 1990; Joachims 1998; Pang and Lee 2008). However, while the incipient Web was largely text-based, the recent decade has seen a surge in multi-modal content: billions of images and videos are posted and shared online every single day. That is, text is either replaced as the dominant modality, as is the case with Instagram posts or YouTube videos, or it is augmented with non-textual content, as with most of today’s web pages. This makes multi-modal classification an important problem. Here, we examine the task of multi-modal classification using neural networks. We are primarily interested in two questions: what is the best way to combine (i.e., fuse) data from different modalities, and how can we do so in the most efficient manner? We examine various efficient multi-modal fusion methods and investigate ways to speed up the fusion process. In particular, we explore discretizing the continuous features, which leads to much faster training and requires Copyright c © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. less storage, yet is still able to benefit from the inclusion of multi-modal information. To the best of our knowledge, this work constitutes the first attempt to examine the accuracy/speed trade-off in multi-modal classification; and the first to directly show the value of discretized features in this particular task. If current trends continue, the Web will become increasingly multi-modal, making the question of multi-modal classification ever more pertinent. At the same time, as the Web keeps growing, we have to be able to efficiently handle ever larger quantities of data, making it important to focus on machine learning methods that can be applied to large-scale scenarios. This work aims to examine these two questions together. Our contributions are as follows. First, we compare various multi-modal fusion methods, examine their trade-offs, and show that simpler models are often desirable. Second, we experiment with discretizing continuous features in order to speed up and simplify the fusion process even further. 
Third, we examine learned representations for discretized features and show that they yield interpretability as a beneficial side effect. The work reported here constitutes a solid and scalable baseline for other approaches to follow; our investigation of discretized features shows how multi-modal classification does not necessarily imply a large performance penalty and is feasible in large-scale scenarios.",
"title": ""
},
{
"docid": "7b6775a595cf843eac0b30ad850f8c32",
"text": "The main objectives of the study were: to investigate whether training on working memory (WM) could improve fluid intelligence, and to investigate the effects WM training had on neuroelectric (electroencephalography - EEG) and hemodynamic (near-infrared spectroscopy - NIRS) patterns of brain activity. In a parallel group experimental design, respondents of the working memory group after 30 h of training significantly increased performance on all tests of fluid intelligence. By contrast, respondents of the active control group (participating in a 30-h communication training course) showed no improvements in performance. The influence of WM training on patterns of neuroelectric brain activity was most pronounced in the theta and alpha bands. Theta and lower-1 alpha band synchronization was accompanied by increased lower-2 and upper alpha desynchronization. The hemodynamic patterns of brain activity after the training changed from higher right hemispheric activation to a balanced activity of both frontal areas. The neuroelectric as well as hemodynamic patterns of brain activity suggest that the training influenced WM maintenance functions as well as processes directed by the central executive. The changes in upper alpha band desynchronization could further indicate that processes related to long term memory were also influenced.",
"title": ""
},
{
"docid": "30007880e4759e76831b8b714456df73",
"text": "Graph classification is becoming increasingly popular due to the rapidly rising applications involving data with structural dependency. The wide spread of the graph applications and the inherent complex relationships between graph objects have made the labels of the graph data expensive and/or difficult to obtain, especially for applications involving dynamic changing graph records. While labeled graphs are limited, the copious amounts of unlabeled graphs are often easy to obtain with trivial efforts. In this paper, we propose a framework to build a stream based graph classification model by combining both labeled and unlabeled graphs. Our method, called gSLU, employs an ensemble based framework to partition graph streams into a number of graph chunks each containing some labeled and unlabeled graphs. For each individual chunk, we propose a minimum-redundancy subgraph feature selection module to select a set of informative subgraph features to build a classifier. To tackle the concept drifting in graph streams, an instance level weighting mechanism is used to dynamically adjust the instance weight, through which the subgraph feature selection can emphasize on difficult graph samples. The classifiers built from different graph chunks form an ensemble for graph stream classification. Experiments on real-world graph streams demonstrate clear benefits of using minimum-redundancy subgraph features to build accurate classifiers. By employing instance level weighting, our graph ensemble model can effectively adapt to the concept drifting in the graph stream for classification.",
"title": ""
}
] |
scidocsrr
|
b94bd40a8af18cfa8ff7809551de96ba
|
Prediction Reweighting for Domain Adaptation
|
[
{
"docid": "04ba17b4fc6b506ee236ba501d6cb0cf",
"text": "We propose a family of learning algorithms based on a new form f regularization that allows us to exploit the geometry of the marginal distribution. We foc us on a semi-supervised framework that incorporates labeled and unlabeled data in a general-p u pose learner. Some transductive graph learning algorithms and standard methods including Suppor t Vector Machines and Regularized Least Squares can be obtained as special cases. We utilize pr op rties of Reproducing Kernel Hilbert spaces to prove new Representer theorems that provide theor e ical basis for the algorithms. As a result (in contrast to purely graph-based approaches) we ob tain a natural out-of-sample extension to novel examples and so are able to handle both transductive and truly semi-supervised settings. We present experimental evidence suggesting that our semiupervised algorithms are able to use unlabeled data effectively. Finally we have a brief discuss ion of unsupervised and fully supervised learning within our general framework.",
"title": ""
}
] |
[
{
"docid": "ce305309d82e2d2a3177852c0bb08105",
"text": "BACKGROUND\nEmpathizing is a specific component of social cognition. Empathizing is also specifically impaired in autism spectrum condition (ASC). These are two dimensions, measurable using the Empathy Quotient (EQ) and the Autism Spectrum Quotient (AQ). ASC also involves strong systemizing, a dimension measured using the Systemizing Quotient (SQ). The present study examined the relationship between the EQ, AQ and SQ. The EQ and SQ have been used previously to test for sex differences in 5 'brain types' (Types S, E, B and extremes of Type S or E). Finally, people with ASC have been conceptualized as an extreme of the male brain.\n\n\nMETHOD\nWe revised the SQ to avoid a traditionalist bias, thus producing the SQ-Revised (SQ-R). AQ and EQ were not modified. All 3 were administered online.\n\n\nSAMPLE\nStudents (723 males, 1038 females) were compared to a group of adults with ASC group (69 males, 56 females).\n\n\nAIMS\n(1) To report scores from the SQ-R. (2) To test for SQ-R differences among students in the sciences vs. humanities. (3) To test if AQ can be predicted from EQ and SQ-R scores. (4) To test for sex differences on each of these in a typical sample, and for the absence of a sex difference in a sample with ASC if both males and females with ASC are hyper-masculinized. (5) To report percentages of males, females and people with an ASC who show each brain type.\n\n\nRESULTS\nAQ score was successfully predicted from EQ and SQ-R scores. In the typical group, males scored significantly higher than females on the AQ and SQ-R, and lower on the EQ. The ASC group scored higher than sex-matched controls on the SQ-R, and showed no sex differences on any of the 3 measures. More than twice as many typical males as females were Type S, and more than twice as many typical females as males were Type E. The majority of adults with ASC were Extreme Type S, compared to 5% of typical males and 0.9% of typical females. The EQ had a weak negative correlation with the SQ-R.\n\n\nDISCUSSION\nEmpathizing is largely but not completely independent of systemizing. The weak but significant negative correlation may indicate a trade-off between them. ASC involves impaired empathizing alongside intact or superior systemizing. Future work should investigate the biological basis of these dimensions, and the small trade-off between them.",
"title": ""
},
{
"docid": "a7d3d2f52a45cdb378863d4e8d96bc27",
"text": "This paper presents a three-phase single-stage bidirectional isolated matrix based AC-DC converter for energy storage. The matrix (3 × 1) topology directly converts the three-phase line voltages into high-frequency AC voltage which is subsequently, processed using a high-frequency transformer followed by a controlled rectifier. A modified Space Vector Modulation (SVM) based switching scheme is proposed to achieve high input power quality with high power conversion efficiency. Compared to the conventional two stage converter, the proposed converter provides single-stage conversion resulting in higher power conversion efficiency and higher power density. The operating principles of the proposed converter in both AC-DC and DC-AC mode are explained followed by steady state analysis. Simulation results are presented for 230 V, 50 Hz to 48 V isolated bidirectional converter at 2 kW output power to validate the theoretical claims.",
"title": ""
},
{
"docid": "f90967525247030b9da04fc4c37b6c14",
"text": "Vehicle tracking using airborne wide-area motion imagery (WAMI) for monitoring urban environments is very challenging for current state-of-the-art tracking algorithms, compared to object tracking in full motion video (FMV). Characteristics that constrain performance in WAMI to relatively short tracks range from the limitations of the camera sensor array including low frame rate and georegistration inaccuracies, to small target support size, presence of numerous shadows and occlusions from buildings, continuously changing vantage point of the platform, presence of distractors and clutter among other confounding factors. We describe our Likelihood of Features Tracking (LoFT) system that is based on fusing multiple sources of information about the target and its environment akin to a track-before-detect approach. LoFT uses image-based feature likelihood maps derived from a template-based target model, object and motion saliency, track prediction and management, combined with a novel adaptive appearance target update model. Quantitative measures of performance are presented using a set of manually marked objects in both WAMI, namely Columbus Large Image Format (CLIF), and several standard FMV sequences. Comparison with a number of single object tracking systems shows that LoFT outperforms other visual trackers, including state-of-the-art sparse representation and learning based methods, by a significant amount on the CLIF sequences and is competitive on FMV sequences.",
"title": ""
},
{
"docid": "96a96b056a1c49d09d1ef6873eb80c6f",
"text": "Raman and Grossmann [Raman, R., & Grossmann, I.E. (1994). Modeling and computational techniques for logic based integer programming. Computers and Chemical Engineering, 18(7), 563–578] and Lee and Grossmann [Lee, S., & Grossmann, I.E. (2000). New algorithms for nonlinear generalized disjunctive programming. Computers and Chemical Engineering, 24, 2125–2141] have developed a reformulation of Generalized Disjunctive Programming (GDP) problems that is based on determining the convex hull of each disjunction. Although the with the quires n order to hod relies m an LP else until ng, retrofit utting plane",
"title": ""
},
{
"docid": "6b31dd39704f52f59f360b2608ce137e",
"text": "A quantitative analysis of a large collection of expert-rated web sites reveals that page-level metrics can accurately predict if a site will be highly rated. The analysis also provides empirical evidence that important metrics, including page composition, page formatting, and overall page characteristics, differ among web site categories such as education, community, living, and finance. These results provide an empirical foundation for web site design guidelines and also suggest which metrics can be most important for evaluation via user studies.",
"title": ""
},
{
"docid": "a7373d69f5ff9d894a630cc240350818",
"text": "The Capability Maturity Model for Software (CMM), developed by the Software Engineering Institute, and the ISO 9000 series of standards, developed by the International Standards Organization, share a common concern with quality and process management. The two are driven by similar concerns and intuitively correlated. The purpose of this report is to contrast the CMM and ISO 9001, showing both their differences and their similarities. The results of the analysis indicate that, although an ISO 9001-compliant organization would not necessarily satisfy all of the level 2 key process areas, it would satisfy most of the level 2 goals and many of the level 3 goals. Because there are practices in the CMM that are not addressed in ISO 9000, it is possible for a level 1 organization to receive ISO 9001 registration; similarly, there are areas addressed by ISO 9001 that are not addressed in the CMM. A level 3 organization would have little difficulty in obtaining ISO 9001 certification, and a level 2 organization would have significant advantages in obtaining certification.",
"title": ""
},
{
"docid": "3f4d83525145a963c87167e3e02136a6",
"text": "Using the GTZAN Genre Collection [1], we start with a set of 1000 30 second song excerpts subdivided into 10 pre-classified genres: Blues, Classical, Country, Disco, Hip-Hop, Jazz, Metal, Pop, Reggae, and Rock. We downsampled to 4000 Hz, and further split each excerpt into 5-second clips For each clip, we compute a spectrogram using Fast Fourier Transforms, giving us 22 timestep vectors of dimensionality 513 for each clip. Spectrograms separate out component audio signals at different frequencies from a raw audio signal, and provide us with a tractable, loosely structured feature set for any given audio clip that is well-suited for deep learning techniques. (See, for example, the spectrogram produced by a jazz excerpt below) Models",
"title": ""
},
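A small sketch of the preprocessing pipeline the passage above describes (downsampling to 4000 Hz, 5-second clips, FFT spectrograms). The window length and hop size below are assumptions chosen so the output matches the stated 22x513 shape; they are not settings taken from the report:

```python
import numpy as np
from scipy.signal import resample_poly, spectrogram

def clip_spectrograms(audio, sr, target_sr=4000, clip_seconds=5):
    """Downsample a mono signal and return one log-spectrogram per clip."""
    audio = resample_poly(audio, target_sr, sr)          # e.g. 22050 Hz -> 4000 Hz
    clip_len = target_sr * clip_seconds
    clips = [audio[i:i + clip_len]
             for i in range(0, len(audio) - clip_len + 1, clip_len)]
    specs = []
    for clip in clips:
        # nperseg=1024 gives 513 frequency bins; the hop of 896 samples
        # yields 22 time frames for a 20000-sample clip.
        f, t, S = spectrogram(clip, fs=target_sr, nperseg=1024,
                              noverlap=1024 - 896)
        specs.append(np.log1p(S))                        # log-compress magnitudes
    return specs

# Usage with a synthetic 30-second excerpt sampled at 22050 Hz:
sr = 22050
excerpt = np.sin(2 * np.pi * 440 * np.arange(30 * sr) / sr)
print(len(clip_spectrograms(excerpt, sr)), "clips")     # 6 five-second clips
```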
{
"docid": "8a243d17a61f75ef9a881af120014963",
"text": "This paper presents a Deep Mayo Predictor model for predicting the outcomes of the matches in IPL 9 being played in April – May, 2016. The model has three components which are based on multifarious considerations emerging out of a deeper analysis of T20 cricket. The models are created using Data Analytics methods from machine learning domain. The prediction accuracy obtained is high as the Mayo Predictor Model is able to correctly predict the outcomes of 39 matches out of the 56 matches played in the league stage of the IPL IX tournament. Further improvement in the model can be attempted by using a larger training data set than the one that has been utilized in this work. No such effort at creating predictor models for cricket matches has been reported in the literature.",
"title": ""
},
{
"docid": "49b0cf976357d0c943ff003526ffff1f",
"text": "Transcranial direct current stimulation (tDCS) is a promising tool for neurocognitive enhancement. Several studies have shown that just a single session of tDCS over the left dorsolateral pFC (lDLPFC) can improve the core cognitive function of working memory (WM) in healthy adults. Yet, recent studies combining multiple sessions of anodal tDCS over lDLPFC with verbal WM training did not observe additional benefits of tDCS in subsequent stimulation sessions nor transfer of benefits to novel WM tasks posttraining. Using an enhanced stimulation protocol as well as a design that included a baseline measure each day, the current study aimed to further investigate the effects of multiple sessions of tDCS on WM. Specifically, we investigated the effects of three subsequent days of stimulation with anodal (20 min, 1 mA) versus sham tDCS (1 min, 1 mA) over lDLPFC (with a right supraorbital reference) paired with a challenging verbal WM task. WM performance was measured with a verbal WM updating task (the letter n-back) in the stimulation sessions and several WM transfer tasks (different letter set n-back, spatial n-back, operation span) before and 2 days after stimulation. Anodal tDCS over lDLPFC enhanced WM performance in the first stimulation session, an effect that remained visible 24 hr later. However, no further gains of anodal tDCS were observed in the second and third stimulation sessions, nor did benefits transfer to other WM tasks at the group level. Yet, interestingly, post hoc individual difference analyses revealed that in the anodal stimulation group the extent of change in WM performance on the first day of stimulation predicted pre to post changes on both the verbal and the spatial transfer task. Notably, this relationship was not observed in the sham group. Performance of two individuals worsened during anodal stimulation and on the transfer tasks. Together, these findings suggest that repeated anodal tDCS over lDLPFC combined with a challenging WM task may be an effective method to enhance domain-independent WM functioning in some individuals, but not others, or can even impair WM. They thus call for a thorough investigation into individual differences in tDCS respondence as well as further research into the design of multisession tDCS protocols that may be optimal for boosting cognition across a wide range of individuals.",
"title": ""
},
{
"docid": "237437eae6a6154fb3b32c4c6c1fed07",
"text": "Ontology is playing an increasingly important role in knowledge management and the Semantic Web. This study presents a novel episode-based ontology construction mechanism to extract domain ontology from unstructured text documents. Additionally, fuzzy numbers for conceptual similarity computing are presented for concept clustering and taxonomic relation definitions. Moreover, concept attributes and operations can be extracted from episodes to construct a domain ontology, while non-taxonomic relations can be generated from episodes. The fuzzy inference mechanism is also applied to obtain new instances for ontology learning. Experimental results show that the proposed approach can effectively construct a Chinese domain ontology from unstructured text documents. 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "717489cca3ec371bfcc66efd1e6c1185",
"text": "This paper presents an algorithm for calibrating strapdown magnetometers in the magnetic field domain. In contrast to the traditional method of compass swinging, which computes a series of heading correction parameters and, thus, is limited to use with two-axis systems, this algorithm estimates magnetometer output errors directly. Therefore, this new algorithm can be used to calibrate a full three-axis magnetometer triad. The calibration algorithm uses an iterated, batch least squares estimator which is initialized using a novel two-step nonlinear estimator. The algorithm is simulated to validate convergence characteristics and further validated on experimental data collected using a magnetometer triad. It is shown that the post calibration residuals are small and result in a system with heading errors on the order of 1 to 2 degrees.",
"title": ""
},
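The passage above calibrates a magnetometer triad with an iterated batch least-squares estimator in the field domain. A much simpler sketch of the same idea, assuming only a hard-iron bias and a spherical field locus (the paper's full error model and its two-step nonlinear initializer are not reproduced here):

```python
import numpy as np

def estimate_bias(samples):
    """Hard-iron bias estimate: fit a sphere |m - b|^2 = R^2 to raw
    magnetometer samples; the fit is linear in (b, R^2 - |b|^2)."""
    # Rewrite the sphere equation as  m.m = 2 b.m + c  with  c = R^2 - |b|^2.
    A = np.column_stack([2 * samples, np.ones(len(samples))])
    y = np.sum(samples**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    b = sol[:3]
    radius = np.sqrt(sol[3] + b @ b)
    return b, radius

# Synthetic check: points on a 50 uT sphere offset by a known bias plus noise.
rng = np.random.default_rng(3)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
true_bias = np.array([12.0, -7.5, 3.0])
samples = 50.0 * dirs + true_bias + rng.normal(0, 0.5, (500, 3))
print(estimate_bias(samples))  # should recover roughly (12, -7.5, 3) and 50
```

Scale-factor and misalignment errors would turn the sphere into an ellipsoid, which is why the paper's estimator works with a richer parameterization than this sketch.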
{
"docid": "5273e9fea51c85651255de7c253066a0",
"text": "This paper presents SimpleDS, a simple and publicly available dialogue system trained with deep reinforcement learning. In contrast to previous reinforcement learning dialogue systems, this system avoids manual feature engineering by performing action selection directly from raw text of the last system and (noisy) user responses. Our initial results, in the restaurant domain, report that it is indeed possible to induce reasonable behaviours with such an approach that aims for higher levels of automation in dialogue control for intelligent interactive agents.",
"title": ""
},
{
"docid": "db37b12a1e816c15e8719a7048ba3687",
"text": "This study examined the impact of Internet addiction (IA) on life satisfaction and life engagement in young adults. A total of 210 University students participated in the study. Multivariate regression analysis showed that the model was significant and contributes 8% of the variance in life satisfaction (Adjusted R=.080, p<.001) and 2.8% of the variance in life engagement (Adjusted R=.028, p<.05). Unstandardized regression coefficient (B) indicates that one unit increase in raw score of Internet addiction leads to .168 unit decrease in raw score of life satisfaction (B=-.168, p<.001) and .066 unit decrease in raw score of life engagement (B=-.066, p<.05). Means and standard deviations of the scores on IA and its dimensions showed that the most commonly given purposes of Internet are online discussion, adult chatting, online gaming, chatting, cyber affair and watching pornography. Means and standard deviations of the scores on IA and its dimensions across different types of social networking sites further indicate that people who frequently participate in skype, twitter and facebook have relatively higher IA score. Correlations of different aspects of Internet use with major variables indicate significant and positive correlations of Internet use with IA, neglect of duty and virtual fantasies. Implications of the findings for theory, research and practice are discussed.",
"title": ""
},
{
"docid": "bf0531b03cc36a69aca1956b21243dc6",
"text": "Sound of their breath fades with the light. I think about the loveless fascination, Under the milky way tonight. Lower the curtain down in memphis, Lower the curtain down all right. I got no time for private consultation, Under the milky way tonight. Wish I knew what you were looking for. Might have known what you would find. And it's something quite peculiar, Something thats shimmering and white. It leads you here despite your destination, Under the milky way tonight (chorus) Preface This Master's Thesis concludes my studies in Human Aspects of Information Technology (HAIT) at Tilburg University. It describes the development, implementation, and analysis of an automatic mood classifier for music. I would like to thank those who have contributed to and supported the contents of the thesis. Special thanks goes to my supervisor Menno van Zaanen for his dedication and support during the entire process of getting started up to the final results. Moreover, I would like to express my appreciation to Fredrik Mjelle for providing the user-tagged instances exported out of the MOODY database, which was used as the dataset for the experiments. Furthermore, I would like to thank Toine Bogers for pointing me out useful website links regarding music mood classification and sending me papers with citations and references. I would also like to thank Michael Voong for sending me his papers on music mood classification research, Jaap van den Herik for his support and structuring of my writing and thinking. I would like to recognise Eric Postma and Marieke van Erp for their time assessing the thesis as members of the examination committee. Finally, I would like to express my gratitude to my family for their enduring support. Abstract This research presents the outcomes of research into using the lingual part of music for building an automatic mood classification system. Using a database consisting of extracted lyrics and user-tagged mood attachments, we built a classifier based on machine learning techniques. By testing the classification system on various mood frameworks (or dimensions) we examined to what extent it is possible to attach mood tags automatically to songs based on lyrics only. Furthermore, we examined to what extent the linguistic part of music revealed adequate information for assigning a mood category and which aspects of mood can be classified best. Our results show that the use of term frequencies and tf*idf values provide a valuable source of …",
"title": ""
},
{
"docid": "c3c1d2ec9e60300043070ea93a3c3e1b",
"text": "chology Today, March. Sherif, C. W., Sherif, W., and Nebergall, R. (1965). Attitude and Altitude Change. Philadelphia: W. B. Saunders. Stewart, E. C., and Bennett, M. J. (1991). American Cultural Patterns. Yarmouth, Maine: Intercultural Press. Tai, E. (1986). Modification of the Western Approach to Intercultural Communication for the Japanese Context. Unpublished master's thesis, Portland State University, Portland, Oregon. Thaler, A. (1970). Future Shock. New York: Bantam. Ursin, H. (1978). \"Activation, Coping and Psychosomatics.\" In E. Baade, S. Levine, and H. Ursin (Eds ) Psychobiology of Stress: A Study of Coping Men. New York: Academic Press. A Model of Intercultural Communication Competence",
"title": ""
},
{
"docid": "c0a8acf5741567077c8e7dc188033bc4",
"text": "The framework of dynamic movement primitives (DMPs) contains many favorable properties for the execution of robotic trajectories, such as indirect dependence on time, response to perturbations, and the ability to easily modulate the given trajectories, but the framework in its original form remains constrained to the kinematic aspect of the movement. In this paper, we bridge the gap to dynamic behavior by extending the framework with force/torque feedback. We propose and evaluate a modulation approach that allows interaction with objects and the environment. Through the proposed coupling of originally independent robotic trajectories, the approach also enables the execution of bimanual and tightly coupled cooperative tasks. We apply an iterative learning control algorithm to learn a coupling term, which is applied to the original trajectory in a feed-forward fashion and, thus, modifies the trajectory in accordance to the desired positions or external forces. A stability analysis and results of simulated and real-world experiments using two KUKA LWR arms for bimanual tasks and interaction with the environment are presented. By expanding on the framework of DMPs, we keep all the favorable properties, which is demonstrated with temporal modulation and in a two-agent obstacle avoidance task.",
"title": ""
},
{
"docid": "4c8e08daa7310e0a21c234565a033e56",
"text": "Using a cross-panel design and data from 2 successive cohorts of college students (N = 357), we examined the stability of maladaptive perfectionism, procrastination, and psychological distress across 3 time points within a college semester. Each construct was substantially stable over time, with procrastination being especially stable. We also tested, but failed to support, a mediational model with Time 2 (mid-semester) procrastination as a hypothesized mechanism through which Time 1 (early-semester) perfectionism would affect Time 3 (end-semester) psychological distress. An alternative model with Time 2 perfectionism as a mediator of the procrastination-distress association also was not supported. Within-time analyses revealed generally consistent strength of effects in the correlations between the 3 constructs over the course of the semester. A significant interaction effect also emerged. Time 1 procrastination had no effect on otherwise high levels of psychological distress at the end of the semester for highly perfectionistic students, but at low levels of Time 1 perfectionism, the most distressed students by the end of the term were those who were more likely to have procrastinated earlier in the semester. Implications of the stability of the constructs and their association over time, as well as the moderating effects of procrastination, are discussed in the context of maladaptive perfectionism and problematic procrastination.",
"title": ""
},
{
"docid": "b84ffcc2c642896f88b261d983d47021",
"text": "Most successful works in simultaneous localization and mapping (SLAM) aim to build a metric map under a probabilistic viewpoint employing Bayesian filtering techniques. This work introduces a new hybrid metric-topological approach, where the aim is to reconstruct the path of the robot in a hybrid continuous-discrete state space which naturally combines metric and topological maps. Our fundamental contributions are: (i) the estimation of the topological path, an improvement similar to that of Rao-Blackwellized particle filters (RBPF) and FastSLAM in the field of metric map building; and (ii) the application of grounded methods to the abstraction of topology (including loop closure) from raw sensor readings. It is remarkable that our approach could be still represented as a Bayesian inference problem, becoming an extension of purely metric SLAM. Besides providing the formal definitions and the basics for our approach, we also describe a practical implementation aimed to real-time operation. Promising experimental results mapping large environments with multiple nested loops (~30.000 m2, ~2Km robot path) validate our work.",
"title": ""
},
{
"docid": "955882547c8d7d455f3d0a6c2bccd2b4",
"text": "Recently there has been quite a number of independent research activities that investigate the potentialities of integrating social networking concepts into Internet of Things (IoT) solutions. The resulting paradigm, named Social Internet of Things (SIoT), has the potential to support novel applications and networking services for the IoT in more effective and efficient ways. In this context, the main contributions of this paper are the following: i) we identify appropriate policies for the establishment and the management of social relationships between objects in such a way that the resulting social network is navigable; ii) we describe a possible architecture for the IoT that includes the functionalities required to integrate things into a social network; iii) we analyze the characteristics of the SIoT network structure by means of simulations.",
"title": ""
}
] |
scidocsrr
|
6e90b4d6427dc3df690870f10108794d
|
RFID- based supply chain traceability system
|
[
{
"docid": "259c17740acd554463731d3e1e2912eb",
"text": "In recent years, radio frequency identification technology has moved from obscurity into mainstream applications that help speed the handling of manufactured goods and materials. RFID enables identification from a distance, and unlike earlier bar-code technology, it does so without requiring a line of sight. In this paper, the author introduces the principles of RFID, discusses its primary technologies and applications, and reviews the challenges organizations will face in deploying this technology.",
"title": ""
},
{
"docid": "9c751a7f274827e3d8687ea520c6e9a9",
"text": "Radio frequency identification systems with passive tags are powerful tools for object identification. However, if multiple tags are to be identified simultaneously, messages from the tags can collide and cancel each other out. Therefore, multiple read cycles have to be performed in order to achieve a high recognition rate. For a typical stochastic anti-collision scheme, we show how to determine the optimal number of read cycles to perform under a given assurance level determining the acceptable rate of missed tags. This yields an efficient procedure for object identification. We also present results on the performance of an implementation.",
"title": ""
}
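The passage above determines how many read cycles a stochastic anti-collision scheme needs to meet a given assurance level on missed tags. A back-of-the-envelope sketch under the simplifying assumption that each tag is identified independently with a fixed probability per cycle, which is coarser than the paper's actual model:

```python
import math

def required_cycles(p_identify, n_tags, miss_tolerance):
    """Smallest number of read cycles k such that the probability that any of
    n_tags tags is still unread after k cycles stays below miss_tolerance,
    assuming independent per-cycle identification with probability p_identify."""
    # P(a given tag missed after k cycles) = (1 - p)^k
    # P(any of n tags missed)             <= n * (1 - p)^k   (union bound)
    k = math.log(miss_tolerance / n_tags) / math.log(1.0 - p_identify)
    return math.ceil(k)

# e.g. 50 tags, a 60% chance of reading each tag per cycle, 1% acceptable miss rate
print(required_cycles(p_identify=0.6, n_tags=50, miss_tolerance=0.01))  # -> 10
```

In a real framed-ALOHA scheme the per-cycle identification probability itself depends on the frame size and the (unknown) tag population, which is the part the paper models more carefully.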
] |
[
{
"docid": "dfc2a459de8400f22969477f28178bd5",
"text": "The requirements of three-dimensional (3-D) road objects have increased for various applications, such as geographic information systems and intelligent transportation systems. The use of mobile lidar systems (MLSs) running along road corridors is an effective way to collect accurate road inventories, but MLS feature extraction is challenged by the blind scanning characteristics of lidar systems and the huge amount of data involved; therefore, an automatic process for MLS data is required to improve efficiency of feature extraction. This study developed a coarse-to-fine approach for the extraction of pole-like road objects from MLS data. The major work consists of data preprocessing, coarse-to-fine segmentation, and detection. In data preprocessing, points from different trajectories were reorganized into road parts, and building facades alongside road corridors were removed to reduce their influence. Then, a coarse-to-fine computational framework for the detection of pole-like objects that segments point clouds was proposed. The results show that the pole-like object detection rate for the proposed method was about 90%, and the proposed coarse-to-fine framework was more efficient than the single-scale framework. These results indicate that the proposed method can be used to effectively extract pole-like road objects from MLS data.",
"title": ""
},
{
"docid": "66805d6819e3c4b5f7c71b7a851c7371",
"text": "We consider classification of email messages as to whether or not they contain certain \"email acts\", such as a request or a commitment. We show that exploiting the sequential correlation among email messages in the same thread can improve email-act classification. More specifically, we describe a new text-classification algorithm based on a dependency-network based collective classification method, in which the local classifiers are maximum entropy models based on words and certain relational features. We show that statistically significant improvements over a bag-of-words baseline classifier can be obtained for some, but not all, email-act classes. Performance improvements obtained by collective classification appears to be consistent across many email acts suggested by prior speech-act theory.",
"title": ""
},
{
"docid": "afaed9813ab63d0f5a23648a1e0efadb",
"text": "We proposed novel airway segmentation methods in volumetric chest computed tomography (CT) using 2.5D convolutional neural net (CNN) and 3D CNN. A method with 2.5D CNN segments airways by voxel-by-voxel classification based on patches which are from three adjacent slices in each of the orthogonal directions including axial, sagittal, and coronal slices around each voxel, while 3D CNN segments by 3D patch-based semantic segmentation using modified 3D U-Net. The extra-validation of our proposed method was demonstrated in 20 test datasets of the EXACT’09 challenge. The detected tree length and the false positive rate was 60.1%, 4.56% for 2.5D CNN and 61.6%, 3.15% for 3D CNN. Our fully automated (end-to-end) segmentation method could be applied in radiological practice.",
"title": ""
},
{
"docid": "cb71e8b2bb1eeaad91a2036a9d3828ac",
"text": "This paper surveys methods for simplifying and approximating polygonal surfaces. A polygonal surface is a piecewiselinear surface in 3-D defined by a set of polygons; typically a set of triangles. Methods from computer graphics, computer vision, cartography, computational geometry, and other fields are classified, summarized, and compared both practically and theoretically. The surface types range from height fields (bivariate functions), to manifolds, to nonmanifold self-intersecting surfaces. Piecewise-linear curve simplification is also briefly surveyed. This work was supported by ARPA contract F19628-93-C-0171 and NSF Young Investigator award CCR-9357763. Report Documentation Page Form Approved OMB No. 0704-0188 Public reporting burden for the collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington VA 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to a penalty for failing to comply with a collection of information if it does not display a currently valid OMB control number.",
"title": ""
},
{
"docid": "2bd8a66a3e3cfafc9b13fd7ec47e86fc",
"text": "Psidium guajava Linn. (Guava) is used not only as food but also as folk medicine in subtropical areas around the world because of its pharmacologic activities. In particular, the leaf extract of guava has traditionally been used for the treatment of diabetes in East Asia and other countries. Many pharmacological studies have demonstrated the ability of this plant to exhibit antioxidant, hepatoprotective, anti-allergy, antimicrobial, antigenotoxic, antiplasmodial, cytotoxic, antispasmodic, cardioactive, anticough, antidiabetic, antiinflamatory and antinociceptive activities, supporting its traditional uses. Suggesting a wide range of clinical applications for the treatment of infantile rotaviral enteritis, diarrhoea and diabetes.",
"title": ""
},
{
"docid": "f987c0af2814b3f7d75fc33c22530936",
"text": "All I Really Need to Know I Learned in Kindergarten By Robert Fulghum (Fulghum 1988) Share everything. Play fair. Don’t hit people. Put things back where you found them. Clean up your own mess. Don’t take things that aren’t yours. Say you’re sorry when you hurt somebody. Wash your hands before you eat. Flush. Warm cookies and cold milk are good for you. Live a balanced life – learn some and think some and draw and paint and sing and dance and play and work every day some. Take a nap every afternoon. When you go out into the world, watch out for traffic, hold hands and stick together. Be aware of wonder. Introduction Pair programming is a style of programming in which two programmers work side-by-side at one computer, continuously collaborating on the same design, algorithm, code or test. As discussed below, use of this practice has been demonstrated to improve productivity and quality of software products. Additionally, based on a survey(Williams 1999) of pair programmers (hereafter referred to as “the pair programming survey\"), 100% agreed that they had more confidence in their solution when pair programming than when they program alone. Likewise, 96% agreed that they enjoy their job more than when programming alone.",
"title": ""
},
{
"docid": "6a19410817766b052a2054b2cb3efe42",
"text": "Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan—where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (‘bots’), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.",
"title": ""
},
{
"docid": "724845cb5c9f531e09f2c8c3e6f52fe4",
"text": "Deep learning has given way to a new era of machine learning, apart from computer vision. Convolutional neural networks have been implemented in image classification, segmentation and object detection. Despite recent advancements, we are still in the very early stages and have yet to settle on best practices for network architecture in terms of deep design, small in size and a short training time. In this work, we propose a very deep neural network comprised of 16 Convolutional layers compressed with the Fire Module adapted from the SQUEEZENET model. We also call for the addition of residual connections to help suppress degradation. This model can be implemented on almost every neural network model with fully incorporated residual learning. This proposed model Residual-Squeeze-VGG16 (ResSquVGG16) trained on the large-scale MIT Places365-Standard scene dataset. In our tests, the model performed with accuracy similar to the pre-trained VGG16 model in Top-1 and Top-5 validation accuracy while also enjoying a 23.86% reduction in training time and an 88.4% reduction in size. In our tests, this model was trained from scratch. Keywords— Convolutional Neural Networks; VGG16; Residual learning; Squeeze Neural Networks; Residual-Squeeze-VGG16; Scene Classification; ResSquVGG16.",
"title": ""
},
{
"docid": "bbcd26c47892476092a779869be7040c",
"text": "This article reviews the thyroid system, mainly from a mammalian standpoint. However, the thyroid system is highly conserved among vertebrate species, so the general information on thyroid hormone production and feedback through the hypothalamic-pituitary-thyroid (HPT) axis should be considered for all vertebrates, while species-specific differences are highlighted in the individual articles. This background article begins by outlining the HPT axis with its components and functions. For example, it describes the thyroid gland, its structure and development, how thyroid hormones are synthesized and regulated, the role of iodine in thyroid hormone synthesis, and finally how the thyroid hormones are released from the thyroid gland. It then progresses to detail areas within the thyroid system where disruption could occur or is already known to occur. It describes how thyroid hormone is transported in the serum and into the tissues on a cellular level, and how thyroid hormone is metabolized. There is an in-depth description of the alpha and beta thyroid hormone receptors and their functions, including how they are regulated, and what has been learned from the receptor knockout mouse models. The nongenomic actions of thyroid hormone are also described, such as in glucose uptake, mitochondrial effects, and its role in actin polymerization and vesicular recycling. The article discusses the concept of compensation within the HPT axis and how this fits into the paradigms that exist in thyroid toxicology/endocrinology. There is a section on thyroid hormone and its role in mammalian development: specifically, how it affects brain development when there is disruption to the maternal, the fetal, the newborn (congenital), or the infant thyroid system. Thyroid function during pregnancy is critical to normal development of the fetus, and several spontaneous mutant mouse lines are described that provide research tools to understand the mechanisms of thyroid hormone during mammalian brain development. Overall this article provides a basic understanding of the thyroid system and its components. The complexity of the thyroid system is clearly demonstrated, as are new areas of research on thyroid hormone physiology and thyroid hormone action developing within the field of thyroid endocrinology. This review provides the background necessary to review the current assays and endpoints described in the following articles for rodents, fishes, amphibians, and birds.",
"title": ""
},
{
"docid": "8437f899a40cf54489b8e86870c32616",
"text": "Lifelong machine learning (or lifelong learning) is an advanced machine learning paradigm that learns continuously, accumulates the knowledge learned in previous tasks, and uses it to help future learning. In the process, the learner becomes more and more knowledgeable and effective at learning. This learning ability is one of the hallmarks of human intelligence. However, the current dominant machine learning paradigm learns in isolation: given a training dataset, it runs a machine learning algorithm on the dataset to produce a model. It makes no attempt to retain the learned knowledge and use it in future learning. Although this isolated learning paradigm has been very successful, it requires a large number of training examples, and is only suitable for well-defined and narrow tasks. In comparison, we humans can learn effectively with a few examples because we have accumulated so much knowledge in the past which enables us to learn with little data or effort. Furthermore, we are able to discover new problems in the usage process of the learned knowledge or model. This enables us to learn more and more continually in a self-motivated manner. We can also adapt our previous konwledge to solve unfamilar problems and learn in the process. Lifelong learning aims to achieve these capabilities. As statistical machine learning matures, it is time to make a major effort to break the isolated learning tradition and to study lifelong learning to bring machine learning to a new height. Applications such as intelligent assistants, chatbots, and physical robots that interact with humans and systems in real-life environments are also calling for such lifelong learning capabilities. Without the ability to accumulate the learned knowledge and use it to learn more knowledge incrementally, a system will probably never be truly intelligent. This book serves as an introductory text and survey to lifelong learning.",
"title": ""
},
{
"docid": "d063f8a20e2b6522fe637794e27d7275",
"text": "Bag-of-Words (BoW) model based on SIFT has been widely used in large scale image retrieval applications. Feature quantization plays a crucial role in BoW model, which generates visual words from the high dimensional SIFT features, so as to adapt to the inverted file structure for indexing. Traditional feature quantization approaches suffer several problems: 1) high computational cost---visual words generation (codebook construction) is time consuming especially with large amount of features; 2) limited reliability---different collections of images may produce totally different codebooks and quantization error is hard to be controlled; 3) update inefficiency--once the codebook is constructed, it is not easy to be updated. In this paper, a novel feature quantization algorithm, scalar quantization, is proposed. With scalar quantization, a SIFT feature is quantized to a descriptive and discriminative bit-vector, of which the first tens of bits are taken out as code word. Our quantizer is independent of collections of images. In addition, the result of scalar quantization naturally lends itself to adapt to the classic inverted file structure for image indexing. Moreover, the quantization error can be flexibly reduced and controlled by efficiently enumerating nearest neighbors of code words.\n The performance of scalar quantization has been evaluated in partial-duplicate Web image search on a database of one million images. Experiments reveal that the proposed scalar quantization achieves a relatively 42% improvement in mean average precision over the baseline (hierarchical visual vocabulary tree approach), and also outperforms the state-of-the-art Hamming Embedding approach and soft assignment method.",
"title": ""
},
{
"docid": "7c7801d472e3a03986ec4000d9d86ca8",
"text": "The purpose of this study is to examine structural relationships among the capabilities, processes, and performance of knowledge management, and suggest strategic directions for the successful implementation of knowledge management. To serve this purpose, the authors conducted an extensive survey of 68 knowledge management-adopting Korean firms in diverse industries and collected 215 questionnaires. Analyzing hypothesized structural relationships with the data collected, they found that there exists statistically significant relationships among knowledge management capabilities, processes, and performance. The empirical results of this study also support the wellknown strategic hypothesis of the balanced scorecard (BSC). © 2007 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "00de76b9a27182c5551598871326f6b2",
"text": "The development of computational thinking skills through computer programming is a major topic in education, as governments around the world are introducing these skills in the school curriculum. In consequence, educators and students are facing this discipline for the first time. Although there are many technologies that assist teachers and learners in the learning of this competence, there is a lack of tools that support them in the assessment tasks. This paper compares the computational thinking score provided by Dr. Scratch, a free/libre/open source software assessment tool for Scratch, with McCabe's Cyclomatic Complexity and Halstead's metrics, two classic software engineering metrics that are globally recognized as a valid measurement for the complexity of a software system. The findings, which prove positive, significant, moderate to strong correlations between them, could be therefore considered as a validation of the complexity assessment process of Dr. Scratch.",
"title": ""
},
{
"docid": "ffa5ae359807884c2218b92d2db2a584",
"text": "We present a method for automatically classifying consumer health questions. Our thirteen question types are designed to aid in the automatic retrieval of medical answers from consumer health resources. To our knowledge, this is the first machine learning-based method specifically for classifying consumer health questions. We demonstrate how previous approaches to medical question classification are insufficient to achieve high accuracy on this task. Additionally, we describe, manually annotate, and automatically classify three important question elements that improve question classification over previous techniques. Our results and analysis illustrate the difficulty of the task and the future directions that are necessary to achieve high-performing consumer health question classification.",
"title": ""
},
{
"docid": "9147cc4e2d26cea9c7d90b9e9dfee7a0",
"text": "We investigate the expressiveness of the microfacet model for isotropic bidirectional reflectance distribution functions (BRDFs) measured from real materials by introducing a non-parametric factor model that represents the model’s functional structure but abandons restricted parametric formulations of its factors. We propose a new objective based on compressive weighting that controls rendering error in high-dynamic-range BRDF fits better than previous factorization approaches. We develop a simple numerical procedure to minimize this objective and handle dependencies that arise between microfacet factors. Our method faithfully captures a more comprehensive set of materials than previous state-of-the-art parametric approaches yet remains compact (3.2KB per BRDF). We experimentally validate the benefit of the microfacet model over a naïve orthogonal factorization and show that fidelity for diffuse materials is modestly improved by fitting an unrestricted shadowing/masking factor. We also compare against a recent data-driven factorization approach [Bilgili et al. 2011] and show that our microfacet-based representation improves rendering accuracy for most materials while reducing storage by more than 10 ×.",
"title": ""
},
{
"docid": "d5130b0353dd05e6a0e6e107c9b863e0",
"text": "We study Euler–Poincaré systems (i.e., the Lagrangian analogue of LiePoisson Hamiltonian systems) defined on semidirect product Lie algebras. We first give a derivation of the Euler–Poincaré equations for a parameter dependent Lagrangian by using a variational principle of Lagrange d’Alembert type. Then we derive an abstract Kelvin-Noether theorem for these equations. We also explore their relation with the theory of Lie-Poisson Hamiltonian systems defined on the dual of a semidirect product Lie algebra. The Legendre transformation in such cases is often not invertible; thus, it does not produce a corresponding Euler–Poincaré system on that Lie algebra. We avoid this potential difficulty by developing the theory of Euler–Poincaré systems entirely within the Lagrangian framework. We apply the general theory to a number of known examples, including the heavy top, ideal compressible fluids and MHD. We also use this framework to derive higher dimensional Camassa-Holm equations, which have many potentially interesting analytical properties. These equations are Euler-Poincaré equations for geodesics on diffeomorphism groups (in the sense of the Arnold program) but where the metric is H rather than L. ∗Research partially supported by NSF grant DMS 96–33161. †Research partially supported by NSF Grant DMS-9503273 and DOE contract DE-FG0395ER25245-A000.",
"title": ""
},
{
"docid": "d7dc0dd72295a5c8e49afb4ed3bb763f",
"text": "Many significant sources of error take place in the smart antenna system like mismatching between the supposed steering vectors and the real vectors, insufficient calibration of array antenna, etc. These errors correspond to adding spatially white noise to each element of the array antenna, therefore the performance of the smart antenna falls and the desired output signal is destroyed. This paper presents a performance study of a smart antenna system at different noise levels using five adaptive beamforming algorithms and compares between them. The investigated algorithms are Least Mean Square (LMS), Normalized Least Mean Square (NLMS), Sample Matrix Inversion (SMI), Recursive Least Square (RLS) and Hybrid Least Mean Square / Sample Matrix Inversion (LMS/SMI). MATLAB simulation results are illustrated to investigate the performance of these algorithms.",
"title": ""
},
{
"docid": "d10ec03d91d58dd678c995ec1877c710",
"text": "Major depressive disorders, long considered to be of neurochemical origin, have recently been associated with impairments in signaling pathways that regulate neuroplasticity and cell survival. Agents designed to directly target molecules in these pathways may hold promise as new therapeutics for depression.",
"title": ""
},
{
"docid": "c1af668bdeeda5871e3bc6a602f022e6",
"text": "Within the parallel computing domain, field programmable gate arrays (FPGA) are no longer restricted to their traditional role as substitutes for application-specific integrated circuits-as hardware \"hidden\" from the end user. Several high performance computing vendors offer parallel re configurable computers employing user-programmable FPGAs. These exciting new architectures allow end-users to, in effect, create reconfigurable coprocessors targeting the computationally intensive parts of each problem. The increased capability of contemporary FPGAs coupled with the embarrassingly parallel nature of the Jacobi iterative method make the Jacobi method an ideal candidate for hardware acceleration. This paper introduces a parameterized design for a deeply pipelined, highly parallelized IEEE 64-bit floating-point version of the Jacobi method. A Jacobi circuit is implemented using a Xilinx Virtex-II Pro as the target FPGA device. Implementation statistics and performance estimates are presented.",
"title": ""
},
{
"docid": "3bb6f64769a92fce9fa0b33fd654bc88",
"text": "The passive dynamic walker (PDW) has a remarkable characteristic that it realizes cyclic locomotion without planning the joint trajectories. However, it cannot control the walking behavior because it is dominated by the fixed body dynamics. Observing the human cyclic locomotion emerged by elastic muscles, we add the compliant hip joint on PDW, and we propose a \"phasic dynamics tuner\" that changes the body dynamics by tuning the joint compliance in order to control the walking behavior. The joint compliance is obtained by driving the joint utilizing antagonistic and agonistic McKibben pneumatic actuators. This paper shows that PDW with the compliant joint and the phasic dynamics tuner enhances the walking performance than present PDW with passive free joints. The phasic dynamics tuner can change the walking velocity by tuning the joint compliance. Experimental results show the effectiveness of the joint compliance and the phasic dynamics tuner.",
"title": ""
}
] |
scidocsrr
|
0d6171068e65b0fff9f2939787f193bc
|
Stochastic Direct Reinforcement: Application to Simple Games with Recurrence
|
[
{
"docid": "ad5005bc593b0fbddfe483732b30fe5e",
"text": "Recent multi-agent extensions of Q-Learning require knowledge of other agents’ payoffs and Q-functions, and assume game-theoretic play at all times by all other agents. This paper proposes a fundamentally different approach, dubbed “Hyper-Q” Learning, in which values of mixed strategies rather than base actions are learned, and in which other agents’ strategies are estimated from observed actions via Bayesian inference. Hyper-Q may be effective against many different types of adaptive agents, even if they are persistently dynamic. Against certain broad categories of adaptation, it is argued that Hyper-Q may converge to exact optimal time-varying policies. In tests using Rock-Paper-Scissors, Hyper-Q learns to significantly exploit an Infinitesimal Gradient Ascent (IGA) player, as well as a Policy Hill Climber (PHC) player. Preliminary analysis of Hyper-Q against itself is also presented.",
"title": ""
}
] |
[
{
"docid": "21daaa29b6ff00af028f3f794b0f04b7",
"text": "During the last years, we are experiencing the mushrooming and increased use of web tools enabling Internet users to both create and distribute content (multimedia information). These tools referred to as Web 2.0 technologies-applications can be considered as the tools of mass collaboration, since they empower Internet users to actively participate and simultaneously collaborate with other Internet users for producing, consuming and diffusing the information and knowledge being distributed through the Internet. In other words, Web 2.0 tools do nothing more than realising and exploiting the full potential of the genuine concept and role of the Internet (i.e. the network of the networks that is created and exists for its users). The content and information generated by users of Web 2.0 technologies are having a tremendous impact not only on the profile, expectations and decision making behaviour of Internet users, but also on e-business model that businesses need to develop and/or adapt. The tourism industry is not an exception from such developments. On the contrary, as information is the lifeblood of the tourism industry the use and diffusion of Web 2.0 technologies have a substantial impact of both tourism demand and supply. Indeed, many new types of tourism cyber-intermediaries have been created that are nowadays challenging the e-business model of existing cyberintermediaries that only few years ago have been threatening the existence of intermediaries!. In this vein, the purpose of this article is to analyse the major applications of Web 2.0 technologies in the tourism and hospitality industry by presenting their impact on both demand and supply.",
"title": ""
},
{
"docid": "ddccad7ce01cad45413e0bcc06ba6668",
"text": "This article highlights the thus far unexplained social and professional effects raised by robotization in surgical applications, and further develops an understanding of social acceptance among professional users of robots in the healthcare sector. It presents findings from ethnographic workplace research on human-robot interactions (HRI) in a population of twenty-three professionals. When considering all the findings, the latest da Vinci system equipped with four robotic arms substitutes two table-side surgical assistants, in contrast to the single-arm AESOP robot that only substitutes one surgical assistant. The adoption of robots and the replacement of surgical assistants provide clear evidence that robots are well-accepted among operating surgeons. Because HRI decrease the operating surgeon’s dependence on social assistance and since they replace the work tasks of surgical assistants, the robot is considered a surrogate artificial work partner and worker. This finding is consistent with prior HRI research indicating that users, through their cooperation with robots, often become less reliant on supportive social actions. This research relates to societal issues and provides the first indication that highly educated knowledge workers are beginning to be replaced by robot technology in working life and therefore points towards a paradigm shift in the service sector.",
"title": ""
},
{
"docid": "91e32e80a6a2f2a504776b9fd86425ca",
"text": "We propose a method for semi-supervised semantic segmentation using an adversarial network. While most existing discriminators are trained to classify input images as real or fake on the image level, we design a discriminator in a fully convolutional manner to differentiate the predicted probability maps from the ground truth segmentation distribution with the consideration of the spatial resolution. We show that the proposed discriminator can be used to improve semantic segmentation accuracy by coupling the adversarial loss with the standard cross entropy loss of the proposed model. In addition, the fully convolutional discriminator enables semi-supervised learning through discovering the trustworthy regions in predicted results of unlabeled images, thereby providing additional supervisory signals. In contrast to existing methods that utilize weakly-labeled images, our method leverages unlabeled images to enhance the segmentation model. Experimental results on the PASCAL VOC 2012 and Cityscapes datasets demonstrate the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "4a2fcdf5394e220a579d1414588a124a",
"text": "In this paper we introduce AR Scratch, the first augmented-reality (AR) authoring environment designed for children. By adding augmented-reality functionality to the Scratch programming platform, this environment allows pre-teens to create programs that mix real and virtual spaces. Children can display virtual objects on a real-world space seen through a camera, and they can control the virtual world through interactions between physical objects. This paper describes the system design process, which focused on appropriately presenting the AR technology to the typical Scratch population (children aged 8-12), as influenced by knowledge of child spatial cognition, programming expertise, and interaction metaphors. Evaluation of this environment is proposed, accompanied by results from an initial pilot study, as well as discussion of foreseeable impacts on the Scratch user community.",
"title": ""
},
{
"docid": "99bc521c438fa804fd43d1755ba0d900",
"text": "The revolutionary potential of massive open online courses (MOOCs) has been met with much skepticism, particularly in terms of the quality of learning offered. Believing that a focus on learning is more important than a focus on course completion rates, this position paper presents a pedagogical assessment of MOOCs using Chickering and Gamson's Seven Principles of Good Practice in Undergraduate Education and Bloom's taxonomy, based on the author's personal experience as a learner in four xMOOCs. Although most xMOOCs have similar characteristics, the author shows that they are not all offered in exactly the same way, and some provide more sound pedagogy that develops higher order thinking, whereas others do not. The author uses this evaluation, as well as reviews of other xMOOCs in the literature, to glean some good pedagogical practices in xMOOCs and areas for improvement.",
"title": ""
},
{
"docid": "fcaf6f7a675cae064d6cd23e291560d1",
"text": "Many novice programmers view programming tools as all-knowing, infallible authorities about what is right and wrong about code. This misconception is particularly detrimental to beginners, who may view the cold, terse, and often judgmental errors from compilers as a sign of personal failure. It is possible, however, that attributing this failure to the computer, rather than the learner, may improve learners' motivation to program. To test this hypothesis, we present Gidget, a game where the eponymous robot protagonist is cast as a fallible character that blames itself for not being able to correctly write code to complete its missions. Players learn programming by working with Gidget to debug its problematic code. In a two-condition controlled experiment, we manipulated Gidget's level of personification in: communication style, sound effects, and image. We tested our game with 116 self-described novice programmers recruited on Amazon's Mechanical Turk and found that, when given the option to quit at any time, those in the experimental condition (with a personable Gidget) completed significantly more levels in a similar amount of time. Participants in the control and experimental groups played the game for an average time of 39.4 minutes (SD=34.3) and 50.1 minutes (SD=42.6) respectively. These finding suggest that how programming tool feedback is portrayed to learners can have a significant impact on motivation to program and learning success.",
"title": ""
},
{
"docid": "00e8c142e7f059c10cd9eabdb78e0120",
"text": "Running average method and its modified version are two simple and fast methods for background modeling. In this paper, some weaknesses of running average method and standard background subtraction are mentioned. Then, a fuzzy approach for background modeling and background subtraction is proposed. For fuzzy background modeling, fuzzy running average is suggested. Background modeling and background subtraction algorithms are very commonly used in vehicle detection systems. To demonstrate the advantages of fuzzy running average and fuzzy background subtraction, these methods and their standard versions are compared in vehicle detection application. Experimental results show that fuzzy approach is relatively more accurate than classical approach.",
"title": ""
},
{
"docid": "e8bd4676b8ee39c1da853553bfc8fabc",
"text": "BACKGROUND\nThe upper face and periocular region is a complex and dynamic part of the face. Successful rejuvenation requires a combination of minimally invasive modalities to fill dents and hollows, resurface rhytides, improve pigmentation, and smooth the mimetic muscles of the face without masking facial expression.\n\n\nMETHODS\nUsing review of the literature and clinical experience, the authors discuss our strategy for combining botulinum toxin, facial filler, ablative laser, intense pulsed light, microfocused ultrasound, and microneedle fractional radiofrequency to treat aesthetic problems of the upper face including brow ptosis, temple volume loss, A-frame deformity of the superior sulcus, and superficial and deep rhytides.\n\n\nRESULTS\nWith attention to safety recommendations, injectable, light, laser, and energy-based treatments can be safely combined in experienced hands to provide enhanced outcomes in the rejuvenation of the upper face.\n\n\nCONCLUSION\nProviding multiple treatments in 1 session improves patient satisfaction by producing greater improvements in a shorter amount of time and with less overall downtime than would be necessary with multiple office visits.",
"title": ""
},
{
"docid": "b9aec323872875009fdc396e50bd9103",
"text": "The study of learning in adversarial environments is an emerging discipline at the juncture between machine learning and computer security. The interest in learning-based methods for securityand system-design applications comes from the high degree of complexity of phenomena underlying the security and reliability of computer systems. As it becomes increasingly difficult to reach the desired properties solely using statically designed mechanisms, learning methods are being used more and more to obtain a better understanding of various data collected from these complex systems. However, learning approaches can be evaded by adversaries, who change their behavior in response to the learning methods. To-date, there has been limited research into learning techniques that are resilient to attacks with provable robustness guarantees The Perspectives Workshop, “Machine Learning Methods for Computer Security” was convened to bring together interested researchers from both the computer security and machine learning communities to discuss techniques, challenges, and future research directions for secure learning and learning-based security applications. As a result of the twenty-two invited presentations, workgroup sessions and informal discussion, several priority areas of research were identified. The open problems identified in the field ranged from traditional applications of machine learning in security, such as attack detection and analysis of malicious software, to methodological issues related to secure learning, especially the development of new formal approaches with provable security guarantees. Finally a number of other potential applications were pinpointed outside of the traditional scope of computer security in which security issues may also arise in connection with data-driven methods. Examples of such applications are social media spam, plagiarism detection, authorship identification, copyright enforcement, computer vision (particularly in the context of biometrics), and sentiment analysis. Perspectives Workshop 09.–14. September, 2012 – www.dagstuhl.de/12371 1998 ACM Subject Classification C.2.0 Computer-Communication Networks (General): Security and Protection (e.g., firewalls), D.4.6 Operating Systems (Security and Protection), I.2.6 Artificial Intelligence (Learning), I.2.7 Artificial Intelligence (Natural Language Processing), I.2.8 Artificial Intelligence (Problem Solving, Control Methods, and Search), K.4.1 Computers and Society (Public Policy Issues): Privacy, K.6.5 Management of Computing and Information Systems (Security and Protection)",
"title": ""
},
{
"docid": "53575c45a60f93c850206f2a467bc8e8",
"text": "We present BPEmb, a collection of pre-trained subword unit embeddings in 275 languages, based on Byte-Pair Encoding (BPE). In an evaluation using fine-grained entity typing as testbed, BPEmb performs competitively, and for some languages better than alternative subword approaches, while requiring vastly fewer resources and no tokenization. BPEmb is available at https://github.com/bheinzerling/bpemb.",
"title": ""
},
{
"docid": "4520316ecef3051305e547d50fadbb7a",
"text": "The increasing complexity and size of digital designs, in conjunction with the lack of a potent verification methodology that can effectively cope with this trend, continue to inspire engineers and academics in seeking ways to further automate design verification. In an effort to increase performance and to decrease engineering effort, research has turned to artificial intelligence (AI) techniques for effective solutions. The generation of tests for simulation-based verification can be guided by machine-learning techniques. In fact, recent advances demonstrate that embedding machine-learning (ML) techniques into a coverage-directed test generation (CDG) framework can effectively automate the test generation process, making it more effective and less error-prone. This article reviews some of the most promising approaches in this field, aiming to evaluate the approaches and to further stimulate more directed research in this area.",
"title": ""
},
{
"docid": "5f365973899e33de3052dda238db13c1",
"text": "The global threat to public health posed by emerging multidrug-resistant bacteria in the past few years necessitates the development of novel approaches to combat bacterial infections. Endolysins encoded by bacterial viruses (or phages) represent one promising avenue of investigation. These enzyme-based antibacterials efficiently kill Gram-positive bacteria upon contact by specific cell wall hydrolysis. However, a major hurdle in their exploitation as antibacterials against Gram-negative pathogens is the impermeable lipopolysaccharide layer surrounding their cell wall. Therefore, we developed and optimized an approach to engineer these enzymes as outer membrane-penetrating endolysins (Artilysins), rendering them highly bactericidal against Gram-negative pathogens, including Pseudomonas aeruginosa and Acinetobacter baumannii. Artilysins combining a polycationic nonapeptide and a modular endolysin are able to kill these (multidrug-resistant) strains in vitro with a 4 to 5 log reduction within 30 min. We show that the activity of Artilysins can be further enhanced by the presence of a linker of increasing length between the peptide and endolysin or by a combination of both polycationic and hydrophobic/amphipathic peptides. Time-lapse microscopy confirmed the mode of action of polycationic Artilysins, showing that they pass the outer membrane to degrade the peptidoglycan with subsequent cell lysis. Artilysins are effective in vitro (human keratinocytes) and in vivo (Caenorhabditis elegans). Importance: Bacterial resistance to most commonly used antibiotics is a major challenge of the 21st century. Infections that cannot be treated by first-line antibiotics lead to increasing morbidity and mortality, while millions of dollars are spent each year by health care systems in trying to control antibiotic-resistant bacteria and to prevent cross-transmission of resistance. Endolysins--enzymes derived from bacterial viruses--represent a completely novel, promising class of antibacterials based on cell wall hydrolysis. Specifically, they are active against Gram-positive species, which lack a protective outer membrane and which have a low probability of resistance development. We modified endolysins by protein engineering to create Artilysins that are able to pass the outer membrane and become active against Pseudomonas aeruginosa and Acinetobacter baumannii, two of the most hazardous drug-resistant Gram-negative pathogens.",
"title": ""
},
{
"docid": "432e8e346b2407cef8b6deabeea5d94e",
"text": "Plant-based psychedelics, such as psilocybin, have an ancient history of medicinal use. After the first English language report on LSD in 1950, psychedelics enjoyed a short-lived relationship with psychology and psychiatry. Used most notably as aids to psychotherapy for the treatment of mood disorders and alcohol dependence, drugs such as LSD showed initial therapeutic promise before prohibitive legislature in the mid-1960s effectively ended all major psychedelic research programs. Since the early 1990s, there has been a steady revival of human psychedelic research: last year saw reports on the first modern brain imaging study with LSD and three separate clinical trials of psilocybin for depressive symptoms. In this circumspective piece, RLC-H and GMG share their opinions on the promises and pitfalls of renewed psychedelic research, with a focus on the development of psilocybin as a treatment for depression.",
"title": ""
},
{
"docid": "2b595cab271cac15ea165e46459d6923",
"text": "Autonomous Mobility On Demand (MOD) systems can utilize fleet management strategies in order to provide a high customer quality of service (QoS). Previous works on autonomous MOD systems have developed methods for rebalancing single capacity vehicles, where QoS is maintained through large fleet sizing. This work focuses on MOD systems utilizing a small number of vehicles, such as those found on a campus, where additional vehicles cannot be introduced as demand for rides increases. A predictive positioning method is presented for improving customer QoS by identifying key locations to position the fleet in order to minimize expected customer wait time. Ridesharing is introduced as a means for improving customer QoS as arrival rates increase. However, with ridesharing perceived QoS is dependent on an often unknown customer preference. To address this challenge, a customer ratings model, which learns customer preference from a 5-star rating, is developed and incorporated directly into a ridesharing algorithm. The predictive positioning and ridesharing methods are applied to simulation of a real-world campus MOD system. A combined predictive positioning and ridesharing approach is shown to reduce customer service times by up to 29%. and the customer ratings model is shown to provide the best overall MOD fleet management performance over a range of customer preferences.",
"title": ""
},
{
"docid": "2a6c7baa220e0c4267bebe4ea03a241b",
"text": "Android app repackaging threatens the health of application markets, as repackaged apps, besides stealing revenue for honest developers, are also a source of malware distribution. Techniques that rely on visual similarity of Android apps recently emerged as a way to tackle the repackaging detection problem, as code-based detection techniques often fail in terms of efficiency, and effectiveness when obfuscation is applied [19,21]. Among such techniques, the resource-based repackaging detection approach that compares sets of files included in apks has arguably the best performance [20,17,10]. Yet, this approach has not been previously validated on a dataset of repackaged apps. In this paper we report on our evaluation of the approach, and present substantial improvements to it. Our experiments show that the stateof-art tools applying this technique rely on too restrictive thresholds. Indeed, we demonstrate that a very low proportion of identical resource files in two apps is a reliable evidence for repackaging. Furthermore, we have shown that the Overlap similarity score performs better than the Jaccard similarity coefficient used in previous works. By applying machine learning techniques, we give evidence that considering separately the included resource file types significantly improves the detection accuracy of the method. Experimenting with a balanced dataset of more than 2700 app pairs, we show that with our enhancements it is possible to achieve the F-measure of 0.9919.",
"title": ""
},
{
"docid": "276e88821b94b27703efc078c243e772",
"text": "With the integration of more computational cores and deeper memory hierarchies on modern processors, the performance gap between naively parallel zed code and optimized code becomes much larger than ever before. Very often, bridging the gap involves architecture-specific optimizations. These optimizations are difficult to implement by application programmers, who typically focus on the basic functionality of their code. Therefore, in this thesis, I focus on answering the following research question: \"How can we address architecture-specific optimizations in a programmer-friendly way?'' As an answer, I propose an optimizing framework for parallel applications running on many-core processors (\\textit{Sesame}). Taking a simple parallel zed code provided by the application programmers as input, Sesame chooses and applies the most suitable architecture-specific optimizations, aiming to improve the overall application performance in a user-transparent way. In this short paper, I present the motivation for designing and implementing Sesame, its structure and its modules. Furthermore, I describe the current status of Sesame, discussing our promising results in source-to-source vectorization, automated usage of local memory, and auto-tuning for implementation-specific parameters. Finally, I discuss my work-in-progress and sketch my ideas for finalizing Sesame's development and testing.",
"title": ""
},
{
"docid": "59ec5715b15e3811a0d9010709092d03",
"text": "We propose two new models for human action recognition from video sequences using topic models. Video sequences are represented by a novel “bag-of-words” representation, where each frame corresponds to a “word”. Our models differ from previous latent topic models for visual recognition in two major aspects: first of all, the latent topics in our models directly correspond to class labels; secondly, some of the latent variables in previous topic models become observed in our case. Our models have several advantages over other latent topic models used in visual recognition. First of all, the training is much easier due to the decoupling of the model parameters. Secondly, it alleviates the issue of how to choose the appropriate number of latent topics. Thirdly, it achieves much better performance by utilizing the information provided by the class labels in the training set. We present action classification results on five different datasets. Our results are either comparable to, or significantly better than previous published results on these datasets. Index Terms —Human action recognition, video analysis, bag-of-words, probabilistic graphical models, event and activity understanding",
"title": ""
},
{
"docid": "dd82e1c54a2b73e98788eb7400600be3",
"text": "Supernovae Type-Ia (SNeIa) play a significant role in exploring the history of the expansion of the Universe, since they are the best-known standard candles with which we can accurately measure the distance to the objects. Finding large samples of SNeIa and investigating their detailed characteristics has become an important issue in cosmology and astronomy. Existing methods relied on a photometric approach that first measures the luminance of supernova candidates precisely and then fits the results to a parametric function of temporal changes in luminance. However, it inevitably requires a lot of observations and complex luminance measurements. In this work, we present a novel method for detecting SNeIa simply from single-shot observation images without any complex measurements, by effectively integrating the state-of-the-art computer vision methodology into the standard photometric approach. Experimental results show the effectiveness of the proposed method and reveal classification performance comparable to existing photometric methods with many observations.",
"title": ""
},
{
"docid": "495c2c50f577bb04cd68a9d85cf216bb",
"text": "This paper concentrated on the design and analysis of Neuro-Fuzzy controller based Adaptive Neuro-Fuzzy inference system (ANFIS) architecture for Load frequency control of interconnected areas, to regulate the frequency deviation and power deviations. Any mismatch between generation and demand causes the system frequency to deviate from its nominal value. Thus high frequency deviation may lead to system collapse. So there is necessity robust controller required to maintain the nominal system frequency. The proposed ANFIS controller combines the advantages of fuzzy controller as well as quick response and adaptability nature of artificial neural network however the control technology implemented with sugeno rule to obtain the optimum performance. In order to keep system performance near its optimum, it is desirable to track the operating conditions and use updated parameters near its optimum. This ANFIS replaces the original conventional proportional Integral (PI) controller and a fuzzy logic (FL) controller were also utilizes the same area criteria error input. The advantage of this controller is that it can handle the nonlinarites at the same time it is faster than other conventional controllers. Simulation results show that the performance of the proposed ANFIS based NeuroFuzzy controller damps out the frequency deviation and reduces the overshoot of the different frequency deviations.",
"title": ""
}
] |
scidocsrr
|
7b42e5fe2898a74a1aabdda96f2b3450
|
The MGB challenge: Evaluating multi-genre broadcast media recognition
|
[
{
"docid": "a16139b8924fc4468086c41fedeef3d9",
"text": "Grapheme-to-phoneme conversion is the task of finding the pronunciation of a word given its written form. It has important applications in text-to-speech and speech recognition. Joint-sequence models are a simple and theoretically stringent probabilistic framework that is applicable to this problem. This article provides a selfcontained and detailed description of this method. We present a novel estimation algorithm and demonstrate high accuracy on a variety of databases. Moreover we study the impact of the maximum approximation in training and transcription, the interaction of model size parameters, n-best list generation, confidence measures, and phoneme-to-grapheme conversion. Our software implementation of the method proposed in this work is available under an Open Source license.",
"title": ""
}
] |
[
{
"docid": "df175c91322be3a87dfba84793e9b942",
"text": "Due to an increasing awareness about dental erosion, many clinicians would like to propose treatments even at the initial stages of the disease. However, when the loss of tooth structure is visible only to the professional eye, and it has not affected the esthetics of the smile, affected patients do not usually accept a full-mouth rehabilitation. Reducing the cost of the therapy, simplifying the clinical steps, and proposing noninvasive adhesive techniques may promote patient acceptance. In this article, the treatment of an ex-bulimic patient is illustrated. A modified approach of the three-step technique was followed. The patient completed the therapy in five short visits, including the initial one. No tooth preparation was required, no anesthesia was delivered, and the overall (clinical and laboratory) costs were kept low. At the end of the treatment, the patient was very satisfied from a biologic and functional point of view.",
"title": ""
},
{
"docid": "a15275cc08ad7140e6dd0039e301dfce",
"text": "Cardiovascular disease is more prevalent in type 1 and type 2 diabetes, and continues to be the leading cause of death among adults with diabetes. Although atherosclerotic vascular disease has a multi-factorial etiology, disorders of lipid metabolism play a central role. The coexistence of diabetes with other risk factors, in particular with dyslipidemia, further increases cardiovascular disease risk. A characteristic pattern, termed diabetic dyslipidemia, consists of increased levels of triglycerides, low levels of high density lipoprotein cholesterol, and postprandial lipemia, and is mostly seen in patients with type 2 diabetes or metabolic syndrome. This review summarizes the trends in the prevalence of lipid disorders in diabetes, advances in the mechanisms contributing to diabetic dyslipidemia, and current evidence regarding appropriate therapeutic recommendations.",
"title": ""
},
{
"docid": "4b8ee1a2e6d80a0674e2ff8f940d16f9",
"text": "Classification and knowledge extraction from complex spatiotemporal brain data such as EEG or fMRI is a complex challenge. A novel architecture named the NeuCube has been established in prior literature to address this. A number of key points in the implementation of this framework, including modular design, extensibility, scalability, the source of the biologically inspired spatial structure, encoding, classification, and visualisation tools must be considered. A Python version of this framework that conforms to these guidelines has been implemented.",
"title": ""
},
{
"docid": "895b5d767119676e9eb5264eb3e6e7b1",
"text": "This paper presents a preliminary design and analysis of an optimal energy management and control system for a parallel hybrid electric vehicle using hybrid dynamic control system theory and design tools. The vehicle longitudinal dynamics is analyzed. The practical operation modes of the hybrid electric vehicle are introduced with regard to the given power train configuration. In order to synthesize the vehicle continuous dynamics and the discrete transition between the vehicle operation modes, the hybrid dynamical system theory is applied to reformulate such a complex dynamical system in which the interaction of discrete and continuous dynamics are involved. A dynamic programming-based method is developed to determine the optimal power split between both sources of energy. Computer simulation results are presented and demonstrate the effectiveness of the proposed design and applicability and practicality of the design in real-time implementation. Copyright 2002 EVS19",
"title": ""
},
{
"docid": "f7fa13048b42a566d8621f267141f80d",
"text": "The software underpinning today's IT systems needs to adapt dynamically and predictably to rapid changes in system workload, environment and objectives. We describe a software framework that achieves such adaptiveness for IT systems whose components can be modelled as Markov chains. The framework comprises (i) an autonomic architecture that uses Markov-chain quantitative analysis to dynamically adjust the parameters of an IT system in line with its state, environment and objectives; and (ii) a method for developing instances of this architecture for real-world systems. Two case studies are presented that use the framework successfully for the dynamic power management of disk drives, and for the adaptive management of cluster availability within data centres, respectively.",
"title": ""
},
{
"docid": "86bb5aab780892d89d7a0057f14fad9f",
"text": "Complicated grief is a prolonged grief disorder with elements of a stress response syndrome. We have previously proposed a biobehavioral model showing the pathway to complicated grief. Avoidance is a component that can be difficult to assess and pivotal to treatment. Therefore we developed an avoidance questionnaire to characterize avoidance among patients with CG. We further explain our complicated grief model and provide results of a study of 128 participants in a treatment study of CG who completed a 15-item Grief-related Avoidance Questionnaire (GRAQ). Mean (SD) GRAQ score was 25. 0 ± 12.5 with a range of 0–60. Cronbach’s alpha was 0.87 and test re-test correlation was 0.88. Correlation analyses showed good convergent and discriminant validity. Avoidance of reminders of the loss contributed to functional impairment after controlling for other symptoms of complicated grief. In this paper we extend our previously described attachment-based biobehavioral model of CG. We envision CG as a stress response syndrome that results from failure to integrate information about death of an attachment figure into an effectively functioning secure base schema and/or to effectively re-engage the exploratory system in a world without the deceased. Avoidance is a key element of the model.",
"title": ""
},
{
"docid": "c5851a9fe60c0127a351668ba5b0f21d",
"text": "We examined salivary C-reactive protein (CRP) levels in the context of tobacco smoke exposure (TSE) in healthy youth. We hypothesized that there would be a dose-response relationship between TSE status and salivary CRP levels. This work is a pilot study (N = 45) for a larger investigation in which we aim to validate salivary CRP against serum CRP, the gold standard measurement of low-grade inflammation. Participants were healthy youth with no self-reported periodontal disease, no objectively measured obesity/adiposity, and no clinical depression, based on the Beck Depression Inventory (BDI-II). We assessed tobacco smoking and confirmed smoking status (non-smoking, passive smoking, and active smoking) with salivary cotinine measurement. We measured salivary CRP by the ELISA method. We controlled for several potential confounders. We found evidence for the existence of a dose-response relationship between the TSE status and salivary CRP levels. Our preliminary findings indicate that salivary CRP seems to have a similar relation to TSE as its widely used serum (systemic inflammatory) biomarker counterpart.",
"title": ""
},
{
"docid": "9cf4d68ab09e98cd5b897308c8791d26",
"text": "Gesture Recognition Technology has evolved greatly over the years. The past has seen the contemporary Human – Computer Interface techniques and their drawbacks, which limit the speed and naturalness of the human brain and body. As a result gesture recognition technology has developed since the early 1900s with a view to achieving ease and lessening the dependence on devices like keyboards, mice and touchscreens. Attempts have been made to combine natural gestures to operate with the technology around us to enable us to make optimum use of our body gestures making our work faster and more human friendly. The present has seen huge development in this field ranging from devices like virtual keyboards, video game controllers to advanced security systems which work on face, hand and body recognition techniques. The goal is to make full use of the movements of the body and every angle made by the parts of the body in order to supplement technology to become human friendly and understand natural human behavior and gestures. The future of this technology is very bright with prototypes of amazing devices in research and development to make the world equipped with digital information at hand whenever and wherever required.",
"title": ""
},
{
"docid": "8f444ac95ff664e06e1194dd096e4f31",
"text": "Entity alignment aims to link entities and their counterparts among multiple knowledge graphs (KGs). Most existing methods typically rely on external information of entities such as Wikipedia links and require costly manual feature construction to complete alignment. In this paper, we present a novel approach for entity alignment via joint knowledge embeddings. Our method jointly encodes both entities and relations of various KGs into a unified low-dimensional semantic space according to a small seed set of aligned entities. During this process, we can align entities according to their semantic distance in this joint semantic space. More specifically, we present an iterative and parameter sharing method to improve alignment performance. Experiment results on realworld datasets show that, as compared to baselines, our method achieves significant improvements on entity alignment, and can further improve knowledge graph completion performance on various KGs with the favor of joint knowledge embeddings.",
"title": ""
},
{
"docid": "99ee1fe74b0b8a9679b8b7bd005d54ab",
"text": "An essential characteristic in many e-commerce settings is that website visitors can have very specific short-term shopping goals when they browse the site. Relying solely on long-term user models that are pre-trained on historical data can therefore be insufficient for a suitable next-basket recommendation. Simple \"real-time\" recommendation approaches based, e.g., on unpersonalized co-occurrence patterns, on the other hand do not fully exploit the available information about the user's long-term preference profile. In this work, we aim to explore and quantify the effectiveness of using and combining long-term models and short-term adaptation strategies. We conducted an empirical evaluation based on a novel evaluation design and two real-world datasets. The results indicate that maintaining short-term content-based and recency-based profiles of the visitors can lead to significant accuracy increases. At the same time, the experiments show that the choice of the algorithm for learning the long-term preferences is particularly important at the beginning of new shopping sessions.",
"title": ""
},
{
"docid": "eaec7fb5490ccabd52ef7b4b5abd25f6",
"text": "Automatic and reliable segmentation of the prostate is an important but difficult task for various clinical applications such as prostate cancer radiotherapy. The main challenges for accurate MR prostate localization lie in two aspects: (1) inhomogeneous and inconsistent appearance around prostate boundary, and (2) the large shape variation across different patients. To tackle these two problems, we propose a new deformable MR prostate segmentation method by unifying deep feature learning with the sparse patch matching. First, instead of directly using handcrafted features, we propose to learn the latent feature representation from prostate MR images by the stacked sparse auto-encoder (SSAE). Since the deep learning algorithm learns the feature hierarchy from the data, the learned features are often more concise and effective than the handcrafted features in describing the underlying data. To improve the discriminability of learned features, we further refine the feature representation in a supervised fashion. Second, based on the learned features, a sparse patch matching method is proposed to infer a prostate likelihood map by transferring the prostate labels from multiple atlases to the new prostate MR image. Finally, a deformable segmentation is used to integrate a sparse shape model with the prostate likelihood map for achieving the final segmentation. The proposed method has been extensively evaluated on the dataset that contains 66 T2-wighted prostate MR images. Experimental results show that the deep-learned features are more effective than the handcrafted features in guiding MR prostate segmentation. Moreover, our method shows superior performance than other state-of-the-art segmentation methods.",
"title": ""
},
{
"docid": "3763da6b72ee0a010f3803a901c9eeb2",
"text": "As NAND flash memory manufacturers scale down to smaller process technology nodes and store more bits per cell, reliability and endurance of flash memory reduce. Wear-leveling and error correction coding can improve both reliability and endurance, but finding effective algorithms requires a strong understanding of flash memory error patterns. To enable such understanding, we have designed and implemented a framework for fast and accurate characterization of flash memory throughout its lifetime. This paper examines the complex flash errors that occur at 30-40nm flash technologies. We demonstrate distinct error patterns, such as cycle-dependency, location-dependency and value-dependency, for various types of flash operations. We analyze the discovered error patterns and explain why they exist from a circuit and device standpoint. Our hope is that the understanding developed from this characterization serves as a building block for new error tolerance algorithms for flash memory.",
"title": ""
},
{
"docid": "e629f1935ab4f69ffaefdaa59b374a05",
"text": "Higher-order low-rank tensors naturally arise in many applications including hyperspectral data recovery, video inpainting, seismic data reconstruction, and so on. We propose a new model to recover a low-rank tensor by simultaneously performing low-rank matrix factorizations to the all-mode matricizations of the underlying tensor. An alternating minimization algorithm is applied to solve the model, along with two adaptive rank-adjusting strategies when the exact rank is not known. Phase transition plots reveal that our algorithm can recover a variety of synthetic low-rank tensors from significantly fewer samples than the compared methods, which include a matrix completion method applied to tensor recovery and two state-of-the-art tensor completion methods. Further tests on real-world data show similar advantages. Although our model is non-convex, our algorithm performs consistently throughout the tests and give better results than the compared methods, some of which are based on convex models. In addition, the global convergence of our algorithm can be established in the sense that the gradient of Lagrangian function converges to zero.",
"title": ""
},
{
"docid": "5739713d17ec5cc6952832644b2a1386",
"text": "Group Support Systems (GSS) can improve the productivity of Group Work by offering a variety of tools to assist a virtual group across geographical distances. Experience shows that the value of a GSS depends on how purposefully and skillfully it is used. We present a framework for a universal GSS based on a thinkLet- and thinXel-based Group Process Modeling Language (GPML). Our framework approach uses the GPML to describe different kinds of group processes in an unambiguous and compact representation and to guide the participants automatically through these processes. We assume that a GSS based on this GPML can provide the following advantages: to support the user by designing and executing a collaboration process and to increase the applicability of GSSs for different kinds of group processes. We will present a prototype and use different kinds of group processes to illustrate the application of a GPML for a universal GSS.",
"title": ""
},
{
"docid": "ed8fef21796713aba1a6375a840c8ba3",
"text": "PURPOSE\nThe novel self-paced maximal-oxygen-uptake (VO2max) test (SPV) may be a more suitable alternative to traditional maximal tests for elite athletes due to the ability to self-regulate pace. This study aimed to examine whether the SPV can be administered on a motorized treadmill.\n\n\nMETHODS\nFourteen highly trained male distance runners performed a standard graded exercise test (GXT), an incline-based SPV (SPVincline), and a speed-based SPV (SPVspeed). The GXT included a plateau-verification stage. Both SPV protocols included 5×2-min stages (and a plateau-verification stage) and allowed for self-pacing based on fixed increments of rating of perceived exertion: 11, 13, 15, 17, and 20. The participants varied their speed and incline on the treadmill by moving between different marked zones in which the tester would then adjust the intensity.\n\n\nRESULTS\nThere was no significant difference (P=.319, ES=0.21) in the VO2max achieved in the SPVspeed (67.6±3.6 mL·kg(-1)·min(-1), 95%CI=65.6-69.7 mL·kg(-1)·min(-1)) compared with that achieved in the GXT (68.6±6.0 mL·kg(-1)·min(-1), 95%CI=65.1-72.1 mL·kg(-1)·min(-1)). Participants achieved a significantly higher VO2max in the SPVincline (70.6±4.3 mL·kg(-1)·min(-1), 95%CI=68.1-73.0 mL·kg(-1)·min(-1)) than in either the GXT (P=.027, ES=0.39) or SPVspeed (P=.001, ES=0.76).\n\n\nCONCLUSIONS\nThe SPVspeed protocol produces VO2max values similar to those obtained in the GXT and may represent a more appropriate and athlete-friendly test that is more oriented toward the variable speed found in competitive sport.",
"title": ""
},
{
"docid": "87068ab038d08f9e1e386bc69ee8a5b2",
"text": "The success of deep convolutional architectures is often attributed in part to their ability to learn multiscale and invariant representations of natural signals. However, a precise study of these properties and how they affect learning guarantees is still missing. In this paper, we consider deep convolutional representations of signals; we study their invariance to translations and to more general groups of transformations, their stability to the action of diffeomorphisms, and their ability to preserve signal information. This analysis is carried by introducing a multilayer kernel based on convolutional kernel networks and by studying the geometry induced by the kernel mapping. We then characterize the corresponding reproducing kernel Hilbert space (RKHS), showing that it contains a large class of convolutional neural networks with homogeneous activation functions. This analysis allows us to separate data representation from learning, and to provide a canonical measure of model complexity, the RKHS norm, which controls both stability and generalization of any learned model. In addition to models in the constructed RKHS, our stability analysis also applies to convolutional networks with generic activations such as rectified linear units, and we discuss its relationship with recent generalization bounds based on spectral norms.",
"title": ""
},
{
"docid": "919dc4727575e2ce0419d31b03ddfbf3",
"text": "In wireless ad hoc networks, although defense strategies such as intrusion detection systems (IDSs) can be deployed at each mobile node, significant constraints are imposed in terms of the energy expenditure of such systems. In this paper, we propose a game theoretic framework to analyze the interactions between pairs of attacking/defending nodes using a Bayesian formulation. We study the achievable Nash equilibrium for the attacker/defender game in both static and dynamic scenarios. The dynamic Bayesian game is a more realistic model, since it allows the defender to consistently update his belief on his opponent's maliciousness as the game evolves. A new Bayesian hybrid detection approach is suggested for the defender, in which a lightweight monitoring system is used to estimate his opponent's actions, and a heavyweight monitoring system acts as a last resort of defense. We show that the dynamic game produces energy-efficient monitoring strategies for the defender, while improving the overall hybrid detection power.",
"title": ""
},
{
"docid": "6052c0f2adfe4b75f96c21a5ee128bf5",
"text": "I present a new Markov chain sampling method appropriate for distributions with isolated modes. Like the recently-developed method of \\simulated tempering\", the \\tempered transition\" method uses a series of distributions that interpolate between the distribution of interest and a distribution for which sampling is easier. The new method has the advantage that it does not require approximate values for the normalizing constants of these distributions, which are needed for simulated tempering, and can be tedious to estimate. Simulated tempering performs a random walk along the series of distributions used. In contrast, the tempered transitions of the new method move systematically from the desired distribution, to the easily-sampled distribution, and back to the desired distribution. This systematic movement avoids the ineeciency of a random walk, an advantage that unfortunately is cancelled by an increase in the number of interpolating distributions required. Because of this, the sampling eeciency of the tempered transition method in simple problems is similar to that of simulated tempering. On more complex distributions, however, simulated tempering and tempered transitions may perform differently. Which is better depends on the ways in which the interpolating distributions are \\deceptive\".",
"title": ""
},
{
"docid": "c76cfe38185146f60a416eedac962750",
"text": "OBJECTIVE\nRepeated public inquiries into child abuse tragedies in Britain demonstrate the level of public concern about the services designed to protect children. These inquiries identify faults in professionals' practice but the similarities in their findings indicate that they are having insufficient impact on improving practice. This study is based on the hypothesis that the recurrent errors may be explicable as examples of the typical errors of human reasoning identified by psychological research.\n\n\nMETHODS\nThe sample comprised all child abuse inquiry reports published in Britain between 1973 and 1994 (45 in total). Using a content analysis and a framework derived from psychological research on reasoning, a study was made of the reasoning of the professionals involved and the findings of the inquiries.\n\n\nRESULTS\nIt was found that professionals based assessments of risk on a narrow range of evidence. It was biased towards the information readily available to them, overlooking significant data known to other professionals. The range was also biased towards the more memorable data, that is, towards evidence that was vivid, concrete, arousing emotion and either the first or last information received. The evidence was also often faulty, due, in the main, to biased or dishonest reporting or errors in communication. A critical attitude to evidence was found to correlate with whether or not the new information supported the existing view of the family. A major problem was that professionals were slow to revise their judgements despite a mounting body of evidence against them.\n\n\nCONCLUSIONS\nErrors in professional reasoning in child protection work are not random but predictable on the basis of research on how people intuitively simplify reasoning processes in making complex judgements. These errors can be reduced if people are aware of them and strive consciously to avoid them. Aids to reasoning need to be developed that recognize the central role of intuitive reasoning but offer methods for checking intuitive judgements more rigorously and systematically.",
"title": ""
},
{
"docid": "634c134b1ec0c9fb985c93a63188308a",
"text": "Automatic processing of metaphor can be clearly divided into two subtasks: metaphor recognition (distinguishing between literal and metaphorical language in a text) and metaphor interpretation (identifying the intended literal meaning of a metaphorical expression). Both of them have been repeatedly addressed in NLP. This paper is the first comprehensive and systematic review of the existing computational models of metaphor, the issues of metaphor annotation in corpora and the available resources.",
"title": ""
}
] |
scidocsrr
|
5f9c253d65839eaef8eb0739a373e449
|
A Bottom Fed Deployable Conical Log Spiral Antenna Design for CubeSat
|
[
{
"docid": "d3e8940329682e447029078916b4e17d",
"text": "In this letter, a new conductive composite tape-spring is proposed for CubeSat deployable antennas that is constructed using a glass fiber reinforced epoxy with an embedded copper alloy conductor. The tape-spring is bistable enabling the antenna to be elastically stable in both the deployed and stowed states. A dipole antenna is designed, simulated, and tested to prove the viability of the electrical properties of this material.",
"title": ""
}
] |
[
{
"docid": "7fafda966819bb780b8b2b6ada4cc468",
"text": "Acne inversa (AI) is a chronic and recurrent inflammatory skin disease. It occurs in intertriginous areas of the skin and causes pain, drainage, malodor and scar formation. While supposedly caused by an autoimmune reaction, bacterial superinfection is a secondary event in the disease process. A unique case of a 43-year-old male patient suffering from a recurring AI lesion in the left axilla was retrospectively analysed. A swab revealed Actinomyces neuii as the only agent growing in the lesion. The patient was then treated with Amoxicillin/Clavulanic Acid 3 × 1 g until he was cleared for surgical excision. The intraoperative swab was negative for A. neuii. Antibiotics were prescribed for another 4 weeks and the patient has remained relapse free for more than 12 months now. Primary cutaneous Actinomycosis is a rare entity and the combination of AI and Actinomycosis has never been reported before. Failure to detect superinfections of AI lesions with slow-growing pathogens like Actinomyces spp. might contribute to high recurrence rates after immunosuppressive therapy of AI. The present case underlines the potentially multifactorial pathogenesis of the disease and the importance of considering and treating potential infections before initiating immunosuppressive regimens for AI patients.",
"title": ""
},
{
"docid": "35cd7373e10d20ac233755a18af8233d",
"text": "Parabolic trough solar technology is the most proven and lowest cost large-scale power technology available today, primarily because of the nine large commercial-s solar power plants that are operating in the California Mojave Desert. These pla developed by Luz International Limited and referred to as Solar Electric Genera Systems (SEGS), range in size from 14 –80 MW and represent 354 MW of installe electric generating capacity. More than 2,000,000 m 2 of parabolic trough collector technology has been operating daily for up to 18 years, and as the year 2001 ended, plants had accumulated 127 years of operational experience. The Luz collector tec ogy has demonstrated its ability to operate in a commercial power plant environmen no other solar technology in the world. Although no new plants have been built s 1990, significant advancements in collector and plant design have been made poss the efforts of the SEGS plants operators, the parabolic trough industry, and solar rese laboratories around the world. This paper reviews the current state of the art of parab trough solar power technology and describes the R&D efforts that are in progres enhance this technology. The paper also shows how the economics of future par trough solar power plants are expected to improve. @DOI: 10.1115/1.1467922 #",
"title": ""
},
{
"docid": "58231becc6c05dd0ce9ebc2ab41d4352",
"text": "This paper presents a novel 2/3 divider cell circuit design for a truly modular programmable frequency divider with high-speed, low-power, and high input-sensitivity features. In this paper, the proposed flip-flop based 2/3 divider cell adopts dynamic E-TSPC circuit that not only reduces power consumption, but also improves operation speed and input sensitivity. The whole design was implemented using the TSMC 0.18 μm 1P6M CMOS process. With an 8-stage 2/3 divider cell, the measurement results indicate that the proposed circuit operates up to 5.8GHz with the power-consumption less than 3.24mW.",
"title": ""
},
{
"docid": "9d27b8d9a5a330cffe93346e6404adcc",
"text": "Research in computer graphics has been in pursuit of realistic image generation for a long time. Recent advances in machine learning with deep generative models have shown increasing success of closing the realism gap by using datadriven and learned components. There is an increasing concern that real and fake images will become more and more difficult to tell apart. We take a first step towards this larger research challenge by asking the question if and to what extend a generated fake image can be attribute to a particular Generative Adversarial Networks (GANs) of a certain architecture and trained with particular data and random seed. Our analysis shows single samples from GANs carry highly characteristic fingerprints which make attribution of images to GANs possible. Surprisingly, this is even possible for GANs with same architecture and same training that only differ by the training seed.",
"title": ""
},
{
"docid": "83bb4e82a591c43f626e8f2e239cfe49",
"text": "Parameter estimation of a continuous-time Markov chain observed through a discrete-time memoryless channel is studied. An expectation-maximization (EM) algorithm for maximum likelihood estimation of the parameter of this hidden Markov process is developed and applied to a simple example of modeling ion-channel currents in living cell membranes. The approach follows that of Asmussen, Nerman and Olsson, and Ryden, for EM estimation of an underlying continuous-time Markov chain.",
"title": ""
},
{
"docid": "3f1a546477d02b09016472574a6f3f6a",
"text": "The paper mainly focusses on an improved voice activity detection algorithm employing long-term signal processing and maximum spectral component tracking. The benefits of this approach have been analyzed in a previous work (Ramirez, J. et al., Proc. EUROSPEECH 2003, p.3041-4, 2003) with clear improvements in speech/non-speech discriminability and speech recognition performance in noisy environments. Two clear aspects are now considered. The first one, which improves the performance of the VAD in low noise conditions, considers an adaptive length frame window to track the long-term spectral components. The second one reduces misclassification errors in highly noisy environments by using a noise reduction stage before the long-term spectral tracking. Experimental results show clear improvements over different VAD methods in speech/pause discrimination and speech recognition performance. Particularly, improvements in recognition rate were reported when the proposed VAD replaced the VADs of the ETSI advanced front-end (AFE) for distributed speech recognition (DSR).",
"title": ""
},
{
"docid": "cc99e806503b158aa8a41753adecd50c",
"text": "Semantic Mutation Testing (SMT) is a technique that aims to capture errors caused by possible misunderstandings of the semantics of a description language. It is intended to target a class of errors which is different from those captured by traditional Mutation Testing (MT). This paper describes our experiences in the development of an SMT tool for the C programming language: SMT-C. In addition to implementing the essential requirements of SMT (generating semantic mutants and running SMT analysis) we also aimed to achieve the following goals: weak MT/SMT for C, good portability between different configurations, seamless integration into test routines of programming with C and an easy to use front-end.",
"title": ""
},
{
"docid": "0bd91a36d282a08759d5e7ad0b2aee97",
"text": "We carry out a systematic study of existing visual CAPTCHAs based on distorted characters that are augmented with anti-segmentation techniques. Applying a systematic evaluation methodology to 15 current CAPTCHA schemes from popular web sites, we find that 13 are vulnerable to automated attacks. Based on this evaluation, we identify a series of recommendations for CAPTCHA designers and attackers, and possible future directions for producing more reliable human/computer distinguishers.",
"title": ""
},
{
"docid": "a61c1e5c1eafd5efd8ee7021613cf90d",
"text": "A millimeter-wave (mmW) bandpass filter (BPF) using substrate integrated waveguide (SIW) is proposed in this work. A BPF with three resonators is formed by etching slots on the top metal plane of the single SIW cavity. The filter is investigated with the theory of electric coupling mechanism. The design procedure and design curves of the coupling coefficient (K) and quality factor (Q) are given and discussed here. The extracted K and Q are used to determine the filter circuit dimensions. In order to prove the validity, a SIW BPF operating at 140 GHz is fabricated in a single circuit layer using low temperature co-fired ceramic (LTCC) technology. The measured insertion loss is 1.913 dB at 140 GHz with a fractional bandwidth of 13.03%. The measured results are in good agreement with simulated results in such high frequency.",
"title": ""
},
{
"docid": "7971ac5a8abaefc2ebc814624b5c8546",
"text": "Multibody structure from motion (SfM) is the extension of classical SfM to dynamic scenes with multiple rigidly moving objects. Recent research has unveiled some of the mathematical foundations of the problem, but a practical algorithm which can handle realistic sequences is still missing. In this paper, we discuss the requirements for such an algorithm, highlight theoretical issues and practical problems, and describe how a static structure-from-motion framework needs to be extended to handle real dynamic scenes. Theoretical issues include different situations in which the number of independently moving scene objects changes: Moving objects can enter or leave the field of view, merge into the static background (e.g., when a car is parked), or split off from the background and start moving independently. Practical issues arise due to small freely moving foreground objects with few and short feature tracks. We argue that all of these difficulties need to be handled online as structure-from-motion estimation progresses, and present an exemplary solution using the framework of probabilistic model-scoring.",
"title": ""
},
{
"docid": "29f820ea99905ad1ee58eb9d534c89ab",
"text": "Basic results in the rigorous theory of weighted dynamical zeta functions or dynamically defined generalized Fredholm determinants are presented. Analytic properties of the zeta functions or determinants are related to statistical properties of the dynamics via spectral properties of dynamical transfer operators, acting on Banach spaces of observables.",
"title": ""
},
{
"docid": "263088de40b85afeb051244de4821a25",
"text": "Deep neural networks (DNN) are powerful models for many pattern recognition tasks, yet they tend to have many layers and many neurons resulting in a high computational complexity. This limits their application to high-performance computing platforms. In order to evaluate a trained DNN on a lower-performance computing platform like a mobile or embedded device, model reduction techniques which shrink the network size and reduce the number of parameters without considerable performance degradation performance are highly desirable. In this paper, we start with a trained fully connected DNN and show how to reduce the network complexity by a novel layerwise pruning method. We show that if some neurons are pruned and the remaining parameters (weights and biases) are adapted correspondingly to correct the errors introduced by pruning, the model reduction can be done almost without performance loss. The main contribution of our pruning method is a closed-form solution that only makes use of the first and second order moments of the layer outputs and, therefore, only needs unlabeled data. Using three benchmark datasets, we compare our pruning method with the low-rank approximation approach.",
"title": ""
},
{
"docid": "3a0d2784b1115e82a4aedad074da8c74",
"text": "The aim of this paper is to present how to implement a control volume approach improved by Hermite radial basis functions (CV-RBF) for geochemical problems. A multi-step strategy based on Richardson extrapolation is proposed as an alternative to the conventional dual step sequential non-iterative approach (SNIA) for coupling the transport equations with the chemical model. Additionally, this paper illustrates how to use PHREEQC to add geochemical reaction capabilities to CV-RBF transport methods. Several problems with different degrees of complexity were solved including cases of cation exchange, dissolution, dissociation, equilibrium and kinetics at different rates for mineral species. The results show that the solution and strategies presented here are effective and in good agreement with other methods presented in the literature for the same cases. & 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "697491cc059e471f0c97a840a2a9fca7",
"text": "This paper presents a virtual reality (VR) simulator for four-arm disaster response robot OCTOPUS, which has high capable of both mobility and workability. OCTOPUS has 26 degrees of freedom (DOF) and is currently teleoperated by two operators, so it is quite difficult to operate OCTOPUS. Thus, we developed a VR simulator for training operation, developing operator support system and control strategy. Compared with actual robot and environment, VR simulator can reproduce them at low cost and high efficiency. The VR simulator consists of VR environment and human-machine interface such as operation-input and video- and sound-output, based on robot operation system (ROS) and Gazebo. To enhance work performance, we implement indicators and data collection functions. Four tasks such as rough terrain passing, high-step climbing, obstacle stepping over, and object transport were conducted to evaluate OCTOPUS itself and our VR simulator. The results indicate that operators could complete all the tasks but the success rate differed in tasks. Smooth and stable operations increased the work performance, but sudden change and oscillation of operation degraded it. Cooperating multi-joint adequately is quite important to execute task more efficiently.",
"title": ""
},
{
"docid": "e649c3a48eccb6165320356e94f5ed7d",
"text": "There have been several attempts to create scalable and hardware independent software architectures for Unmanned Aerial Vehicles (UAV). In this work, we propose an onboard architecture for UAVs where hardware abstraction, data storage and communication between modules are efficiently maintained. All processing and software development is done on the UAV while state and mission status of the UAV is monitored from a ground station. The architecture also allows rapid development of mission-specific third party applications on the vehicle with the help of the core module.",
"title": ""
},
{
"docid": "b5c65533fd768b9370d8dc3aba967105",
"text": "Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.",
"title": ""
},
{
"docid": "9b17c6ff30e91f88e52b2db4eb331478",
"text": "Network traffic classification has become significantly important with rapid growth of current Internet network and online applications. There have been numerous studies on this topic which have led to many different approaches. Most of these approaches use predefined features extracted by an expert in order to classify network traffic. In contrast, in this study, we propose a deep learning based approach which integrates both feature extraction and classification phases into one system. Our proposed scheme, called “Deep Packet,” can handle both traffic characterization, in which the network traffic is categorized into major classes (e.g., FTP and P2P), and application identification in which identification of end-user applications (e.g., BitTorrent and Skype) is desired. Contrary to the most of current methods, Deep Packet can identify encrypted traffic and also distinguishes between VPN and non-VPN network traffic. After an initial pre-processing phase on data, packets are fed into Deep Packet framework that embeds stacked autoencoder and convolution neural network (CNN) in order to classify network traffic. Deep packet with CNN as its classification model achieved F1 score of 0.95 in application identification task and it also accomplished F1 score of 0.97 in traffic characterization task. To the best of our knowledge, Deep Packet outperforms all of the proposed classification methods on UNB ISCX VPN-nonVPN dataset.",
"title": ""
},
{
"docid": "390e9e2bfb8e94d70d1dbcfbede6dd46",
"text": "Modern software-based services are implemented as distributed systems with complex behavior and failure modes. Many large tech organizations are using experimentation to verify such systems' reliability. Netflix engineers call this approach chaos engineering. They've determined several principles underlying it and have used it to run experiments. This article is part of a theme issue on DevOps.",
"title": ""
},
{
"docid": "05f8bae694ca21d35d6a30fa6fa62f04",
"text": "To enhance developer productivity, all modern integrated development environments (IDEs) include code suggestion functionality that proposes likely next tokens at the cursor. While current IDEs work well for statically-typed languages, their reliance on type annotations means that they do not provide the same level of support for dynamic programming languages as for statically-typed languages. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code. Recent work has shown that language models can improve code suggestion systems by learning from software repositories. This paper introduces a neural language model with a sparse pointer network aimed at capturing very longrange dependencies. We release a large-scale code suggestion corpus of 41M lines of Python code crawled from GitHub. On this corpus, we found standard neural language models to perform well at suggesting local phenomena, but struggle to refer to identifiers that are introduced many tokens in the past. By augmenting a neural language model with a pointer network specialized in referring to predefined classes of identifiers, we obtain a much lower perplexity and a 5 percentage points increase in accuracy for code suggestion compared to an LSTM baseline. In fact, this increase in code suggestion accuracy is due to a 13 times more accurate prediction of identifiers. Furthermore, a qualitative analysis shows this model indeed captures interesting long-range dependencies, like referring to a class member defined over 60 tokens in the past.",
"title": ""
},
{
"docid": "2de04c57a9034cf2b4eb7055b4e150f6",
"text": "This paper presents an online detection-based two-stage multi-object tracking method in dense visual surveillances scenarios with a single camera. In the local stage, a particle filter with observer selection that could deal with partial object occlusion is used to generate a set of reliable tracklets. In the global stage, the detection responses are collected from a temporal sliding window to deal with ambiguity caused by full object occlusion to generate a set of potential tracklets. The reliable tracklets generated in the local stage and the potential tracklets generated within the temporal sliding window are associated by Hungarian algorithm on a modified pairwise tracklets association cost matrix to get the global optimal association. This method is applied to the pedestrian class and evaluated on two challenging datasets. The experimental results prove the effectiveness of our method.",
"title": ""
}
] |
scidocsrr
|
2d0cebbdc9c9e741ec5c838f4769288f
|
Biology of Consciousness
|
[
{
"docid": "6e0081006340e891aead8c7129c09c5c",
"text": "This Introduction to the Special Issue on Human Memory discusses some of the recent and current developments in the study of human memory from the neuropsychological perspective. A problem of considerable current interest, that of multiple memory systems, is a problem in classification. Much of the evidence for it is derived from clinical and experimental observations of dissociations between performances in memory tasks. The distinction between short-term and long-term memory is considered as an example of classification by dissociation. Current conceptualizations of multiple long-term memory systems are reviewed from the vantage point that distinguishes among three major kinds of memory--episodic, semantic, and procedural. These systems are briefly described and compared, and current views concerning the relation between them are discussed. The role of consciousness in memory is raised against the backdrop of the suggestion that it may be necessary to differentiate among several kinds of consciousness.",
"title": ""
}
] |
[
{
"docid": "7a0cec9d0e1f865a639db4f65626b5c2",
"text": "Over the past century, academic performance has become the gatekeeper to institutions of higher education, shaping career paths and individual life trajectories. Accordingly, much psychological research has focused on identifying predictors of academic performance, with intelligence and effort emerging as core determinants. In this article, we propose expanding on the traditional set of predictors by adding a third agency: intellectual curiosity. A series of path models based on a meta-analytically derived correlation matrix showed that (a) intelligence is the single most powerful predictor of academic performance; (b) the effects of intelligence on academic performance are not mediated by personality traits; (c) intelligence, Conscientiousness (as marker of effort), and Typical Intellectual Engagement (as marker of intellectual curiosity) are direct, correlated predictors of academic performance; and (d) the additive predictive effect of the personality traits of intellectual curiosity and effort rival that the influence of intelligence. Our results highlight that a \"hungry mind\" is a core determinant of individual differences in academic achievement.",
"title": ""
},
{
"docid": "1c7a844f1e9e4b38a52db9c518d1b094",
"text": "BACKGROUND\nActive learning (AL) has shown the promising potential to minimize the annotation cost while maximizing the performance in building statistical natural language processing (NLP) models. However, very few studies have investigated AL in a real-life setting in medical domain.\n\n\nMETHODS\nIn this study, we developed the first AL-enabled annotation system for clinical named entity recognition (NER) with a novel AL algorithm. Besides the simulation study to evaluate the novel AL algorithm, we further conducted user studies with two nurses using this system to assess the performance of AL in real world annotation processes for building clinical NER models.\n\n\nRESULTS\nThe simulation results show that the novel AL algorithm outperformed traditional AL algorithm and random sampling. However, the user study tells a different story that AL methods did not always perform better than random sampling for different users.\n\n\nCONCLUSIONS\nWe found that the increased information content of actively selected sentences is strongly offset by the increased time required to annotate them. Moreover, the annotation time was not considered in the querying algorithms. Our future work includes developing better AL algorithms with the estimation of annotation time and evaluating the system with larger number of users.",
"title": ""
},
{
"docid": "4e0a3dd1401a00ddc9d0620de93f4ecc",
"text": "The spatial-numerical association of response codes (SNARC) effect is the tendency for humans to respond faster to relatively larger numbers on the left or right (or with the left or right hand) and faster to relatively smaller numbers on the other side. This effect seems to occur due to a spatial representation of magnitude either in occurrence with a number line (wherein participants respond to relatively larger numbers faster on the right), other representations such as clock faces (responses are reversed from number lines), or culturally specific reading directions, begging the question as to whether the effect may be limited to humans. Given that a SNARC effect has emerged via a quantity judgement task in Western lowland gorillas and orangutans (Gazes et al., Cog 168:312–319, 2017), we examined patterns of response on a quantity discrimination task in American black bears, Western lowland gorillas, and humans for evidence of a SNARC effect. We found limited evidence for SNARC effect in American black bears and Western lowland gorillas. Furthermore, humans were inconsistent in direction and strength of effects, emphasizing the importance of standardizing methodology and analyses when comparing SNARC effects between species. These data reveal the importance of collecting data with humans in analogous procedures when testing nonhumans for effects assumed to bepresent in humans.",
"title": ""
},
{
"docid": "f835e60133415e3ec53c2c9490048172",
"text": "Probabilistic databases have received considerable attention recently due to the need for storing uncertain data produced by many real world applications. The widespread use of probabilistic databases is hampered by two limitations: (1) current probabilistic databases make simplistic assumptions about the data (e.g., complete independence among tuples) that make it difficult to use them in applications that naturally produce correlated data, and (2) most probabilistic databases can only answer a restricted subset of the queries that can be expressed using traditional query languages. We address both these limitations by proposing a framework that can represent not only probabilistic tuples, but also correlations that may be present among them. Our proposed framework naturally lends itself to the possible world semantics thus preserving the precise query semantics extant in current probabilistic databases. We develop an efficient strategy for query evaluation over such probabilistic databases by casting the query processing problem as an inference problem in an appropriately constructed probabilistic graphical model. We present several optimizations specific to probabilistic databases that enable efficient query evaluation. We validate our approach by presenting an experimental evaluation that illustrates the effectiveness of our techniques at answering various queries using real and synthetic datasets.",
"title": ""
},
{
"docid": "5a7bcc4d19bd25785943a8c2f2fe0c02",
"text": "Most work in relation extraction forms a prediction by looking at a short span of text within a single sentence containing a single entity pair mention. However, many relation types, particularly in biomedical text, are expressed across sentences or require a large context to disambiguate. We propose a model to consider all mention and entity pairs simultaneously in order to make a prediction. We encode full paper abstracts using an efficient self-attention encoder and form pairwise predictions between all mentions with a bi-affine operation. An entity-pair wise pooling aggregates mention pair scores to make a final prediction while alleviating training noise by performing within document multi-instance learning. We improve our model’s performance by jointly training the model to predict named entities and adding an additional corpus of weakly labeled data. We demonstrate our model’s effectiveness by achieving the state of the art on the Biocreative V Chemical Disease Relation dataset for models without KB resources, outperforming ensembles of models which use hand-crafted features and additional linguistic resources.",
"title": ""
},
{
"docid": "160a6db9788c324f1173a49ef667428a",
"text": "Object composition offers significant advantages over class inheritance to develop a flexible software architecture for finiteelement analysis. Using this approach, separate classes encapsulate fundamental finite-element algorithms and interoperate to form and solve the governing nonlinear equations. Communication between objects in the analysis composition is established using software design patterns. Root-finding algorithms, time integration methods, constraint handlers, linear equation solvers, and degree of freedom numberers are implemented as interchangeable components using the Strategy pattern. The Bridge and Factory Method patterns allow objects of the finite-element model to vary independently from objects that implement the numerical solution procedures. The Adapter and Iterator patterns permit equations to be assembled entirely through abstract interfaces that do not expose either the storage of objects in the analysis model or the computational details of the time integration method. Sequence diagrams document the interoperability of the analysis classes for solving nonlinear finite-element equations, demonstrating that object composition with design patterns provides a general approach to developing and refactoring nonlinear finite-element software. DOI: 10.1061/ ASCE CP.1943-5487.0000002 CE Database subject headings: Computer programming; Computer software; Finite element method; Nonlinear analysis. Author keywords: Computer programming; Computer software; Finite element method; Nonlinear analysis. Introduction Performance-based methodologies in structural engineering have increased the need for high-fidelity simulation of structural response under extreme loads, such as earthquake, blast, and other events that may cause damage or lead to progressive collapse Moehle and Deierlein 2004 . Simulation software for performance-based engineering must be able to accommodate sophisticated constitutive models for conventional and novel materials and soils, large displacement analysis methods, and robust solution algorithms for dynamic loads, among many other requirements. The finite-element method provides a general methodology for simulating the response of structural and geotechnical systems to arbitrary loading. To incorporate future developments and specific user needs, simulation software must provide interfaces for new finite-element formulations, solution algorithms, equation solvers, and support for advanced computing, modeling, visualization, and data mining. For example, parallel computing is becoming common in engineering, and structural simulation Research Engineer, Dept. of Civil and Environmental Engineering, Univ. of California, Berkeley, CA 94720. E-mail: fmckenna@ ce.berkeley.edu Assistant Professor, School of Civil and Construction Engineering, Oregon State Univ., Corvallis, OR 97331 corresponding author . E-mail: michael.scott@oregonstate.edu Dean, Cockrell School of Engineering, Univ. of Texas at Austin, Austin, TX 78712. E-mail: dean@engr.utexas.edu Note. This manuscript was submitted on April 21, 2008; approved on November 2, 2008; published online on December 15, 2009. Discussion period open until June 1, 2010; separate discussions must be submitted for individual papers. This paper is part of the Journal of Computing in Civil Engineering, Vol. 24, No. 1, January 1, 2010. ©ASCE, ISSN 0887-3801/2010/1-95–107/$25.00. JOURNAL OF COMPUTING Downloaded 02 Feb 2010 to 128.193.50.33. 
Redistribution subject to software needs to be able to take advantage of hardware systems that range from multicore processors to massively parallel computers Modak and Sotelino 2002; Peng et al. 2004 . To address these requirements, finite-element simulation software must be designed for computational efficiency, flexibility, extensibility, and portability. The traditional focus of simulation software development has been efficiency, but the other goals are equally important when considering the complete software lifecycle. Flexibility means that software components can be combined to provide new capability, even if it was not anticipated in the original design. Extensibility means that both the design and implementation of software components can be made more specific or to provide additional functionality. Portable software is designed to run on a variety of computer architectures and operating systems to take advantage of new computing capability. To address these needs, this paper presents a new objectoriented architecture in which the goals of flexibility, extensibility, and portability of finite-element software are achieved by emphasizing object composition over implementation inheritance in the software design. The major contribution is the use of composition of software components that implement solution procedures for the nonlinear governing equations of a finite-element model. Object composition is shown to provide a superior software design compared with the more common use of class inheritance. In addition to composition, the software architecture uses software design patterns to organize communication between the components of a nonlinear finite-element analysis. The architecture allows these components to be combined to create customized simulation applications, further enhancing flexibility, extensibility, and portability. The modular nature of the finite-element method results from its mathematical formulation Hughes 1987; Bathe 1996; Zienkiewicz and Taylor 2005 . Several researchers have developed object-oriented software designs and implementations for IN CIVIL ENGINEERING © ASCE / JANUARY/FEBRUARY 2010 / 95 ASCE license or copyright; see http://pubs.asce.org/copyright structural analysis and finite-element methods. The encapsulation of data and methods allows object-oriented programs more flexibility and extensibility than equivalent procedure-oriented programs Rumbaugh et al. 1991; Booch 1994; Sommerville 1995 , which can be exploited in engineering software development Fenves 1990; Baugh and Rehak 1992 . A bibliographic listing of object-oriented finite-element implementations between 1990 and 2003 is given by Mackerle 2004 . Early works Forde et al. 1990; Miller 1991; Mackie 1992 demonstrated that objectoriented structural analysis software has shorter development times and is easier to maintain and extend than procedural software. The main drawback to object-oriented software is the computational expense of dynamic memory management, which can account for up to 30% of program execution time Chang et al. 2001 , and random utilization of the memory heap which can cause excessive page faulting in larger programs. This expense can be mitigated by effective programming techniques such as passing references to objects to avoid the dynamic allocation of temporary objects, which is an important consideration for programs written in C Meyers 1997 . 
With effective memory management, the increase in computation time for object-oriented finite analysis over procedural implementations ranges from 10 to 15% Dubois-Pelerin and Zimmermann 1993; Rucki and Miller 1996 . Recent work to advance research in performance-based earthquake engineering has been organized around the object-oriented software framework OpenSees for structural and geotechnical simulation applications McKenna et al. 2000 . A software framework is a set of classes that a developer can combine and reuse to create an application. The framework defines the abstract classes and provides many of the concrete classes that implement specific functionality for an application space. The abstract classes define a common interface for all users of the class, e.g., an abstract Element class defines methods to compute and return its resisting forces and tangent stiffness. This set of methods is often referred to as an “abstract interface.” The concrete classes provide the implementation of the methods declared in the abstract class, or if a method has been implemented in the abstract class, the concrete class can override the method by providing its own implementation. <<interface>> ModelBuilder Domai populates",
"title": ""
},
{
"docid": "fdd790d33300c19cb0c340903e503b02",
"text": "We present a simple method for evergrowing extraction of predicate paraphrases from news headlines in Twitter. Analysis of the output of ten weeks of collection shows that the accuracy of paraphrases with different support levels is estimated between 60-86%. We also demonstrate that our resource is to a large extent complementary to existing resources, providing many novel paraphrases. Our resource is publicly available, continuously expanding based on daily news.",
"title": ""
},
{
"docid": "7944cf00184de7a62c26ba4408106aad",
"text": "We propose incorporating a production rules facility into a relational database system. Such a facility allows definition of database operations that are automatically executed whenever certain conditions are met. In keeping with the set-oriented approach of relational data manipulation languages, our production rules are also set-oriented—they are triggered by sets of changes to the database and may perform sets of changes. The condition and action parts of our production rules may refer to the current state of the database as well as to the sets of changes triggering the rules. We define a syntax for production rule definition as an extension to SQL. A model of system behavior is used to give an exact semantics for production rule execution, taking into account externally-generated operations, self-triggering rules, and simultaneous triggering of multiple rules.",
"title": ""
},
{
"docid": "17a502df854fbf281191c7b068f25d20",
"text": "Reinforcement learning (RL) has had many successes in both “deep” and “shallow” settings. In both cases, significant hyperparameter tuning is often required to achieve good performance. Furthermore, when nonlinear function approximation is used, non-stationarity in the state representation can lead to learning instability. A variety of techniques exist to combat this — most notably large experience replay buffers or the use of multiple parallel actors. These techniques come at the cost of moving away from the online RL problem as it is traditionally formulated (i.e., a single agent learning online without maintaining a large database of training examples). Meta-learning can potentially help with both these issues by tuning hyperparameters online and allowing the algorithm to more robustly adjust to non-stationarity in a problem. This paper applies meta-gradient descent to derive a set of step-size tuning algorithms specifically for online RL control with eligibility traces. Our novel technique, Metatrace, makes use of an eligibility trace analogous to methods like TD(λ). We explore tuning both a single scalar step-size and a separate step-size for each learned parameter. We evaluate Metatrace first for control with linear function approximation in the classic mountain car problem and then in a noisy, non-stationary version. Finally, we apply Metatrace for control with nonlinear function approximation in 5 games in the Arcade Learning Environment where we explore how it impacts learning speed and robustness to initial step-size choice. Results show that the meta-step-size parameter of Metatrace is easy to set, Metatrace can speed learning, and Metatrace can allow an RL algorithm to deal with non-stationarity in the learning task.",
"title": ""
},
{
"docid": "3df8b47d93d6a2e903b44621d7e3ac9f",
"text": "Face recognition is one of the foremost applications in computer vision, which often involves sensitive signals; privacy concerns have been raised lately and tackled by several recent privacy-preserving face recognition approaches. Those systems either take advantage of information derived from the database templates or require several interaction rounds between client and server, so they cannot address outsourced scenarios. We present a private face verification system that can be executed in the server without interaction, working with encrypted feature vectors for both the templates and the probe face. We achieve this by combining two significant contributions: 1) a novel feature model for Gabor coefficients' magnitude driving a Lloyd-Max quantizer, used for reducing plaintext cardinality with no impact on performance; 2) an extension of a quasi-fully homomorphic encryption able to compute, without interaction, the soft scores of an SVM operating on quantized and encrypted parameters, features and templates. We evaluate the private verification system in terms of time and communication complexity, and in verification accuracy in widely known face databases (XM2VTS, FERET, and LFW). These contributions open the door to completely private and noninteractive outsourcing of face verification.",
"title": ""
},
{
"docid": "5a7d3bfaae94ee144153369a5d23a0a4",
"text": "This paper introduces a spiking hierarchical model for object recognition which utilizes the precise timing information inherently present in the output of biologically inspired asynchronous address event representation (AER) vision sensors. The asynchronous nature of these systems frees computation and communication from the rigid predetermined timing enforced by system clocks in conventional systems. Freedom from rigid timing constraints opens the possibility of using true timing to our advantage in computation. We show not only how timing can be used in object recognition, but also how it can in fact simplify computation. Specifically, we rely on a simple temporal-winner-take-all rather than more computationally intensive synchronous operations typically used in biologically inspired neural networks for object recognition. This approach to visual computation represents a major paradigm shift from conventional clocked systems and can find application in other sensory modalities and computational tasks. We showcase effectiveness of the approach by achieving the highest reported accuracy to date (97.5% ± 3.5%) for a previously published four class card pip recognition task and an accuracy of 84.9% ± 1.9% for a new more difficult 36 class character recognition task.",
"title": ""
},
{
"docid": "aec23c23dfb209513fe804a2558cd087",
"text": "In recent years, STT-RAMs have been proposed as a promising replacement for SRAMs in on-chip caches. Although STT-RAMs benefit from high-density, non-volatility, and low-power characteristics, high rates of read disturbances and write failures are the major reliability problems in STTRAM caches. These disturbance/failure rates are directly affected not only by workload behaviors, but also by process variations. Several studies characterized the reliability of STTRAM caches just for one cell, but vulnerability of STT-RAM caches cannot be directly derived from these models. This paper extrapolates the reliability characteristics of one STTRAM cell presented in previous studies to the vulnerability analysis of STT-RAM caches. To this end, we propose a highlevel framework to investigate the vulnerability of STT-RAM caches affected by the per-cell disturbance/failure rates as well as the workloads behaviors and process variations. This framework is an augmentation of gem5 simulator. The investigation reveals that: 1) the read disturbance rate in a cache varies by 6 orders of magnitude for different workloads, 2) the write failure rate varies by 4 orders of magnitude for different workloads, and 3) the process variations increase the read disturbance and write failure rates by up to 5.8x and 8.9x, respectively.",
"title": ""
},
{
"docid": "f5c0bd81c589382ac78675902ce2969d",
"text": "The candela, one of the SI base units, has been realized by using absolutely calibrated detectors rather than sources. A group of eight photometers was constructed using silicon photodiodes, precision apertures, and glass filters for V (λ) match. Their absolute spectral responsivities were calibrated against the NIST absolute spectral responsivity scale. The measurement chain has been significantly shortened compared with the old scale based on a blackbody. This resulted in improving the calibration uncertainty to 0.46% (2σ), a factor-of-2 improvement. This revision has made various photometric calibrations at NIST more versatile and flexible. Luminous intensities of light sources ranging from 10-3 to 104 candelas are directly calibrated with the standard photometers, which have a linear response over that range. Illuminance meters are calibrated directly against the standard photometers. A luminance scale has also been realized on the detector base using an integrating sphere source. Total flux ranging from 10-2 to 105 lumens can be measured in a 2 m integrating sphere using a photometer with a wide dynamic range. The revisions of the calibration procedures significantly improved the calibration uncertainty.",
"title": ""
},
{
"docid": "463c6bb86f81d0f0e19427772add1a22",
"text": "Administrative burden represents the costs to businesses, citizens and the administration itself of complying with government regulations and procedures. The burden tends to increase with new forms of public governance that rely less on direct decisions and actions undertaken by traditional government bureaucracies, and more on government creating and regulating the environment for other, non-state actors to jointly address public needs. Based on the reviews of research and policy literature, this paper explores administrative burden as a policy problem, presents how Digital Government (DG) could be applied to address this problem, and identifies societal adoption, organizational readiness and other conditions under which DG can be an effective tool for Administrative Burden Reduction (ABR). Finally, the paper tracks ABR to the latest Contextualization stage in the DG evolution, and discusses possible development approaches and technological potential of pursuing ABR through DG.",
"title": ""
},
{
"docid": "3a4b9578345f0c1ac7a3cc194c783ed0",
"text": "Current studies of influence maximization focus almost exclusively on unsigned social networks ignoring the polarities of the relationships between users. Influence maximization in signed social networks containing both positive relationships (e.g., friend or like) and negative relationships (e.g., enemy or dislike) is still a challenging problem which remains much open. A few studies made use of greedy algorithms to solve the problem of positive influence or negative influence maximization in signed social networks. Although greedy algorithm is able to achieve a good approximation, it is computational expensive and not efficient enough. Aiming at this drawback, we propose an alternative method based on Simulated Annealing (SA) for the positive influence maximization problem in this paper. Additionally, we also propose two heuristics to speed up the convergence process of the proposed method. Comprehensive experiments results on three signed social network datasets, Epinions, Slashdot and Wikipedia, demonstrate that our method can yield similar or better performance than the greedy algorithms in terms of positive influence spread but run faster.",
"title": ""
},
{
"docid": "adc0de2a4c4baf4fdd35ff5a585550ef",
"text": "Sequence generation models such as recurrent networks can be trained with a diverse set of learning algorithms. For example, maximum likelihood learning is simple and efficient, yet suffers from the exposure bias problem. Reinforcement learning like policy gradient addresses the problem but can have prohibitively poor exploration efficiency. A variety of other algorithms such as RAML, SPG, and data noising, have also been developed from different perspectives. This paper establishes a formal connection between these algorithms. We present a generalized entropy regularized policy optimization formulation, and show that the apparently divergent algorithms can all be reformulated as special instances of the framework, with the only difference being the configurations of reward function and a couple of hyperparameters. The unified interpretation offers a systematic view of the varying properties of exploration and learning efficiency. Besides, based on the framework, we present a new algorithm that dynamically interpolates among the existing algorithms for improved learning. Experiments on machine translation and text summarization demonstrate the superiority of the proposed algorithm.",
"title": ""
},
{
"docid": "d2f78c78329eb290ad9c2ee368c76c6a",
"text": "Narrative intelligence is an important part of human cognition, especially in sensemaking and communicating with people. Humans draw on a lifetime of relevant experiences to explain stories, to tell stories, and to help choose the most appropriate actions in real-life settings. Manual authoring the required knowledge presents a significant bottleneck in the creation of systems demonstrating narrative intelligence. In this paper, we describe a novel technique for automatically learning script-like narrative knowledge from crowdsourcing. By leveraging human workers’ collective understanding of social and procedural constructs, we can learn a potentially unlimited range of scripts regarding how real-world situations unfold. We present quantitative evaluations of the learned primitive events and the temporal ordering of events, which suggest we can identify orderings between events with high accuracy.",
"title": ""
},
{
"docid": "4354df503e85911040e2f438024f16f3",
"text": "This paper proposes a Hybrid Approximate Representation (HAR) based on unifying several efficient approximations of the generalized reprojection error (which is known as the <italic>gold standard</italic> for multiview geometry). The HAR is an over-parameterization scheme where the approximation is applied simultaneously in multiple parameter spaces. A joint minimization scheme “HAR-Descent” can then solve the PnP problem efficiently, while remaining robust to approximation errors and local minima. The technique is evaluated extensively, including numerous synthetic benchmark protocols and the real-world data evaluations used in previous works. The proposed technique was found to have runtime complexity comparable to the fastest <inline-formula><tex-math notation=\"LaTeX\">$O(n)$</tex-math><alternatives><inline-graphic xlink:href=\"hadfield-ieq1-2806446.gif\"/></alternatives></inline-formula> techniques, and up to 10 times faster than current state of the art minimization approaches. In addition, the accuracy exceeds that of all 9 previous techniques tested, providing definitive state of the art performance on the benchmarks, across all 90 of the experiments in the paper and supplementary material, which can be found on the Computer Society Digital Library at <uri>http://doi.ieeecomputersociety.org/10.1109/TPAMI.2018.2806446</uri>.",
"title": ""
},
{
"docid": "2ca54e2e53027eb2ff441f0e2724d68f",
"text": "Thanks to rapid advances in technologies like GPS and Wi-Fi positioning, smartphone users are able to determine their location almost everywhere they go. This is not true, however, of people who are traveling in underground public transportation networks, one of the few types of high-traffic areas where smartphones do not have access to accurate position information. In this paper, we introduce the problem of underground transport positioning on smartphones and present SubwayPS, an accelerometer-based positioning technique that allows smartphones to determine their location substantially better than baseline approaches, even deep beneath city streets. We highlight several immediate applications of positioning in subway networks in domains ranging from mobile advertising to mobile maps and present MetroNavigator, a proof-of-concept smartphone and smartwatch app that notifies users of upcoming points-of-interest and alerts them when it is time to get ready to exit the train.",
"title": ""
}
] |
scidocsrr
|
51d40a551fe4d76c1975e2c1893481ab
|
Knowledge Enhanced Hybrid Neural Network for Text Matching
|
[
{
"docid": "1a6ece40fa87e787f218902eba9b89f7",
"text": "Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs needs to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain. Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. Our model demonstrates strong performance on the first task beating previous state-of-the-art systems by about 3\\% absolute points in both MAP and MRR and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers.",
"title": ""
},
{
"docid": "7e06f62814a2aba7ddaff47af62c13b4",
"text": "Natural language conversation is widely regarded as a highly difficult problem, which is usually attacked with either rule-based or learning-based models. In this paper we propose a retrieval-based automatic response model for short-text conversation, to exploit the vast amount of short conversation instances available on social media. For this purpose we introduce a dataset of short-text conversation based on the real-world instances from Sina Weibo (a popular Chinese microblog service), which will be soon released to public. This dataset provides rich collection of instances for the research on finding natural and relevant short responses to a given short text, and useful for both training and testing of conversation models. This dataset consists of both naturally formed conversations, manually labeled data, and a large repository of candidate responses. Our preliminary experiments demonstrate that the simple retrieval-based conversation model performs reasonably well when combined with the rich instances in our dataset.",
"title": ""
}
] |
[
{
"docid": "f8275a80021312a58c9cd52bbcd4c431",
"text": "Mobile online social networks (OSNs) are emerging as the popular mainstream platform for information and content sharing among people. In order to provide Quality of Experience (QoE) support for mobile OSN services, in this paper we propose a socially-driven learning-based framework, namely Spice, for media content prefetching to reduce the access delay and enhance mobile user's satisfaction. Through a large-scale data-driven analysis over real-life mobile Twitter traces from over 17,000 users during a period of five months, we reveal that the social friendship has a great impact on user's media content click behavior. To capture this effect, we conduct social friendship clustering over the set of user's friends, and then develop a cluster-based Latent Bias Model for socially-driven learning-based prefetching prediction. We then propose a usage-adaptive prefetching scheduling scheme by taking into account that different users may possess heterogeneous patterns in the mobile OSN app usage. We comprehensively evaluate the performance of Spice framework using trace-driven emulations on smartphones. Evaluation results corroborate that the Spice can achieve superior performance, with an average 67.2% access delay reduction at the low cost of cellular data and energy consumption. Furthermore, by enabling users to offload their machine learning procedures to a cloud server, our design can achieve speed-up of a factor of 1000 over the local data training execution on smartphones.",
"title": ""
},
{
"docid": "467d48d121ee8b9f792dbfbc7e281cc1",
"text": "This paper focuses on improving face recognition performance with a new signature combining implicit facial features with explicit soft facial attributes. This signature has two components: the existing patch-based features and the soft facial attributes. A deep convolutional neural network adapted from state-of-the-art networks is used to learn the soft facial attributes. Then, a signature matcher is introduced that merges the contributions of both patch-based features and the facial attributes. In this matcher, the matching scores computed from patch-based features and the facial attributes are combined to obtain a final matching score. The matcher is also extended so that different weights are assigned to different facial attributes. The proposed signature and matcher have been evaluated with the UR2D system on the UHDB31 and IJB-A datasets. The experimental results indicate that the proposed signature achieve better performance than using only patch-based features. The Rank-1 accuracy is improved significantly by 4% and 0.37% on the two datasets when compared with the UR2D system.",
"title": ""
},
{
"docid": "2bd15d743690c8bcacb0d01650759d62",
"text": "With the large amount of available data and the variety of features they offer, electronic health records (EHR) have gotten a lot of interest over recent years, and start to be widely used by the machine learning and bioinformatics communities. While typical numerical fields such as demographics, vitals, lab measurements, diagnoses and procedures, are natural to use in machine learning models, there is no consensus yet on how to use the free-text clinical notes. We show how embeddings can be learned from patients’ history of notes, at the word, note and patient level, using simple neural and sequence models. We show on various relevant evaluation tasks that these embeddings are easily transferable to smaller problems, where they enable accurate predictions using only clinical notes.",
"title": ""
},
{
"docid": "3f569eccc71c6186d6163a2cc40be0fc",
"text": "Deep Packet Inspection (DPI) is the state-of-the-art technology for traffic classification. According to the conventional wisdom, DPI is the most accurate classification technique. Consequently, most popular products, either commercial or open-source, rely on some sort of DPI for traffic classification. However, the actual performance of DPI is still unclear to the research community, since the lack of public datasets prevent the comparison and reproducibility of their results. This paper presents a comprehensive comparison of 6 well-known DPI tools, which are commonly used in the traffic classification literature. Our study includes 2 commercial products (PACE and NBAR) and 4 open-source tools (OpenDPI, L7-filter, nDPI, and Libprotoident). We studied their performance in various scenarios (including packet and flow truncation) and at different classification levels (application protocol, application and web service). We carefully built a labeled dataset with more than 750 K flows, which contains traffic from popular applications. We used the Volunteer-Based System (VBS), developed at Aalborg University, to guarantee the correct labeling of the dataset. We released this dataset, including full packet payloads, to the research community. We believe this dataset could become a common benchmark for the comparison and validation of network traffic classifiers. Our results present PACE, a commercial tool, as the most accurate solution. Surprisingly, we find that some open-source tools, such as nDPI and Libprotoident, also achieve very high accuracy.",
"title": ""
},
{
"docid": "ecf16ddb27cb5bebe59ce0cb26d5b861",
"text": "Shoham, Y., Agent-oriented programming, Artificial Intelligence 60 (1993) 51-92. A new computational framework is presented, called agent-oriented programming (AOP), which can be viewed as a specialization of object-oriented programming. The state of an agent consists of components such as beliefs, decisions, capabilities, and obligations; for this reason the state of an agent is called its mental state. The mental state of agents is described formally in an extension of standard epistemic logics: beside temporalizing the knowledge and belief operators, AOP introduces operators for obligation, decision, and capability. Agents are controlled by agent programs, which include primitives for communicating with other agents. In the spirit of speech act theory, each communication primitive is of a certain type: informing, requesting, offering, and so on. This article presents the concept of AOP, discusses the concept of mental state and its formal underpinning, defines a class of agent interpreters, and then describes in detail a specific interpreter that has been implemented.",
"title": ""
},
{
"docid": "b86491e112e4c4a31e805d8c739644b7",
"text": "Fashion is an integral part of life. Streets as a social center for people's interaction become the most important public stage to showcase the fashion culture of a metropolitan area. In this paper, therefore, we propose a novel framework based on deep neural networks (DNN) for depicting the street fashion of a city by automatically discovering fashion items (e.g., jackets) in a particular look that are most iconic for the city, directly from a large collection of geo-tagged street fashion photos. To obtain a reasonable collection of iconic items, our task is formulated as the prize-collecting Steiner tree (PCST) problem, whereby a visually intuitive summary of the world's iconic street fashion can be created. To the best of our knowledge, this is the first work devoted to investigate the world's fashion landscape in modern times through the visual analytics of big social data. It shows how the visual impression of local fashion cultures across the world can be depicted, modeled, analyzed, compared, and exploited. In the experiments, our approach achieves the best performance (43.19%) on our large collected GSFashion dataset (170K photos), with an average of two times higher than all the other algorithms (FII: 20.13%, AP: 18.76%, DC: 17.90%), in terms of the users' agreement ratio on the discovered iconic fashion items of a city. The potential of our proposed framework for advanced sociological understanding is also demonstrated via practical applications.",
"title": ""
},
{
"docid": "c435c4106b1b5c90fe3ff607bc0d5f00",
"text": "In recent years, we have witnessed a significant growth of “social computing” services, or online communities where users contribute content in various forms, including images, text or video. Content contribution from members is critical to the viability of these online communities. It is therefore important to understand what drives users to share content with others in such settings. We extend previous literature on user contribution by studying the factors that are associated with users’ photo sharing in an online community, drawing on motivation theories as well as on analysis of basic structural properties. Our results indicate that photo sharing declines in respect to the users’ tenure in the community. We also show that users with higher commitment to the community and greater “structural embeddedness” tend to share more content. We demonstrate that the motivation of self-development is negatively related to photo sharing, and that tenure in the community moderates the effect of self-development on photo sharing. Directions for future research, as well as implications for theory and practice are discussed.",
"title": ""
},
{
"docid": "af3fe6b35f345a604d06999b06623072",
"text": "Cross-language plagiarism detection deals with the automatic identification and extraction of plagiarism in a multilingual setting. In this setting, a suspicious document is given, and the task is to retrieve all sections from the document that originate from a large, multilingual document collection. Our contributions in this field are as follows: (i) a comprehensive retrieval process for cross-language plagiarism detection is introduced, highlighting the differences to monolingual plagiarism detection, (ii) state-of-the-art solutions for two important subtasks are reviewed, (iii) retrieval models for the assessment of cross-language similarity are surveyed, and, (iv) the three models CL-CNG, CL-ESA and CL-ASA are compared. Our evaluation is of realistic scale: it relies on 120 000 test documents which are selected from the corpora JRC-Acquis and Wikipedia, so that for each test document highly similar documents are available in all of the 6 languages English, German, Spanish, French, Dutch, and Polish. The models are employed in a series of ranking tasks, and more than 100 million similarities are computed with each model. The results of our evaluation indicate that CL-CNG, despite its simple approach, is the best choice to rank and compare texts across languages if they are syntactically related. CL-ESA almost matches the performance of CL-CNG, but on arbitrary pairs of languages. CL-ASA works best on “exact” translations but does not generalize well.",
"title": ""
},
{
"docid": "7c1b301e45da5af0f5248f04dbf33f75",
"text": "[1] We invert 115 differential interferograms derived from 47 synthetic aperture radar (SAR) scenes for a time-dependent deformation signal in the Santa Clara valley, California. The time-dependent deformation is calculated by performing a linear inversion that solves for the incremental range change between SAR scene acquisitions. A nonlinear range change signal is extracted from the ERS InSAR data without imposing a model of the expected deformation. In the Santa Clara valley, cumulative land uplift is observed during the period from 1992 to 2000 with a maximum uplift of 41 ± 18 mm centered north of Sunnyvale. Uplift is also observed east of San Jose. Seasonal uplift and subsidence dominate west of the Silver Creek fault near San Jose with a maximum peak-to-trough amplitude of 35 mm. The pattern of seasonal versus long-term uplift provides constraints on the spatial and temporal characteristics of water-bearing units within the aquifer. The Silver Creek fault partitions the uplift behavior of the basin, suggesting that it acts as a hydrologic barrier to groundwater flow. While no tectonic creep is observed along the fault, the development of a low-permeability barrier that bisects the alluvium suggests that the fault has been active since the deposition of Quaternary units.",
"title": ""
},
{
"docid": "3c8d59590b328e0b4ab6b856721009aa",
"text": "Mobile augmented reality (MAR) enabled devices have the capability to present a large amount of information in real time, based on sensors that determine proximity, visual reference, maps, and detailed information on the environment. Location and proximity technologies combined with detailed mapping allow effective navigation. Visual analysis software and growing image databases enable object recognition. Advanced graphics capabilities bring sophisticated presentation of the user interface. These capabilities together allow for real-time melding of the physical and the virtual worlds and can be used for information overlay of the user’s environment for various purposes such as entertainment, tourist assistance, navigation assistance, and education [ 1 ] . In designing for MAR applications it is very important to understand the context in which the information has to be presented. Past research on information presentation on small form factor computing has highlighted the importance of presenting the right information in the right way to effectively engage the user [ 2– 4 ] . The screen space that is available on a small form factor is limited, and having augmented information presented as an overlay poses very interesting challenges. MAR usages involve devices that are able to perceive the context of the user based on the location and other sensor based information. In their paper on “ContextAware Pervasive Systems: Architectures for a New Breed of Applications”, Loke [ 5 ] ,",
"title": ""
},
{
"docid": "ed7ce515d15c506ddcaab29fdc7eab01",
"text": "Normally, the primary purpose of an information display is to convey information. If information displays can be aesthetically interesting, that might be an added bonus. This paper considers an experiment in reversing this imperative. It describes the Kandinsky system which is designed to create displays which are first aesthetically interesting, and then as an added bonus, able to convey information. The Kandinsky system works on the basis of aesthetic properties specified by an artist (in a visual form). It then explores a space of collages composed from information bearing images, using an optimization technique to find compositions which best maintain the properties of the artist's aesthetic expression.",
"title": ""
},
{
"docid": "7e22005412f4e7e924103102cbcb7374",
"text": "Most of the clustering algorithms are based on Euclidean distance as measure of similarity between data objects. Theses algorithms also require initial setting of parameters as a prior, for example the number of clusters. The Euclidean distance is very sensitive to scales of variables involved and independent of correlated variables. To conquer these drawbacks a hybrid clustering algorithm based on Mahalanobis distance is proposed in this paper. The reason for the hybridization is to relieve the user from setting the parameters in advance. The experimental results of the proposed algorithm have been presented for both synthetic and real datasets. General Terms Data Mining, Clustering, Pattern Recognition, Algorithms.",
"title": ""
},
{
"docid": "74fd30bb5ef306968dcf05e5ea32c9d6",
"text": "Depth of field is the swath through a 3D scene that is imaged in acceptable focus through an optics system, such as a camera lens. Control over depth of field is an important artistic tool that can be used to emphasize the subject of a photograph. In a real camera, the control over depth of field is limited by the laws of physics and by physical constraints. The depth of field effect has been simulated in computer graphics, but with the same limited control as found in real camera lenses. In this report, we use anisotropic diffusion to generalize depth of field in computer graphics by allowing the user to independently specify the degree of blur at each point in three-dimensional space. Generalized depth of field provides a novel tool to emphasize an area of interest within a 3D scene, to pick objects out of a crowd, and to render a busy, complex picture more understandable by focusing only on relevant details that may be scattered throughout the scene. Our algorithm operates by blurring a sequence of nonplanar layers that form the scene. Choosing a suitable blur algorithm for the layers is critical; thus, we develop appropriate blur semantics such that the blur algorithm will properly generalize depth of field. We found that anisotropic diffusion is the process that best suits these semantics.",
"title": ""
},
{
"docid": "8cc28165debbb8cc430dc78098c0cd87",
"text": "Aaron Kravitz, for their help with the data collection. We are grateful to Ole-Kristian Hope, Jan Mahrt-Smith, and seminar participants at the University of Toronto for useful comments. Abstract Managers make different decisions in countries with poor protection of investor rights and poor financial development. One possible explanation is that shareholder-wealth maximizing managers face different tradeoffs in such countries (the tradeoff theory). Alternatively, firms in such countries are less likely to be managed for the benefit of shareholders because the poor protection of investor rights makes it easier for management and controlling shareholders to appropriate corporate resources for their own benefit (the agency costs theory). Holdings of liquid assets by firms across countries are consistent with Keynes' transaction and precautionary demand for money theories. Firms in countries with greater GDP per capita hold more cash as predicted. Controlling for economic development, firms in countries with more risk and with poor protection of investor rights hold more cash. The tradeoff theory and the agency costs theory can both explain holdings of liquid assets across countries. However, the fact that a dollar of cash is worth less than $0.65 to the minority shareholders of firms in such countries but worth approximately $1 in countries with good protection of investor rights and high financial development is only consistent with the agency costs theory. 2 1. Introduction Recent work shows that countries where institutions that protect investor rights are weak perform poorly along a number of dimensions. In particular, these countries have lower growth, less well-developed financial markets, and more macroeconomic volatility. 1 To measure the quality of institutions, authors have used, for instance, indices of the risk of expropriation, the level of corruption, and the rule of law. Since poor institutions could result from poor economic performance rather than cause it, authors have also used the origin of a country's legal system (La 2003) as instruments for the quality of institutions. For the quality of institutions to matter for economic performance, it has to affect the actions of firms and individuals. Recent papers examine how dividend, investment, asset composition, and capital structure policies are related to the quality of institutions. 2 In this paper, we focus more directly on why firm policies depend on the quality of institutions. The quality of institutions can affect firm policies for two different reasons. First, a country's protection of investor rights may influence the relative prices or …",
"title": ""
},
{
"docid": "6fa41378af62791731e17db2ea1115b6",
"text": "The amount of graph-structured data has recently experienced an enormous growth in many applications. To transform such data into useful information, fast analytics algorithms and software tools are necessary. One common graph analytics kernel is disjoint community detection (or graph clustering). Despite extensive research on heuristic solvers for this task, only few parallel codes exist, although parallelism will be necessary to scale to the data volume of real-world applications. We address the deficit in computing capability by a flexible and extensible community detection framework with shared-memory parallelism. Within this framework we design and implement efficient parallel community detection heuristics: A parallel label propagation scheme; the first large-scale parallelization of the well-known Louvain method, as well as an extension of the method adding refinement; and an ensemble scheme combining the above. In extensive experiments driven by the algorithm engineering paradigm, we identify the most successful parameters and combinations of these algorithms. We also compare our implementations with state-of-the-art competitors. The processing rate of our fastest algorithm often reaches 50 M edges/second. We recommend the parallel Louvain method and our variant with refinement as both qualitatively strong and fast. Our methods are suitable for massive data sets with billions of edges. (A preliminary version of this paper appeared in Proceedings of the 42nd International Conference on Parallel Processing (ICPP 2013) [35].)",
"title": ""
},
{
"docid": "7a8ebc7696b05bab262e168beaba45a8",
"text": "Understanding the plant-pathogen interactions is of utmost importance to design strategies for minimizing the economic deficits caused by pathogens in crops. With an aim to identify genes underlying resistance to downy mildew, a major disease responsible for productivity loss in pearl millet, transcriptome analysis was performed in downy mildew resistant and susceptible genotypes upon infection and control on 454 Roche NGS platform. A total of ~685 Mb data was obtained with 1 575 290 raw reads. The raw reads were pre-processed into high-quality (HQ) reads making to ~82% with an average of 427 bases. The assembly was optimized using four assemblers viz. Newbler, MIRA, CLC and Trinity, out of which MIRA with a total of 14.10 Mb and 90118 transcripts proved to be the best for assembling reads. Differential expression analysis depicted 1396 and 936 and 1000 and 1591 transcripts up and down regulated in resistant inoculated/resistant control and susceptible inoculated/susceptible control respectively with a common of 3644 transcripts. The pathways for secondary metabolism, specifically the phenylpropanoid pathway was up-regulated in resistant genotype. Transcripts up-regulated as a part of defense response included classes of R genes, PR proteins, HR induced proteins and plant hormonal signaling transduction proteins. The transcripts for skp1 protein, purothionin, V type proton ATPase were found to have the highest expression in resistant genotype. Ten transcripts, selected on the basis of their involvement in defense mechanism were validated with qRT-PCR and showed positive co-relation with transcriptome data. Transcriptome analysis evoked potentials of hypersensitive response and systemic acquired resistance as possible mechanism operating in defense mechanism in pearl millet against downy mildew infection.",
"title": ""
},
{
"docid": "000652922defcc1d500a604d43c8f77b",
"text": "The problem of object recognition has not yet been solved in its general form. The most successful approach to it so far relies on object models obtained by training a statistical method on visual features obtained from camera images. The images must necessarily come from huge visual datasets, in order to circumvent all problems related to changing illumination, point of view, etc. We hereby propose to also consider, in an object model, a simple model of how a human being would grasp that object (its affordance). This knowledge is represented as a function mapping visual features of an object to the kinematic features of a hand while grasping it. The function is practically enforced via regression on a human grasping database. After describing the database (which is publicly available) and the proposed method, we experimentally evaluate it, showing that a standard object classifier working on both sets of features (visual and motor) has a significantly better recognition rate than that of a visual-only classifier.",
"title": ""
},
{
"docid": "97af9704b898bebe4dae43c1984bc478",
"text": "In earlier work we have shown that adults, young children, and infants are capable of computing transitional probabilities among adjacent syllables in rapidly presented streams of speech, and of using these statistics to group adjacent syllables into word-like units. In the present experiments we ask whether adult learners are also capable of such computations when the only available patterns occur in non-adjacent elements. In the first experiment, we present streams of speech in which precisely the same kinds of syllable regularities occur as in our previous studies, except that the patterned relations among syllables occur between non-adjacent syllables (with an intervening syllable that is unrelated). Under these circumstances we do not obtain our previous results: learners are quite poor at acquiring regular relations among non-adjacent syllables, even when the patterns are objectively quite simple. In subsequent experiments we show that learners are, in contrast, quite capable of acquiring patterned relations among non-adjacent segments-both non-adjacent consonants (with an intervening vocalic segment that is unrelated) and non-adjacent vowels (with an intervening consonantal segment that is unrelated). Finally, we discuss why human learners display these strong differences in learning differing types of non-adjacent regularities, and we conclude by suggesting that these contrasts in learnability may account for why human languages display non-adjacent regularities of one type much more widely than non-adjacent regularities of the other type.",
"title": ""
},
{
"docid": "2ad79b7f6d2c3e6c3aa46fed256ee1cc",
"text": "Emotions like regret and envy share a common origin: they are motivated by the counterfactual thinking of what would have happened had we made a different choice. When we contemplate the outcome of a choice we made, we may use the information on the outcome of a choice we did not make. Regret is the purely private comparison between two choices that we could have taken, envy adds to this the information on outcome of choices of others. However, envy has a distinct social component, in that it adds the change in the social ranking that follows a difference in the outcomes. We study the theoretical foundation and the experimental test of this view.",
"title": ""
}
] |
scidocsrr
|
7e927e8913d28f1686363aa10c7ae676
|
JPE 10-6-13 DSP Based Series-Parallel Connected Two Full-Bridge DC-DC Converter with Interleaving Output Current Sharing
|
[
{
"docid": "bffe1ca96f4d6d3eff5ae2db40728284",
"text": "This paper discusses the general control problems of dc/dc converters connected in series at the input. As the input voltage is shared by a number of dc/dc converters, the resulting converter relieves the voltage stresses of individual devices and hence is suitable for high input-voltage applications. At the output side, parallel connection provides current sharing and is suitable for high output-current applications. Moreover, series connection at the output side is also possible, resulting in output voltage sharing. Theoretically, from a power balance consideration, one can show that fulfillment of input-voltage sharing implies fulfillment of output-current or of output-voltage sharing, and vice versa. However, the presence of right-half-plane poles can cause instability when the sharing is implemented at the output side. As a consequence, control should be directed to input-voltage sharing in order to ensure a stable sharing of the input voltage and of the output current (parallel connection at output) or output voltage (series connection at output). In this paper, general problems in input-series connected converter systems are addressed. Minimal control structures are then derived and some practical design considerations are discussed in detail. Illustrative examples are given for addressing these general control considerations. Finally, experimental prototypes are built to validate these considerations.",
"title": ""
}
] |
[
{
"docid": "05532f05f969c6db5744e5dd22a6fbe4",
"text": "Lamellipodia, filopodia and membrane ruffles are essential for cell motility, the organization of membrane domains, phagocytosis and the development of substrate adhesions. Their formation relies on the regulated recruitment of molecular scaffolds to their tips (to harness and localize actin polymerization), coupled to the coordinated organization of actin filaments into lamella networks and bundled arrays. Their turnover requires further molecular complexes for the disassembly and recycling of lamellipodium components. Here, we give a spatial inventory of the many molecular players in this dynamic domain of the actin cytoskeleton in order to highlight the open questions and the challenges ahead.",
"title": ""
},
{
"docid": "75aa71e270d85df73fa97336d2a6b713",
"text": "Designing powerful tools that support cooking activities has rapidly gained popularity due to the massive amounts of available data, as well as recent advances in machine learning that are capable of analyzing them. In this paper, we propose a cross-modal retrieval model aligning visual and textual data (like pictures of dishes and their recipes) in a shared representation space. We describe an effective learning scheme, capable of tackling large-scale problems, and validate it on the Recipe1M dataset containing nearly 1 million picture-recipe pairs. We show the effectiveness of our approach regarding previous state-of-the-art models and present qualitative results over computational cooking use cases.",
"title": ""
},
{
"docid": "161e962e8e68a941324ec7b20b0ae877",
"text": "The number of malicious programs has grown both in number and in sophistication. Analyzing the malicious intent of vast amounts of data requires huge resources and thus, effective categorization of malware is required. In this paper, the content of a malicious program is represented as an entropy stream, where each value describes the amount of entropy of a small chunk of code in a specific location of the file. Wavelet transforms are then applied to this entropy signal to describe the variation in the entropic energy. Motivated by the visual similarity between streams of entropy of malicious software belonging to the same family, we propose a file agnostic deep learning approach for categorization of malware. Our method exploits the fact that most variants are generated by using common obfuscation techniques and that compression and encryption algorithms retain some properties present in the original code. This allows us to find discriminative patterns that almost all variants in a family share. Our method has been evaluated using the data provided by Microsoft for the BigData Innovators Gathering Anti-Malware Prediction Challenge, and achieved promising results in comparison with the State of the Art.",
"title": ""
},
{
"docid": "1ca39ff80d1595ed4c9d8b1e04bc25be",
"text": "BACKGROUND\nAeromonas species are common inhabitants of aquatic environments giving rise to infections in both fish and humans. Identification of aeromonads to the species level is problematic and complex due to their phenotypic and genotypic heterogeneity.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nAeromonas hydrophila or Aeromonas sp were genetically re-identified using a combination of previously published methods targeting GCAT, 16S rDNA and rpoD genes. Characterization based on the genus specific GCAT-PCR showed that 94 (96%) of the 98 strains belonged to the genus Aeromonas. Considering the patterns obtained for the 94 isolates with the 16S rDNA-RFLP identification method, 3 clusters were recognised, i.e. A. caviae (61%), A. hydrophila (17%) and an unknown group (22%) with atypical RFLP restriction patterns. However, the phylogenetic tree constructed with the obtained rpoD sequences showed that 47 strains (50%) clustered with the sequence of the type strain of A. aquariorum, 18 (19%) with A. caviae, 16 (17%) with A. hydrophila, 12 (13%) with A. veronii and one strain (1%) with the type strain of A. trota. PCR investigation revealed the presence of 10 virulence genes in the 94 isolates as: lip (91%), exu (87%), ela (86%), alt (79%), ser (77%), fla (74%), aer (72%), act (43%), aexT (24%) and ast (23%).\n\n\nCONCLUSIONS/SIGNIFICANCE\nThis study emphasizes the importance of using more than one method for the correct identification of Aeromonas strains. The sequences of the rpoD gene enabled the unambiguous identication of the 94 Aeromonas isolates in accordance with results of other recent studies. Aeromonas aquariorum showed to be the most prevalent species (50%) containing an important subset of virulence genes lip/alt/ser/fla/aer. Different combinations of the virulence genes present in the isolates indicate their probable role in the pathogenesis of Aeromonas infections.",
"title": ""
},
{
"docid": "c7c63f08639660f935744309350ab1e0",
"text": "A composite of graphene oxide supported by needle-like MnO(2) nanocrystals (GO-MnO(2) nanocomposites) has been fabricated through a simple soft chemical route in a water-isopropyl alcohol system. The formation mechanism of these intriguing nanocomposites investigated by transmission electron microscopy and Raman and ultraviolet-visible absorption spectroscopy is proposed as intercalation and adsorption of manganese ions onto the GO sheets, followed by the nucleation and growth of the crystal species in a double solvent system via dissolution-crystallization and oriented attachment mechanisms, which in turn results in the exfoliation of GO sheets. Interestingly, it was found that the electrochemical performance of as-prepared nanocomposites could be enhanced by the chemical interaction between GO and MnO(2). This method provides a facile and straightforward approach to deposit MnO(2) nanoparticles onto the graphene oxide sheets (single layer of graphite oxide) and may be readily extended to the preparation of other classes of hybrids based on GO sheets for technological applications.",
"title": ""
},
{
"docid": "438b86a2b0dcc69f524e9f871a985158",
"text": "Good representations of data do help in many machine learning tasks such as recommendation. It is often a great challenge for traditional recommender systems to learn representative features of both users and images in large social networks, in particular, social curation networks, which are characterized as the extremely sparse links between users and images, and the extremely diverse visual contents of images. To address the challenges, we propose a novel deep model which learns the unified feature representations for both users and images. This is done by transforming the heterogeneous user-image networks into homogeneous low-dimensional representations, which facilitate a recommender to trivially recommend images to users by feature similarity. We also develop a fast online algorithm that can be easily scaled up to large networks in an asynchronously parallel way. We conduct extensive experiments on a representative subset of Pinterest, containing 1,456,540 images and 1,000,000 users. Results of image recommendation experiments demonstrate that our feature learning approach significantly outperforms other state-of-the-art recommendation methods.",
"title": ""
},
{
"docid": "3d34dc15fa11e723a52b21dc209a939f",
"text": "Valuable information can be hidden in images, however, few research discuss data mining on them. In this paper, we propose a general framework based on the decision tree for mining and processing image data. Pixel-wised image features were extracted and transformed into a database-like table which allows various data mining algorithms to make explorations on it. Each tuple of the transformed table has a feature descriptor formed by a set of features in conjunction with the target label of a particular pixel. With the label feature, we can adopt the decision tree induction to realize relationships between attributes and the target label from image pixels, and to construct a model for pixel-wised image processing according to a given training image dataset. Both experimental and theoretical analyses were performed in this study. Their results show that the proposed model can be very efficient and effective for image processing and image mining. It is anticipated that by using the proposed model, various existing data mining and image processing methods could be worked on together in different ways. Our model can also be used to create new image processing methodologies, refine existing image processing methods, or act as a powerful image filter.",
"title": ""
},
{
"docid": "6c58cfbdbb424f1e2ad35339e7ee7aa6",
"text": "We present a theoretical model of a multi-input arrayed waveguide grating (AWG) based on Fourier optics and apply the model to the design of a flattened passband response. This modeling makes it possible to systematically analyze spectral performance and to clarify the physical mechanisms of the multi-input AWG. The model suggested that the width of an input/output mode-field function and the number of waveguides in the array are important factors to flatten the response. We also developed a model for a novel AWG employing cascaded Mach-Zehnder interferometers connected to the AWG input ports and numerically analyzed its optical performance to achieve low-loss, low-crosstalk, and flat-passband response. We demonstrated the usability of this model through investigations of filter performance. We also compared the filter spectrum given by this model with that given by simulation using the beam propagation method",
"title": ""
},
{
"docid": "9065f203a7efd45d2b928f3fd6be3876",
"text": "•An interaction between warfarin and cannabidiol is described•The mechanisms of cannabidiol and warfarin metabolism are reviewed•Mechanism of the interaction is proposed•INR should be monitored in patients when cannabinoids are introduced.",
"title": ""
},
{
"docid": "7da83f5d7bc383e5a2b791a2d45e6422",
"text": "Generating logical form equivalents of human language is a fresh way to employ neural architectures where long shortterm memory effectively captures dependencies in both encoder and decoder units. The logical form of the sequence usually preserves information from the natural language side in the form of similar tokens, and recently a copying mechanism has been proposed which increases the probability of outputting tokens from the source input through decoding. In this paper we propose a caching mechanism as a more general form of the copying mechanism which also weighs all the words from the source vocabulary according to their relation to the current decoding context. Our results confirm that the proposed method achieves improvements in sequence/token-level accuracy on sequence to logical form tasks. Further experiments on cross-domain adversarial attacks show substantial improvements when using the most influential examples of other domains for training.",
"title": ""
},
{
"docid": "b8700283c7fb65ba2e814adffdbd84f8",
"text": "Human immunoglobulin preparations for intravenous or subcutaneous administration are the cornerstone of treatment in patients with primary immunodeficiency diseases affecting the humoral immune system. Intravenous preparations have a number of important uses in the treatment of other diseases in humans as well, some for which acceptable treatment alternatives do not exist. We provide an update of the evidence-based guideline on immunoglobulin therapy, last published in 2006. Given the potential risks and inherent scarcity of human immunoglobulin, careful consideration of its indications and administration is warranted.",
"title": ""
},
{
"docid": "98a700116ba846945927a0dd8e27586b",
"text": "Automatic recording of user behavior within a system (instrumentation) to develop and test theories has a rich history in psychology and system design. Often, researchers analyze instrumented behavior in isolation from other data. The problem with collecting instrumented behaviors without attitudinal, demographic, and contextual data is that researchers have no way to answer the 'why' behind the 'what'. We have combined the collection and analysis of behavioral instrumentation with other HCI methods to develop a system for Tracking Real-Time User Experience (TRUE). Using two case studies as examples, we demonstrate how we have evolved instrumentation methodology and analysis to extensively improve the design of video games. It is our hope that TRUE is adopted and adapted by the broader HCI community, becoming a useful tool for gaining deep insights into user behavior and improvement of design for other complex systems.",
"title": ""
},
{
"docid": "a9a3d46bd6f5df951957ddc57d3d390d",
"text": "In this paper, we propose a low-power level shifter (LS) capable of converting extremely low-input voltage into high-output voltage. The proposed LS consists of a pre-amplifier with a logic error correction circuit and an output latch stage. The pre-amplifier generates complementary amplified signals, and the latch stage converts them into full-swing output signals. Simulated results demonstrated that the proposed LS in a 0.18-μm CMOS process can convert a 0.19-V input into 1.8-V output correctly. The energy and the delay time of the proposed LS were 0.24 pJ and 21.4 ns when the low supply voltage, high supply voltage, and the input pulse frequency, were 0.4, 1.8 V, and 100 kHz, respectively.",
"title": ""
},
{
"docid": "24fc1997724932c6ddc3311a529d7505",
"text": "In these days securing a network is an important issue. Many techniques are provided to secure network. Cryptographic is a technique of transforming a message into such form which is unreadable, and then retransforming that message back to its original form. Cryptography works in two techniques: symmetric key also known as secret-key cryptography algorithms and asymmetric key also known as public-key cryptography algorithms. In this paper we are reviewing different symmetric and asymmetric algorithms.",
"title": ""
},
{
"docid": "8093219e7e2b4a7067f8d96118a5ea93",
"text": "We model knowledge graphs for their completion by encoding each entity and relation into a numerical space. All previous work including Trans(E, H, R, and D) ignore the heterogeneity (some relations link many entity pairs and others do not) and the imbalance (the number of head entities and that of tail entities in a relation could be different) of knowledge graphs. In this paper, we propose a novel approach TranSparse to deal with the two issues. In TranSparse, transfer matrices are replaced by adaptive sparse matrices, whose sparse degrees are determined by the number of entities (or entity pairs) linked by relations. In experiments, we design structured and unstructured sparse patterns for transfer matrices and analyze their advantages and disadvantages. We evaluate our approach on triplet classification and link prediction tasks. Experimental results show that TranSparse outperforms Trans(E, H, R, and D) significantly, and achieves state-ofthe-art performance.",
"title": ""
},
{
"docid": "a8920f6ba4500587cf2a160b8d91331a",
"text": "In this paper, we present an approach that can handle Z-numbers in the context of multi-criteria decision-making problems. The concept of Z-number as an ordered pair Z=(A, B) of fuzzy numbers A and B is used, where A is a linguistic value of a variable of interest and B is a linguistic value of the probability measure of A. As human beings, we communicate with each other by means of natural language using sentences like “the journey from home to university most likely takes about half an hour.” The Z-numbers are converted to fuzzy numbers. Then the Z-TODIM and Z-TOPSIS are presented as a direct extension of the fuzzy TODIM and fuzzy TOPSIS, respectively. The proposed methods are applied to two case studies and compared with the standard approach using crisp values. The results obtained show the feasibility of the approach.",
"title": ""
},
{
"docid": "6c490d5a320af4ebf453fd00444aa96f",
"text": "This paper describes the use of a convolutional neural network to perform address block location on machine-printed mail pieces. Locating the address block is a di cult object recognition problem because there is often a large amount of extraneous printing on a mail piece and because address blocks vary dramatically in size and shape. We used a convolutional locator network with four outputs, each trained to nd a di erent corner of the address block. A simple set of rules was used to generate ABL candidates from the network output. The system performs very well: when allowed ve guesses, the network will tightly bound the address delivery information in 98.2% of the cases.",
"title": ""
},
{
"docid": "60182038191a764fd7070e8958185718",
"text": "Shales of very low metamorphic grade from the 2.78 to 2.45 billion-year-old (Ga) Mount Bruce Supergroup, Pilbara Craton, Western Australia, were analyzed for solvent extractable hydrocarbons. Samples were collected from ten drill cores and two mines in a sampling area centered in the Hamersley Basin near Wittenoom and ranging 200 km to the southeast, 100 km to the southwest and 70 km to the northwest. Almost all analyzed kerogenous sedimentary rocks yielded solvent extractable organic matter. Concentrations of total saturated hydrocarbons were commonly in the range of 1 to 20 ppm ( g/g rock) but reached maximum values of 1000 ppm. The abundance of aromatic hydrocarbons was 1 to 30 ppm. Analysis of the extracts by gas chromatography-mass spectrometry (GC-MS) and GC-MS metastable reaction monitoring (MRM) revealed the presence of n-alkanes, midand end-branched monomethylalkanes, -cyclohexylalkanes, acyclic isoprenoids, diamondoids, trito pentacyclic terpanes, steranes, aromatic steroids and polyaromatic hydrocarbons. Neither plant biomarkers nor hydrocarbon distributions indicative of Phanerozoic contamination were detected. The host kerogens of the hydrocarbons were depleted in C by 2 to 21‰ relative ton-alkanes, a pattern typical of, although more extreme than, other Precambrian samples. Acyclic isoprenoids showed carbon isotopic depletion relative to n-alkanes and concentrations of 2 -methylhopanes were relatively high, features rarely observed in the Phanerozoic but characteristic of many other Precambrian bitumens. Molecular parameters, including sterane and hopane ratios at their apparent thermal maxima, condensate-like alkane profiles, high monoand triaromatic steroid maturity parameters, high methyladamantane and methyldiamantane indices and high methylphenanthrene maturity ratios, indicate thermal maturities in the wet-gas generation zone. Additionally, extracts from shales associated with iron ore deposits at Tom Price and Newman have unusual polyaromatic hydrocarbon patterns indicative of pyrolytic dealkylation. The saturated hydrocarbons and biomarkers in bitumens from the Fortescue and Hamersley Groups are characterized as ‘probably syngenetic with their Archean host rock’ based on their typical Precambrian molecular and isotopic composition, extreme maturities that appear consistent with the thermal history of the host sediments, the absence of biomarkers diagnostic of Phanerozoic age, the absence of younger petroleum source rocks in the basin and the wide geographic distribution of the samples. Aromatic hydrocarbons detected in shales associated with iron ore deposits at Mt Tom Price and Mt Whaleback are characterized as ‘clearly Archean’ based on their hypermature composition and covalent bonding to kerogen. Copyright © 2003 Elsevier Ltd",
"title": ""
},
{
"docid": "bc285d5113ce64686f324114ffb8e88e",
"text": "A thorough review concerning palm uses in tropical rainforests of north-western South America was carried out to understand patterns of palm use throughout ecoregions (Amazonia, Andes, Chocó), countries (Colombia, Ecuador, Peru, Bolivia), and among the different human groups (indigenous, mestizos, afroamericans, colonos) that occur there. A total of 194 useful palm species, 2,395 different uses and 6,141 use-reports were recorded from 255 references. The Amazon had the highest palm use, whereas fewer, but similar uses were recorded for the Andes and Chocó. Ecuador was the most intensively studied country. Most palms were used for human food, utensils and tools, construction, and cultural purposes. Indigenous people knew more palm uses than mestizos, afroamericans and colonos. The use of palms was not random and the main uses were the same throughout the studied ecoregions and countries. Palms satisfy basic subsistence needs and have great importance in traditional cultures of rural indigenous and peasant populations in our study area. Arecaceae is probably the most important plant family in the Neotropics, in relation to use diversity and abundance. Se realizó una revisión exhaustiva de los usos de las palmeras en los bosques tropicales lluviosos del noroeste de América del Sur para comprender los patrones de uso de las palmeras por ecorregiones (Amazonia, Andes, Chocó), países (Colombia, Ecuador, Perú, Bolivia) y entre los diferentes grupos humanos (indígenas, mestizos, afroamericanos, colonos) existentes. Se registraron 194 especies de palmeras útiles, 2,395 usos distintos y 6,141 registros de uso a partir de 255 referencias. La Amazonia tuvo el uso más alto de palmeras, mientras que en los Andes y el Chocó se encontraron menores usos aunque similares. Ecuador fue el país que se estudió más intensamente. La mayoría de las especies se usaron para alimentación humana, utensilios y herramientas, construcción y usos culturales. Los indígenas conocieron más usos de palmeras que los mestizos, afroamericanos y colonos. El uso de las palmeras no fue al azar y los usos principales fueron los mismos en todas las ecorregiones y países estudiados. Las palmeras cubren necesidades básicas de subsistencia y tienen una gran importancia en las culturas tradicionales de las poblaciones indígenas y campesinas rurales en nuestra área de estudio. Arecaceae es probablemente la familia de plantas más importante del Neotrópico, en relación a su diversidad y abundancia de usos.",
"title": ""
}
] |
scidocsrr
|
76a6865f47eb027c3f0399e2c49e81d0
|
Lifecycle Management in the Smart City Context: Smart Parking Use-Case
|
[
{
"docid": "a36d019f5016d0e86ac8d7c412a3c9fd",
"text": "Increasing population density in urban centers demands adequate provision of services and infrastructure to meet the needs of city inhabitants, encompassing residents, workers, and visitors. The utilization of information and communications technologies to achieve this objective presents an opportunity for the development of smart cities, where city management and citizens are given access to a wealth of real-time information about the urban environment upon which to base decisions, actions, and future planning. This paper presents a framework for the realization of smart cities through the Internet of Things (IoT). The framework encompasses the complete urban information system, from the sensory level and networking support structure through to data management and Cloud-based integration of respective systems and services, and forms a transformational part of the existing cyber-physical system. This IoT vision for a smart city is applied to a noise mapping case study to illustrate a new method for existing operations that can be adapted for the enhancement and delivery of important city services.",
"title": ""
}
] |
[
{
"docid": "a6c9ff64c9c007e71192eb7023c8617f",
"text": "Elderly individuals can access online 3D virtual stores from their homes to make purchases. However, most virtual environments (VEs) often elicit physical responses to certain types of movements in the VEs. Some users exhibit symptoms that parallel those of classical motion sickness, called cybersickness, both during and after the VE experience. This study investigated the factors that contribute to cybersickness among the elderly when immersed in a 3D virtual store. The results of the first experiment show that the simulator sickness questionnaire (SSQ) scores increased significantly by the reasons of navigational rotating speed and duration of exposure. Based on these results, a warning system with fuzzy control for combating cybersickness was developed. The results of the second and third experiments show that the proposed system can efficiently determine the level of cybersickness based on the fuzzy sets analysis of operating signals from scene rotating speed and exposure duration, and subsequently combat cybersickness. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6b1e67c1768f9ec7a6ab95a9369b92d1",
"text": "Autoregressive sequence models based on deep neural networks, such as RNNs, Wavenet and the Transformer attain state-of-the-art results on many tasks. However, they are difficult to parallelize and are thus slow at processing long sequences. RNNs lack parallelism both during training and decoding, while architectures like WaveNet and Transformer are much more parallelizable during training, yet still operate sequentially during decoding. We present a method to extend sequence models using discrete latent variables that makes decoding much more parallelizable. We first autoencode the target sequence into a shorter sequence of discrete latent variables, which at inference time is generated autoregressively, and finally decode the output sequence from this shorter latent sequence in parallel. To this end, we introduce a novel method for constructing a sequence of discrete latent variables and compare it with previously introduced methods. Finally, we evaluate our model end-to-end on the task of neural machine translation, where it is an order of magnitude faster at decoding than comparable autoregressive models. While lower in BLEU than purely autoregressive models, our model achieves higher scores than previously proposed non-autoregressive translation models.",
"title": ""
},
{
"docid": "deed7aab7e678b9474e11e05ebfefc04",
"text": "Ultrathin films of single-walled carbon nanotubes (SWNTs) represent an attractive, emerging class of material, with properties that can approach the exceptional electrical, mechanical, and optical characteristics of individual SWNTs, in a format that, unlike isolated tubes, is readily suitable for scalable integration into devices. These features suggest the potential for realistic applications as conducting or semiconducting layers in diverse types of electronic, optoelectronic and sensor systems. This article reviews recent advances in assembly techniques for forming such films, modeling and experimental work that reveals their collective properties, and engineering aspects of implementation in sensors and in electronic devices and circuits with various levels of complexity. A concluding discussion provides some perspectives on possibilities for future work in fundamental and applied aspects.",
"title": ""
},
{
"docid": "abdffec5ea2b05b61006cc7b6b295976",
"text": "Making recommendation requires predicting what is of interest to a user at a specific time. Even the same user may have different desires at different times. It is important to extract the aggregate interest of a user from his or her navigational path through the site in a session. This paper concentrates on the discovery and modelling of the user’s aggregate interest in a session. This approach relies on the premise that the visiting time of a page is an indicator of the user’s interest in that page. The proportion of times spent in a set of pages requested by the user within a single session forms the aggregate interest of that user in that session. We first partition user sessions into clusters such that only sessions which represent similar aggregate interest of users are placed in the same cluster. We employ a model-based clustering approach and partition user sessions according to similar amount of time in similar pages. In particular, we cluster sessions by learning a mixture of Poisson models using Expectation Maximization algorithm. The resulting clusters are then used to recommend pages to a user that are most likely contain the information which is of interest to that user at that time. Although the approach does not use the sequential patterns of transactions, experimental evaluation shows that the approach is quite effective in capturing a Web user’s access pattern. The model has an advantage over previous proposals in terms of speed and memory usage.",
"title": ""
},
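The abstract above clusters sessions by fitting a mixture of Poisson models with Expectation-Maximization over per-page visiting times. A compact sketch of that idea is given below, assuming each session is summarized as a vector of integer seconds spent on a fixed set of pages; the component count, initialization, and stopping rule are assumptions rather than the paper's settings.

```python
import numpy as np
from scipy.special import gammaln

def poisson_mixture_em(X, k=3, n_iter=100, seed=0, eps=1e-9):
    """Fit a k-component mixture of independent Poissons to an (n_sessions, n_pages)
    matrix of non-negative integer visiting times. Returns mixing weights,
    per-component Poisson rates, and soft cluster assignments."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                              # mixing weights
    lam = X[rng.choice(n, size=k, replace=False)] + eps   # init rates from random sessions

    for _ in range(n_iter):
        # E-step: log responsibility of each component for each session.
        log_p = (X @ np.log(lam).T) - lam.sum(axis=1) - gammaln(X + 1).sum(axis=1, keepdims=True)
        log_p += np.log(pi)
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: update weights and rates from the soft assignments.
        nk = resp.sum(axis=0) + eps
        pi = nk / n
        lam = (resp.T @ X) / nk[:, None] + eps

    return pi, lam, resp

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Two synthetic "interest profiles" over 4 pages.
    a = rng.poisson([30, 5, 1, 1], size=(50, 4))
    b = rng.poisson([2, 2, 25, 10], size=(50, 4))
    X = np.vstack([a, b])
    pi, lam, resp = poisson_mixture_em(X, k=2)
    print(np.round(lam, 1))   # recovered per-cluster page-time rates
```

A log-likelihood-based stopping criterion and several random restarts would normally replace the fixed iteration count used here.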
{
"docid": "8ddf6f978cfa3e4352c607a8e4d6d66a",
"text": "Due to the ability of encoding and mapping semantic information into a highdimensional latent feature space, neural networks have been successfully used for detecting events to a certain extent. However, such a feature space can be easily contaminated by spurious features inherent in event detection. In this paper, we propose a self-regulated learning approach by utilizing a generative adversarial network to generate spurious features. On the basis, we employ a recurrent network to eliminate the fakes. Detailed experiments on the ACE 2005 and TAC-KBP 2015 corpora show that our proposed method is highly effective and adaptable.",
"title": ""
},
{
"docid": "b0103474ecd369a9f0ba637c34bacc56",
"text": "BACKGROUND\nThe Internet Addiction Test (IAT) by Kimberly Young is one of the most utilized diagnostic instruments for Internet addiction. Although many studies have documented psychometric properties of the IAT, consensus on the optimal overall structure of the instrument has yet to emerge since previous analyses yielded markedly different factor analytic results.\n\n\nOBJECTIVE\nThe objective of this study was to evaluate the psychometric properties of the Italian version of the IAT, specifically testing the factor structure stability across cultures.\n\n\nMETHODS\nIn order to determine the dimensional structure underlying the questionnaire, both exploratory and confirmatory factor analyses were performed. The reliability of the questionnaire was computed by the Cronbach alpha coefficient.\n\n\nRESULTS\nData analyses were conducted on a sample of 485 college students (32.3%, 157/485 males and 67.7%, 328/485 females) with a mean age of 24.05 years (SD 7.3, range 17-47). Results showed 176/485 (36.3%) participants with IAT score from 40 to 69, revealing excessive Internet use, and 11/485 (1.9%) participants with IAT score from 70 to 100, suggesting significant problems because of Internet use. The IAT Italian version showed good psychometric properties, in terms of internal consistency and factorial validity. Alpha values were satisfactory for both the one-factor solution (Cronbach alpha=.91), and the two-factor solution (Cronbach alpha=.88 and Cronbach alpha=.79). The one-factor solution comprised 20 items, explaining 36.18% of the variance. The two-factor solution, accounting for 42.15% of the variance, showed 11 items loading on Factor 1 (Emotional and Cognitive Preoccupation with the Internet) and 7 items on Factor 2 (Loss of Control and Interference with Daily Life). Goodness-of-fit indexes (NNFI: Non-Normed Fit Index; CFI: Comparative Fit Index; RMSEA: Root Mean Square Error of Approximation; SRMR: Standardized Root Mean Square Residual) from confirmatory factor analyses conducted on a random half subsample of participants (n=243) were satisfactory in both factorial solutions: two-factor model (χ²₁₃₂= 354.17, P<.001, χ²/df=2.68, NNFI=.99, CFI=.99, RMSEA=.02 [90% CI 0.000-0.038], and SRMR=.07), and one-factor model (χ²₁₆₉=483.79, P<.001, χ²/df=2.86, NNFI=.98, CFI=.99, RMSEA=.02 [90% CI 0.000-0.039], and SRMR=.07).\n\n\nCONCLUSIONS\nOur study was aimed at determining the most parsimonious and veridical representation of the structure of Internet addiction as measured by the IAT. Based on our findings, support was provided for both single and two-factor models, with slightly strong support for the bidimensionality of the instrument. Given the inconsistency of the factor analytic literature of the IAT, researchers should exercise caution when using the instrument, dividing the scale into factors or subscales. Additional research examining the cross-cultural stability of factor solutions is still needed.",
"title": ""
},
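Internal consistency in the abstract above is reported as Cronbach's alpha, which follows from the item variances and the variance of the summed scale: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A small illustration on synthetic data (unrelated to the IAT sample) is shown below.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) array of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 1))
    items = latent + 0.7 * rng.normal(size=(200, 20))   # 20 items sharing one latent factor
    print(round(cronbach_alpha(items), 2))
```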
{
"docid": "42325b507cb2529187a870e30ab727f2",
"text": "Most sentence embedding models typically represent each sentence only using word surface, which makes these models indiscriminative for ubiquitous homonymy and polysemy. In order to enhance representation capability of sentence, we employ conceptualization model to assign associated concepts for each sentence in the text corpus, and then learn conceptual sentence embedding (CSE). Hence, this semantic representation is more expressive than some widely-used text representation models such as latent topic model, especially for short-text. Moreover, we further extend CSE models by utilizing a local attention-based model that select relevant words within the context to make more efficient prediction. In the experiments, we evaluate the CSE models on two tasks, text classification and information retrieval. The experimental results show that the proposed models outperform typical sentence embed-ding models.",
"title": ""
},
{
"docid": "aca8b1efb729bdc45f5363cb663dba74",
"text": "Along with the burst of open source projects, software theft (or plagiarism) has become a very serious threat to the healthiness of software industry. Software birthmark, which represents the unique characteristics of a program, can be used for software theft detection. We propose a system call dependence graph based software birthmark called SCDG birthmark, and examine how well it reflects unique behavioral characteristics of a program. To our knowledge, our detection system based on SCDG birthmark is the first one that is capable of detecting software component theft where only partial code is stolen. We demonstrate the strength of our birthmark against various evasion techniques, including those based on different compilers and different compiler optimization levels as well as two state-of-the-art obfuscation tools. Unlike the existing work that were evaluated through small or toy software, we also evaluate our birthmark on a set of large software. Our results show that SCDG birthmark is very practical and effective in detecting software theft that even adopts advanced evasion techniques.",
"title": ""
},
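The birthmark above compares system call dependence graphs between a plaintiff program and a suspect program. The toy sketch below only measures how many labeled dependence edges of the plaintiff graph reappear in the suspect graph; it is a crude stand-in for the paper's subgraph-isomorphism-based comparison, and the system-call names are invented for illustration.

```python
import networkx as nx

def scdg(edges):
    """Build a system-call dependence graph from (producer_syscall, consumer_syscall) pairs."""
    g = nx.DiGraph()
    g.add_edges_from(edges)
    return g

def containment_score(plaintiff, suspect):
    """Fraction of the plaintiff graph's dependence edges that also appear in the
    suspect graph; a toy proxy for the paper's subgraph-based birthmark comparison."""
    p_edges = set(plaintiff.edges())
    s_edges = set(suspect.edges())
    if not p_edges:
        return 0.0
    return len(p_edges & s_edges) / len(p_edges)

if __name__ == "__main__":
    original = scdg([("open", "read"), ("read", "write"), ("write", "close")])
    stolen   = scdg([("open", "read"), ("read", "write"), ("write", "close"), ("socket", "connect")])
    other    = scdg([("socket", "connect"), ("connect", "send")])
    print(containment_score(original, stolen))  # close to 1.0 suggests reuse
    print(containment_score(original, other))   # close to 0.0 suggests unrelated code
```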
{
"docid": "83baf6ee5aa8c715212d9d08a349cceb",
"text": "In this paper presented a CPW-fed monopole antenna,which design for WiFi and 4G (LTE) utilizations. The proposed antenna contains a U- slot in the circular patch with gradually varied ground to improve the impedance match, and it occupies a compact size of 42*28.38*1.5mm3. Through simulations and measurements, the proposed antenna with dual-band operation is suitable to for 4G LTE and WiFi utilizations in the 2.28GHz to 2.82GHz band and 3.87GHz to 6.0GHz band.",
"title": ""
},
{
"docid": "b2589260e4e8d26df598bb873646b7ec",
"text": "In this paper, the performance of a topological-metric visual-path-following framework is investigated in different environments. The framework relies on a monocular camera as the only sensing modality. The path is represented as a series of reference images such that each neighboring pair contains a number of common landmarks. Local 3-D geometries are reconstructed between the neighboring reference images to achieve fast feature prediction. This condition allows recovery from tracking failures. During navigation, the robot is controlled using image-based visual servoing. The focus of this paper is on the results from a number of experiments that were conducted in different environments, lighting conditions, and seasons. The experiments with a robot car show that the framework is robust to moving objects and moderate illumination changes. It is also shown that the system is capable of online path learning.",
"title": ""
},
{
"docid": "d6d07f50778ba3d99f00938b69fe0081",
"text": "The use of metal casing is attractive to achieve robustness of modern slim tablet devices. The metal casing includes the metal back cover and the metal frame around the edges thereof. For such metal-casing tablet devices, the frame antenna that uses a part of the metal frame as an antenna's radiator is promising to achieve wide bandwidths for mobile communications. In this paper, the frame antenna based on the simple half-loop antenna structure to cover the long-term evolution 746-960 and 1710-2690 MHz bands is presented. The half-loop structure for the frame antenna is easy for manufacturing and increases the robustness of the metal casing. The dual-wideband operation of the half-loop frame antenna is obtained by using an elevated feed network supported by a thin feed substrate. The measured antenna efficiencies are, respectively, 45%-69% and 60%-83% in the low and high bands. By selecting different feed circuits, the antenna's low band can also be shifted from 746-960 MHz to lower frequencies such as 698-840 MHz, with the antenna's high-band coverage very slightly varied. The working principle of the antenna with the elevated feed network is discussed. The antenna is also fabricated and tested, and experimental results are presented.",
"title": ""
},
{
"docid": "9b70a12243bdd0aaece4268dd32935b1",
"text": "PURPOSE\nOvertraining is primarily related to sustained high load training, often coupled with other stressors. Studies in animal models have suggested that unremittingly heavy training (monotonous training) may increase the likelihood of developing overtraining syndrome. The purpose of this study was to extend our preliminary observations by relating the incidence of illnesses and minor injuries to various indices of training.\n\n\nMETHODS\nWe report observations of the relationship of banal illnesses (a frequently cited marker of overtraining syndrome) to training load and training monotony in experienced athletes (N = 25). Athletes recorded their training using a method that integrates the exercise session RPE and the duration of the training session. Illnesses were noted and correlated with indices of training load (rolling 6 wk average), monotony (daily mean/standard deviation), and strain (load x monotony).\n\n\nRESULTS\nIt was observed that a high percentage of illnesses could be accounted for when individual athletes exceeded individually identifiable training thresholds, mostly related to the strain of training.\n\n\nCONCLUSIONS\nThese suggest that simple methods of monitoring the characteristics of training may allow the athlete to achieve the goals of training while minimizing undesired training outcomes.",
"title": ""
},
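The training indices used above reduce to simple arithmetic: session load is session RPE times duration, monotony is the mean daily load divided by its standard deviation, and strain is load times monotony. A small sketch under that reading follows; the rolling six-week averaging mentioned in the abstract is omitted, and the sample week is invented.

```python
import statistics

def session_load(rpe, duration_min):
    """Session load as session RPE multiplied by duration in minutes."""
    return rpe * duration_min

def weekly_summary(daily_loads):
    """Weekly load, monotony (mean/SD of daily load) and strain (load x monotony)."""
    load = sum(daily_loads)
    mean = statistics.mean(daily_loads)
    sd = statistics.stdev(daily_loads)
    monotony = mean / sd if sd > 0 else float("inf")
    return load, monotony, load * monotony

if __name__ == "__main__":
    # One invented week: (RPE, minutes) per day; (0, 0) marks a rest day.
    sessions = [(7, 60), (6, 45), (0, 0), (8, 90), (5, 30), (0, 0), (7, 75)]
    daily = [session_load(r, d) for r, d in sessions]
    load, monotony, strain = weekly_summary(daily)
    print(load, round(monotony, 2), round(strain, 1))
```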
{
"docid": "f786fe8bd38c4af2541f162c569dbc23",
"text": "Increasingly, stakeholders are asking or requiring organizations to be more environmentally responsible with respect to their products and processes; reasons include regulatory requirements, product stewardship, public image, and potential competitive advantages. This paper presents an exploratory study of the relationships between specific environmentally sustainable manufacturing practices, and specific competitive outcomes in an environmentally important but under-researched industry, the U.S. commercial carpet industry. In general, empirical research on the impact of environmental practices on organizational outcomes is inconclusive, partly due to limitations of earlier studies. This paper addresses some of these limitations, and surveys the entire U.S. commercial carpet industry; respondents represent 84 of the market. Findings suggest that environmentally sustainable manufacturing practices may be positively associated with competitive outcomes. In particular, different types of environmentally sustainable manufacturing practices (e.g., pollution prevention, product stewardship) are associated with different competitive outcomes (e.g., manufacturing cost, product quality). These specific findings can be helpful to engineering and operations managers as they respond to environmental and competitive demands.",
"title": ""
},
{
"docid": "cf0d5d3877bf26822c2196a3a17bd073",
"text": "The purpose of this paper is to review existing sensor and sensor network ontologies to understand whether they can be reused as a basis for a manufacturing perception sensor ontology, or if the existing ontologies hold lessons for the development of a new ontology. We develop an initial set of requirements that should apply to a manufacturing perception sensor ontology. These initial requirements are used in reviewing selected existing sensor ontologies. This paper describes the steps for 1) extending and refining the requirements; 2) proposing hierarchical structures for verifying the purposes of the ontology; and 3) choosing appropriate tools and languages to support such an ontology. Some languages could include OWL (Web Ontology Language) [1] and SensorML (Sensor Markup Language) [2]. This work will be proposed as a standard within the IEEE Robotics and Automation Society (RAS) Ontologies for Robotics Automation (ORA) Working Group [3]. 1. Overview of Sensor Ontology Effort Next generation robotic systems for manufacturing must perform highly complex tasks in dynamic environments. To improve return on investment, manufacturing robots and automation must become more flexible and adaptable, and less dependent on blind, repetitive motions in a structured, fixed environment. To become more adaptable, robots need both precise sensing for parts and assemblies, so they can focus on specific tasks in which they must interact with and manipulate objects; and situational awareness, so they can robustly sense their entire environment for long-term planning and short-term safety. Meeting these requirements will need advances in sensing and perception systems that can identify and locate objects, can detect people and obstacles, and, in general, can perceive as many elements of the manufacturing environment as needed for operation. To robustly and accurately perceive many elements of the environment will require a wide range of collaborating smart sensors such as cameras, laser scanners, stereo cameras, and others. In many cases these sensors will need to be integrated into a distributed sensor network that offers extensive coverage of a manufacturing facility by sensors of complementary capabilities. To support the development of these sensors and networks, the National Institute of Standards and Technology (NIST) manufacturing perception sensor ontology effort looks to create an ontology of sensors, sensor networks, sensor capabilities, environmental objects, and environmental conditions so as to better define and anticipate the wide range of perception systems needed. The ontology will include:",
"title": ""
},
{
"docid": "8c9c9ad5e3d19b56a096e519cc6e3053",
"text": "Cebocephaly and sirenomelia are uncommon birth defects. Their association is extremely rare; however, the presence of spina bifida with both conditions is not unexpected. We report on a female still-birth with cebocephaly, alobar holoprosencephaly, cleft palate, lumbar spina bifida, sirenomelia, a single umbilical artery, and a 46,XX karyotype, but without maternal diabetes mellitus. Our case adds to the examples of overlapping cephalic and caudal defects, possibly related to vulnerability of the midline developmental field or axial mesodermal dysplasia spectrum.",
"title": ""
},
{
"docid": "e500cd3df03ff2d01d27bc012e332b3a",
"text": "Received Nov 13,2012 Revised Jan 05, 2013 Accepted Jan 12,2013 In this paper, we have proposed a framework to count the moving person in the video automatically in a very dense crowd situation. Median filter is used to segment the foreground from the background and blob analysis is done to count the people in the current frame. Optimization of different parameters is done by using genetic algorithm. This framework is used to count the people in the video recorded in the mattaf area where different crowd densities can be observed. An overall people counting accuracy of more than 96% is obtained. Keyword:",
"title": ""
},
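The counting pipeline above segments the foreground against a background model and counts blobs in each frame. A minimal sketch of that idea, using a temporal-median background, a fixed difference threshold, and a minimum blob area (parameters the paper instead tunes with a genetic algorithm), is shown below.

```python
import numpy as np
from scipy import ndimage

def count_people(frames, frame, diff_thresh=30, min_area=150):
    """Count foreground blobs in `frame` against a temporal-median background.
    `frames` is a (t, h, w) grayscale buffer; thresholds are illustrative and
    would need tuning for real footage."""
    background = np.median(frames, axis=0)
    foreground = np.abs(frame.astype(float) - background) > diff_thresh
    labels, n = ndimage.label(foreground)                        # connected components
    areas = ndimage.sum(foreground, labels, index=range(1, n + 1))
    return int(np.sum(np.asarray(areas) >= min_area))            # blobs large enough to be a person

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    buf = rng.integers(0, 5, size=(10, 120, 160)).astype(np.uint8)   # static, noisy background
    frame = buf[-1].copy()
    frame[40:70, 50:65] = 200      # one synthetic "person"
    frame[80:110, 100:115] = 210   # another
    print(count_people(buf[:-1], frame))   # expected: 2
```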
{
"docid": "32670b62c6f6e7fa698e00f7cf359996",
"text": "Four cases of self-poisoning with 'Roundup' herbicide are described, one of them fatal. One of the survivors had a protracted hospital stay and considerable clinical and laboratory detail is presented. Serious self-poisoning is associated with massive gastrointestinal fluid loss and renal failure. The management of such cases and the role of surfactant toxicity are discussed.",
"title": ""
},
{
"docid": "704d068f791a8911068671cb3dca7d55",
"text": "Most models of visual search, whether involving overt eye movements or covert shifts of attention, are based on the concept of a saliency map, that is, an explicit two-dimensional map that encodes the saliency or conspicuity of objects in the visual environment. Competition among neurons in this map gives rise to a single winning location that corresponds to the next attended target. Inhibiting this location automatically allows the system to attend to the next most salient location. We describe a detailed computer implementation of such a scheme, focusing on the problem of combining information across modalities, here orientation, intensity and color information, in a purely stimulus-driven manner. The model is applied to common psychophysical stimuli as well as to a very demanding visual search task. Its successful performance is used to address the extent to which the primate visual system carries out visual search via one or more such saliency maps and how this can be tested.",
"title": ""
},
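The model above combines normalized feature maps into a single saliency map, selects the most salient location by winner-take-all, and then inhibits it so attention can move on. A heavily simplified sketch of that loop follows; the center-surround pyramids and the normalization operator of the full model are not reproduced, and the map sizes and inhibition radius are arbitrary.

```python
import numpy as np

def attend_sequence(feature_maps, n_fixations=3, inhibit_radius=5):
    """Combine feature maps into a saliency map and return successive attended
    locations via winner-take-all with inhibition of return."""
    # Normalize each map to [0, 1] and average them into one saliency map.
    norm = []
    for m in feature_maps:
        m = m.astype(float)
        span = m.max() - m.min()
        norm.append((m - m.min()) / span if span > 0 else np.zeros_like(m))
    saliency = np.mean(norm, axis=0)

    fixations = []
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(saliency), saliency.shape)  # winner-take-all
        fixations.append((y, x))
        # Inhibition of return: suppress a disc around the attended location.
        yy, xx = np.ogrid[:saliency.shape[0], :saliency.shape[1]]
        saliency[(yy - y) ** 2 + (xx - x) ** 2 <= inhibit_radius ** 2] = 0.0
    return fixations

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    intensity = rng.random((40, 40)); intensity[10, 30] = 5.0
    color     = rng.random((40, 40)); color[25, 5] = 4.0
    orient    = rng.random((40, 40))
    print(attend_sequence([intensity, color, orient]))
```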
{
"docid": "955882547c8d7d455f3d0a6c2bccd2b4",
"text": "Recently there has been quite a number of independent research activities that investigate the potentialities of integrating social networking concepts into Internet of Things (IoT) solutions. The resulting paradigm, named Social Internet of Things (SIoT), has the potential to support novel applications and networking services for the IoT in more effective and efficient ways. In this context, the main contributions of this paper are the following: i) we identify appropriate policies for the establishment and the management of social relationships between objects in such a way that the resulting social network is navigable; ii) we describe a possible architecture for the IoT that includes the functionalities required to integrate things into a social network; iii) we analyze the characteristics of the SIoT network structure by means of simulations.",
"title": ""
},
{
"docid": "a68ccab91995603b3dbb54e014e79091",
"text": "Qualitative models arising in artificial intelligence domain often concern real systems that are difficult to represent with traditional means. However, some promise for dealing with such systems is offered by research in simulation methodology. Such research produces models that combine both continuous and discrete-event formalisms. Nevertheless, the aims and approaches of the AI and the simulation communities remain rather mutually ill understood. Consequently, there is a need to bridge theory and methodology in order to have a uniform language when either analyzing or reasoning about physical systems. This article introduces a methodology and formalism for developing multiple, cooperative models of physical systems of the type studied in qualitative physics. The formalism combines discrete-event and continuous models and offers an approach to building intelligent machines capable of physical modeling and reasoning.",
"title": ""
}
] |
scidocsrr
|
f40308d0c7bc2e34edebcbef8b5dedad
|
Predicting Time Series with Space-Time Convolutional and Recurrent Neural Networks
|
[
{
"docid": "15d932b1344d48f13dfbb5e7625b22ad",
"text": "Predictive modeling of human or humanoid movement becomes increasingly complex as the dimensionality of those movements grows. Dynamic Movement Primitives (DMP) have been shown to be a powerful method of representing such movements, but do not generalize well when used in configuration or task space. To solve this problem we propose a model called autoencoded dynamic movement primitive (AE-DMP) which uses deep autoencoders to find a representation of movement in a latent feature space, in which DMP can optimally generalize. The architecture embeds DMP into such an autoencoder and allows the whole to be trained as a unit. To further improve the model for multiple movements, sparsity is added for the feature layer neurons; therefore, various movements can be observed clearly in the feature space. After training, the model finds a single hidden neuron from the sparsity that can efficiently generate new movements. Our experiments clearly demonstrate the efficiency of missing data imputation using 50-dimensional human movement data.",
"title": ""
},
{
"docid": "2febfc549459450164bfa89f0a6ca964",
"text": "This paper discusses the effectiveness of deep auto-encoder neural networks in visual reinforcement learning (RL) tasks. We propose a framework for combining the training of deep auto-encoders (for learning compact feature spaces) with recently-proposed batch-mode RL algorithms (for learning policies). An emphasis is put on the data-efficiency of this combination and on studying the properties of the feature spaces automatically constructed by the deep auto-encoders. These feature spaces are empirically shown to adequately resemble existing similarities and spatial relations between observations and allow to learn useful policies. We propose several methods for improving the topology of the feature spaces making use of task-dependent information. Finally, we present first results on successfully learning good control policies directly on synthesized and real images.",
"title": ""
},
{
"docid": "f148ed07ef31d81eee08fd0f5a6b6ea8",
"text": "Cyber-physical systems often consist of entities that interact with each other over time. Meanwhile, as part of the continued digitization of industrial processes, various sensor technologies are deployed that enable us to record time-varying attributes (a.k.a., time series) of such entities, thus producing correlated time series. To enable accurate forecasting on such correlated time series, this paper proposes two models that combine convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The first model employs a CNN on each individual time series, combines the convoluted features, and then applies an RNN on top of the convoluted features in the end to enable forecasting. The second model adds additional auto-encoders into the individual CNNs, making the second model a multi-task learning model, which provides accurate and robust forecasting. Experiments on a large real-world correlated time series data set suggest that the proposed two models are effective and outperform baselines in most settings.",
"title": ""
}
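The first model above applies a CNN to each individual series, concatenates the convolved features, and runs an RNN on top for forecasting. One plausible reading of that architecture in PyTorch is sketched below; the layer sizes are arbitrary, the auto-encoder branch of the second model is omitted, and no claim is made that this matches the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ConvRNNForecaster(nn.Module):
    """Per-series 1-D convolutions -> feature concatenation -> GRU -> one-step forecast."""
    def __init__(self, n_series, window, conv_channels=8, hidden=32):
        super().__init__()
        # One small CNN per individual time series.
        self.convs = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(1, conv_channels, kernel_size=3, padding=1),
                nn.ReLU(),
            )
            for _ in range(n_series)
        ])
        self.rnn = nn.GRU(input_size=n_series * conv_channels,
                          hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_series)

    def forward(self, x):
        # x: (batch, n_series, window)
        feats = [conv(x[:, i:i + 1, :]) for i, conv in enumerate(self.convs)]
        feats = torch.cat(feats, dim=1)           # (batch, n_series * channels, window)
        out, _ = self.rnn(feats.transpose(1, 2))  # (batch, window, hidden)
        return self.head(out[:, -1, :])           # next value of every series

if __name__ == "__main__":
    model = ConvRNNForecaster(n_series=4, window=24)
    x = torch.randn(16, 4, 24)
    print(model(x).shape)   # torch.Size([16, 4])
```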
] |
[
{
"docid": "1719ad98795f32a55f4e920e075ee798",
"text": "BACKGROUND\nUrinary tract infections (UTIs) are one of main health problems caused by many microorganisms, including uropathogenic Escherichia coli (UPEC). UPEC strains are the most frequent pathogens responsible for 85% and 50% of community and hospital acquired UTIs, respectively. UPEC strains have special virulence factors, including type 1 fimbriae, which can result in worsening of UTIs.\n\n\nOBJECTIVES\nThis study was performed to detect type 1 fimbriae (the FimH gene) among UPEC strains by molecular method.\n\n\nMATERIALS AND METHODS\nA total of 140 isolated E. coli strains from patients with UTI were identified using biochemical tests and then evaluated for the FimH gene by polymerase chain reaction (PCR) analysis.\n\n\nRESULTS\nThe UPEC isolates were identified using biochemical tests and were screened by PCR. The fimH gene was amplified using specific primers and showed a band about 164 bp. The FimH gene was found in 130 isolates (92.8%) of the UPEC strains. Of 130 isolates positive for the FimH gene, 62 (47.7%) and 68 (52.3%) belonged to hospitalized patients and outpatients, respectively.\n\n\nCONCLUSIONS\nThe results of this study indicated that more than 90% of E. coli isolates harbored the FimH gene. The high binding ability of FimH could result in the increased pathogenicity of E. coli; thus, FimH could be used as a possible diagnostic marker and/or vaccine candidate.",
"title": ""
},
{
"docid": "4a518f4cdb34f7cff1d75975b207afe4",
"text": "In this paper, the design and measurement results of a highly efficient 1-Watt broadband class J SiGe power amplifier (PA) at 700 MHz are reported. Comparisons between a class J PA and a traditional class AB/B PA have been made, first through theoretical analysis in terms of load network, efficiency and bandwidth behavior, and secondly by bench measurement data. A single-ended power cell is designed and fabricated in the 0.35 μm IBM 5PAe SiGe BiCMOS technology with through-wafer-vias (TWVs). Watt-level output power with greater than 50% efficiency is achieved on bench across a wide bandwidth of 500 MHz to 900 MHz for the class J PA (i.e., >;57% bandwidth at the center frequency of 700 MHz). Psat of 30.9 dBm with 62% collector efficiency (CE) at 700 MHz is measured while the highest efficiency of 68.9% occurs at 650 MHz using a 4.2 V supply. Load network of this class J PA is realized with lumped passive components on a FR4 printed circuit board (PCB). A narrow-band class AB PA counterpart is also designed and fabricated for comparison. The data suggests that the broadband class J SiGe PA can be promising for future multi-band wireless applications.",
"title": ""
},
{
"docid": "bc4b545faba28a81202e3660c32c7ec2",
"text": "This paper describes a novel two-stage fully-differential CMOS amplifier comprising two self-biased inverter stages, with optimum compensation and high efficiency. Although it relies on a class A topology, it is shown through simulations, that it achieves the highest efficiency of its class and comparable to the best class AB amplifiers. Due to the self-biasing, a low variability in the DC gain over process, temperature, and supply is achieved. A detailed circuit analysis, a design methodology for optimization and the most relevant simulation results are presented, together with a final comparison among state-of-the-art amplifiers.",
"title": ""
},
{
"docid": "0d0d17a820bee23b7b6bcf804d4457dc",
"text": "The proponents of graphical programming (that is using graphics to program a computer, not programming a computer to do graphics) claim graphical programming is better than text-based programming; however text-based programmers far out number graphics-based programmers. This paper describes the preliminary developments of comparing the use of LabVIEW (a graphical programming language) to MATLAB (a text-based language) in teaching discrete-time signal processing (DSP). This paper presents the results of using both methods in a junior-level introduction to DSP class. The students who enter this class have had a course in continuous-time signals and systems but no DSP theory background. Several quarters of concept inventory data have been collected on the MATLAB version of the class. The same inventory was used with the LabVIEW version of the class and the results compared",
"title": ""
},
{
"docid": "5b51fb07c0c8c9317ee2c81c54ba4c60",
"text": "Aim The aim of this paper is to explore the role of values-based service for sustainable business. The two basic questions addressed are: What is ‘values-based service’? How can values create value for customers and other stakeholders? Design/ methodology/ approach This paper is based on extensive empirical studies focusing on the role of values at the corporate, country and store levels in the retail company IKEA and a comparison of the results with data from Starbucks, H&M and Body Shop. The theoretical point of departure is a business model based on the service-dominant logic (SDL) on the one hand and control through values focusing on social and environmental values forming the basis for a sustainable business. Findings Based on a comparative, inductive empirical analysis, five principles for a sustainable values-based service business were identified: (1) Strong company values drive customer value, (2) CSR as a strategy for sustainable service business, (3) Values-based service experience for co-creating value with customers, (4) Values-based service brand and communication for values resonance and (5) Values-based service leadership for living the values. A company built on an entrepreneurial business model often has the original entrepreneur’s values and leadership style as a model for future generations of leaders. However, the challenge for subsequent leaders is to develop these values and communicate what they mean today. Orginality/ value We suggest a new framework for managing values-based service to create a sustainable business based on values resonance.",
"title": ""
},
{
"docid": "ecbdbb838bd183ec7a695a2999d8d157",
"text": "This paper presents three different concepts of Ku-band low profile antennas for mobile satcom. First, a low profile fully active phased array aiming to Ku-Band broadcast reception is presented. Two other hybrid phased arrays are presented, one aiming to receive only applications and one for Tx/Rx operations. All the three presented antennas are low profile and suitable for in-vehicle integration.",
"title": ""
},
{
"docid": "272d83db41293889d9ca790717983193",
"text": "The ability to measure the level of customer satisfaction with online shopping is essential in gauging the success and failure of e-commerce. To do so, Internet businesses must be able to determine and understand the values of their existing and potential customers. Hence, it is important for IS researchers to develop and validate a diverse array of metrics to comprehensively capture the attitudes and feelings of online customers. What factors make online shopping appealing to customers? What customer values take priority over others? This study’s purpose is to answer these questions, examining the role of several technology, shopping, and product factors on online customer satisfaction. This is done using a conjoint analysis of consumer preferences based on data collected from 188 young consumers. Results indicate that the three most important attributes to consumers for online satisfaction are privacy (technology factor), merchandising (product factor), and convenience (shopping factor). These are followed by trust, delivery, usability, product customization, product quality, and security. Implications of these findings are discussed and suggestions for future research are provided.",
"title": ""
},
{
"docid": "8c232cd0cea7714dde71669024d3d811",
"text": "This paper addresses the problem of finding the K closest pairs between two spatial data sets, where each set is stored in a structure belonging in the R-tree family. Five different algorithms (four recursive and one iterative) are presented for solving this problem. The case of 1 closest pair is treated as a special case. An extensive study, based on experiments performed with synthetic as well as with real point data sets, is presented. A wide range of values for the basic parameters affecting the performance of the algorithms, especially the effect of overlap between the two data sets, is explored. Moreover, an algorithmic as well as an experimental comparison with existing incremental algorithms addressing the same problem is presented. In most settings, the new algorithms proposed clearly outperform the existing ones.",
"title": ""
},
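The abstract above finds the K closest pairs between two spatial datasets indexed by R-trees. As a small-scale reference for checking results, the brute-force baseline below keeps the K smallest inter-set distances in a bounded heap; it ignores the R-tree pruning that is the point of the paper's algorithms.

```python
import heapq
import math

def k_closest_pairs(points_a, points_b, k):
    """Return the k closest (distance, a, b) pairs between two 2-D point sets.
    Brute force with a bounded max-heap; intended only as a small-scale check."""
    heap = []  # max-heap of size k, stored as (-distance, a, b)
    for a in points_a:
        for b in points_b:
            d = math.dist(a, b)
            if len(heap) < k:
                heapq.heappush(heap, (-d, a, b))
            elif d < -heap[0][0]:
                heapq.heapreplace(heap, (-d, a, b))
    return sorted((-nd, a, b) for nd, a, b in heap)

if __name__ == "__main__":
    A = [(0, 0), (5, 5), (9, 1)]
    B = [(1, 1), (6, 5), (20, 20)]
    for d, a, b in k_closest_pairs(A, B, k=2):
        print(round(d, 3), a, b)
```

This quadratic scan is exactly the work that the R-tree-based algorithms in the paper are designed to avoid on large datasets.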
{
"docid": "e9b942c71646f2907de65c2641329a66",
"text": "In many vision based application identifying moving objects is important and critical task. For different computer vision application Background subtraction is fast way to detect moving object. Background subtraction separates the foreground from background. However, background subtraction is unable to remove shadow from foreground. Moving cast shadow associated with moving object also gets detected making it challenge for video surveillance. The shadow makes it difficult to detect the exact shape of object and to recognize the object.",
"title": ""
},
{
"docid": "06c8d56ecc9e92b106de01ad22c5a125",
"text": "Due to the reasonably acceptable performance of state-of-the-art object detectors, tracking-by-detection is a standard strategy for visual multi-object tracking (MOT). In particular, online MOT is more demanding due to its diverse applications in time-critical situations. A main issue of realizing online MOT is how to associate noisy object detection results on a new frame with previously being tracked objects. In this work, we propose a multi-object tracker method called CRF-boosting which utilizes a hybrid data association method based on online hybrid boosting facilitated by a conditional random field (CRF) for establishing online MOT. For data association, learned CRF is used to generate reliable low-level tracklets and then these are used as the input of the hybrid boosting. To do so, while existing data association methods based on boosting algorithms have the necessity of training data having ground truth information to improve robustness, CRF-boosting ensures sufficient robustness without such information due to the synergetic cascaded learning procedure. Further, a hierarchical feature association framework is adopted to further improve MOT accuracy. From experimental results on public datasets, we could conclude that the benefit of proposed hybrid approach compared to the other competitive MOT systems is noticeable.",
"title": ""
},
{
"docid": "86d58f4196ceb48e29cb143e6a157c22",
"text": "In this paper, we challenge a form of paragraph-to-question generation task. We propose a question generation system which can generate a set of comprehensive questions from a body of text. Besides the tree kernel functions to assess the grammatically of the generated questions, our goal is to rank them by using community-based question answering systems to calculate the importance of the generated questions. The main assumption behind our work is that each body of text is related to a topic of interest and it has a comprehensive information about the topic.",
"title": ""
},
{
"docid": "683bf5f20e2102903f569195c806d78c",
"text": "A recent survey among developers revealed that half plan to use HTML5 for mobile apps in the future. An earlier survey showed that access to native device APIs is the biggest shortcoming of HTML5 compared to native apps. Several different approaches exist to overcome this limitation, among them cross-compilation and packaging the HTML5 as a native app. In this paper we propose a novel approach by using a device-local service that runs on the smartphone and that acts as a gateway to the native layer for HTML5-based apps running inside the standard browser. WebSockets are used for bi-directional communication between the web apps and the device-local service. The service approach is a generalization of the packaging solution. In this paper we describe our approach and compare it with other popular ways to grant web apps access to the native API layer of the operating system.",
"title": ""
},
{
"docid": "f34b41a7f0dd902119197550b9bcf111",
"text": "Tachyzoites, bradyzoites (in tissue cysts), and sporozoites (in oocysts) are the three infectious stages of Toxoplasma gondii. The prepatent period (time to shedding of oocysts after primary infection) varies with the stage of T. gondii ingested by the cat. The prepatent period (pp) after ingesting bradyzoites is short (3-10 days) while it is long (18 days or longer) after ingesting oocysts or tachyzoites, irrespective of the dose. The conversion of bradyzoites to tachyzoites and tachyzoites to bradyzoites is biologically important in the life cycle of T. gondii. In the present paper, the pp was used to study in vivo conversion of tachyzoites to bradyzoites using two isolates, VEG and TgCkAr23. T. gondii organisms were obtained from the peritoneal exudates (pex) of mice inoculated intraperitoneally (i.p.) with these isolates and administered to cats orally by pouring in the mouth or by a stomach tube. In total, 94 of 151 cats shed oocysts after ingesting pex. The pp after ingesting pex was short (5-10 days) in 50 cats, intermediate (11-17) in 30 cats, and long (18 or higher) in 14 cats. The strain of T. gondii (VEG, TgCKAr23) or the stage (bradyzoite, tachyzoite, and sporozoite) used to initiate infection in mice did not affect the results. In addition, six of eight cats fed mice infected 1-4 days earlier shed oocysts with a short pp; the mice had been inoculated i.p. with bradyzoites of the VEG strain and their whole carcasses were fed to cats 1, 2, 3, or 4 days post-infection. Results indicate that bradyzoites may be formed in the peritoneal cavities of mice inoculated intraperitoneally with T. gondii and some bradyzoites might give rise directly to bradyzoites without converting to tachyzoites.",
"title": ""
},
{
"docid": "19eec7fce652aa9d8299c71b0a8e8bec",
"text": "To the Editor: Health care professionals have expressed concerns about the quality and veracity of information individuals receive from Internet-based sources. One area of controversy is the use of Internet sites to communicate information on immunization. YouTube is a video-sharing Internet Web site created in 2005 that provides free video streaming. It allows users to share multimedia clips that contain information related to the risks and benefits of immunization. To our knowledge, no studies have examined the content of these videos. We conducted a descriptive study to characterize the available information about immunization on YouTube. Methods. On February 20, 2007, we searched YouTube (www.youtube.com) using the keywords vaccination and immunization. We included all unique videos with Englishlanguage content that contained any message about human immunization. We extracted information on the type of video, clip length, and scientific claims made by the video. We measured the users’ interaction with these videos using view counts and the viewer reviews indicated by the starrating system from 1 star (poor) to 5 stars (awesome). Videos were categorized as negative if the main message of the video portrayed immunization negatively (eg, emphasized the risk of immunization, advocated against immunizing, promoted distrust in vaccine science, made allegations of conspiracy or collusion between supporters of vaccination and manufacturers). Videos were categorized as positive if the central message supported immunization, portraying it positively (eg, described the benefits and safety of immunizing, described immunization as a social good, or encouraged people to receive immunizations). Positive videos were labeled as public service announcements if they were made by governmental agencies or nongovernmental organizations to provide information about immunization as a service to the public. Videos were categorized as ambiguous if they either contained a debate or were ambivalent (ie, a beneficial or social good was countered by negative experiences such as anxious parents and crying infants). The scientific claims made by the videos were classified as substantiated or unsubstantiated/contradicts using as a reference standard the 2006 Canadian Immunization Guide and its human papillomavirus (HPV) vaccine and thimerosal updates, which were the most current guidelines available at the time of the search. All videos were analyzed independently by 2 researchers (J.K. and V.P.G.) and disagreements were resolved by an arbitrator (K.W.). Results. We identified and analyzed 153 videos. The weighted statistic for agreement on classification of videos was 0.93. Seventy-three (48%) of the videos were positive, 49 (32%) were negative, and 31 (20%) were ambiguous (TABLE 1). Compared with positive videos, negative videos were more likely to receive a rating, and they had a higher mean star rating and more views. Among the positive videos, public service announcements received the lowest mean (SD) ratings (2.6 [1.6] stars) and the fewest views (median, 213; interquartile range, 114-409). The most commonly discussed vaccine topic was general childhood vaccines (38 videos [25% of the total]). The most commonly discussed specific vaccine was the HPV vaccine (36 videos [24% of the total]); 20 of these were positive, 4 of which were industrysponsored. Of the HPV vaccine-related videos, 24 specifically referred to Merck or Gardasil. 
Of the negative videos, 22 (45%) conveyed messages that contradicted the reference standard. None of the positive videos made scientific statements that contradicted the reference standard. TABLE 2 lists the 5 most frequent topics and the scientific claims made. Comment. Approximately half of the videos posted were not explicitly supportive of immunization, and information in negative videos often contradicted the reference standard. The video ratings and view counts suggest the presence of a community of YouTube users critical of immunization. Clinicians therefore need to be",
"title": ""
},
{
"docid": "9dea3143e6ceac6acbb909e302744ba3",
"text": "Biometric identification has numerous advantages over conventional ID and password systems; however, the lack of anonymity and revocability of biometric templates is of concern. Several methods have been proposed to address these problems. Many of the approaches require a precise registration before matching in the anonymous domain. We introduce binary string representations of fingerprints that obviates the need for registration and can be directly matched. We describe several techniques for creating anonymous and revocable representations using these binary string representations. The match performance of these representations is evaluated using a large database of fingerprint images. We prove that given an anonymous representation, it is computationally infeasible to invert it to the original fingerprint, thereby preserving privacy. To the best of our knowledge, this is the first linear, anonymous and revocable fingerprint representation that is implicitly registered.",
"title": ""
},
{
"docid": "8724a0d439736a419835c1527f01fe43",
"text": "Shuffled frog-leaping algorithm (SFLA) is a new memetic meta-heuristic algorithm with efficient mathematical function and global search capability. Traveling salesman problem (TSP) is a complex combinatorial optimization problem, which is typically used as benchmark for testing the effectiveness as well as the efficiency of a newly proposed optimization algorithm. When applying the shuffled frog-leaping algorithm in TSP, memeplex and submemeplex are built and the evolution of the algorithm, especially the local exploration in submemeplex is carefully adapted based on the prototype SFLA. Experimental results show that the shuffled frog leaping algorithm is efficient for small-scale TSP. Particularly for TSP with 51 cities, the algorithm manages to find six tours which are shorter than the optimal tour provided by TSPLIB. The shortest tour length is 428.87 instead of 429.98 which can be found cited elsewhere.",
"title": ""
},
{
"docid": "1ab4f605d67dabd3b2815a39b6123aa4",
"text": "This paper examines and provides the theoretical evidence of the feasibility of 60 GHz mmWave in wireless body area networks (WBANs), by analyzing its properties. It has been shown that 60 GHz based communication could better fit WBANs compared to traditional 2.4 GHz based communication because of its compact network coverage, miniaturized devices, superior frequency reuse, multi-gigabyte transmission rate and the therapeutic merits for human health. Since allowing coexistence among the WBANs can enhance the efficiency of the mmWave based WBANs, we formulated the coexistence problem as a non-cooperative distributed power control game. This paper proves the existence of Nash equilibrium (NE) and derives the best response move as a solution. The efficiency of the NE is also improved by modifying the utility function and introducing a pair of pricing factors. Our simulation results indicate that the proposed pricing policy significantly improves the efficiency in terms of Pareto optimality and social optimality.",
"title": ""
},
{
"docid": "88d2fd675e5d0a53ff0834505a438164",
"text": "BACKGROUND\nMany healthcare organizations have implemented adverse event reporting systems in the hope of learning from experience to prevent adverse events and medical errors. However, a number of these applications have failed or not been implemented as predicted.\n\n\nOBJECTIVE\nThis study presents an extended technology acceptance model that integrates variables connoting trust and management support into the model to investigate what determines acceptance of adverse event reporting systems by healthcare professionals.\n\n\nMETHOD\nThe proposed model was empirically tested using data collected from a survey in the hospital environment. A confirmatory factor analysis was performed to examine the reliability and validity of the measurement model, and a structural equation modeling technique was used to evaluate the causal model.\n\n\nRESULTS\nThe results indicated that perceived usefulness, perceived ease of use, subjective norm, and trust had a significant effect on a professional's intention to use an adverse event reporting system. Among them, subjective norm had the most contribution (total effect). Perceived ease of use and subjective norm also had a direct effect on perceived usefulness and trust, respectively. Management support had a direct effect on perceived usefulness, perceived ease of use, and subjective norm.\n\n\nCONCLUSION\nThe proposed model provides a means to understand what factors determine the behavioral intention of healthcare professionals to use an adverse event reporting system and how this may affect future use. In addition, understanding the factors contributing to behavioral intent may potentially be used in advance of system development to predict reporting systems acceptance.",
"title": ""
},
{
"docid": "a81e4b95dfaa7887f66066343506d35f",
"text": "The purpose of making a “biobetter” biologic is to improve on the salient characteristics of a known biologic for which there is, minimally, clinical proof of concept or, maximally, marketed product data. There already are several examples in which second-generation or biobetter biologics have been generated by improving the pharmacokinetic properties of an innovative drug, including Neulasta® [a PEGylated, longer-half-life version of Neupogen® (filgrastim)] and Aranesp® [a longer-half-life version of Epogen® (epoetin-α)]. This review describes the use of protein fusion technologies such as Fc fusion proteins, fusion to human serum albumin, fusion to carboxy-terminal peptide, and other polypeptide fusion approaches to make biobetter drugs with more desirable pharmacokinetic profiles.",
"title": ""
}
] |
scidocsrr
|
ffb020f58ec3e8d1fd6a13fae48d5a20
|
Study on the Analysis and Optimization of Brake Disc : A Review
|
[
{
"docid": "a2de168151e28347875f2563b9043659",
"text": "In automotive engineering, the safety aspect has been considered as a number one priority in development of a new vehicle. Each single system has been studied and developed in order to meet safety requirements. Instead of having air bags, good suspension systems, good handling and safe cornering, one of the most critical systems in a vehicle is the brake system. The objective of this work is to investigate and analyze the temperature distribution of rotor disc during braking operation using ANSYS Multiphysics. The work uses the finite element analysis techniques to predict the temperature distribution on the full and ventilated brake discs and to identify the critical temperature of the rotor. The analysis also gives us the heat flux distribution for the two discs. c © 2014 University of West Bohemia. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "8f108067656cfa9524374012f9826e62",
"text": "3GPP LTE specified for 5G the support for Device-to-Device (D2D) communication in either supervised mode (controlled by the network) or unsupervised mode (independent from network). This article explores the potential of LTE D2D in a fully unsupervised mode for the broadcast of safety-of-life automotive messages. After an overview of the Proximity Service (ProSe) architecture and new D2D interfaces, it introduces a framework and the required mechanisms for unsupervised LTE D2D broadcast on the new SideLink (SL) interface, composed of (i) a multi-cell and panoperator resource reservation schema (ii) a distributed resource allocation mechanism (iii) decentralized channel congestion control for joint transmit power/scheduling optimization. The proposed scheme is first evaluated independently, then benchmarked against IEEE 802.11p. Complementary to IEEE 802.11p, unsupervised LTE D2D is an opportunity to provide redundancy for ultra-reliable broadcast of automotive safety-of-life messages.",
"title": ""
},
{
"docid": "95c666f41a0b5b0027ad3714f25e5ac2",
"text": "mlpy is a Python Open Source Machine Learning library built on top of NumPy/SciPy and the GNU Scientific Libraries. mlpy provides a wide range of state-of-the-art machine learning methods for supervised and unsupervised problems and it is aimed at finding a reasonable compromise among modularity, maintainability, reproducibility, usability and efficiency. mlpy is multiplatform, it works with Python 2 and 3 and it is distributed under GPL3 at the website http://mlpy.fbk.eu.",
"title": ""
},
{
"docid": "55f356248f8ebf2174636cbedeaceaf3",
"text": "In in this paper, we propose a new model regarding foreground and shadow detection in video sequences. The model works without detailed a priori object-shape information, and it is also appropriate for low and unstable frame rate video sources. Contribution is presented in three key issues: 1) we propose a novel adaptive shadow model, and show the improvements versus previous approaches in scenes with difficult lighting and coloring effects; 2) we give a novel description for the foreground based on spatial statistics of the neighboring pixel values, which enhances the detection of background or shadow-colored object parts; 3) we show how microstructure analysis can be used in the proposed framework as additional feature components improving the results. Finally, a Markov random field model is used to enhance the accuracy of the separation. We validate our method on outdoor and indoor sequences including real surveillance videos and well-known benchmark test sets.",
"title": ""
},
{
"docid": "f9638a9c1963d531df55563c02edf850",
"text": "We predict mortgage default by applying convolutional neural networks to consumer transaction data. For each consumer we have the balances of the checking account, savings account, and the credit card, in addition to the daily number of transactions on the checking account, and amount transferred into the checking account. With no other information about each consumer we are able to achieve a ROC AUC of 0.918 for the networks, and 0.926 for the networks in combination with a random forests classifier.",
"title": ""
},
{
"docid": "f38ad855c66a43529d268b81c9ea4c69",
"text": "In the recent years, countless security concerns related to automotive systems were revealed either by academic research or real life attacks. While current attention was largely focused on passenger cars, due to their ubiquity, the reported bus-related vulnerabilities are applicable to all industry sectors where the same bus technology is deployed, i.e., the CAN bus. The SAE J1939 specification extends and standardizes the use of CAN to commercial vehicles where security plays an even higher role. In contrast to empirical results that attest such vulnerabilities in commercial vehicles by practical experiments, here, we determine that existing shortcomings in the SAE J1939 specifications open road to several new attacks, e.g., impersonation, denial of service (DoS), distributed DoS, etc. Taking the advantage of an industry-standard CANoe based simulation, we demonstrate attacks with potential safety critical effects that are mounted while still conforming to the SAE J1939 standard specification. We discuss countermeasures and security enhancements by including message authentication mechanisms. Finally, we evaluate and discuss the impact of employing these mechanisms on the overall network communication.",
"title": ""
},
{
"docid": "f98d224546769672b12e54d363eba131",
"text": "We present a novel means of describing local image appearances using binary strings. Binary descriptors have drawn increasing interest in recent years due to their speed and low memory footprint. A known shortcoming of these representations is their inferior performance compared to larger, histogram based descriptors such as the SIFT. Our goal is to close this performance gap while maintaining the benefits attributed to binary representations. To this end we propose the Learned Arrangements of Three Patch Codes descriptors, or LATCH. Our key observation is that existing binary descriptors are at an increased risk from noise and local appearance variations. This, as they compare the values of pixel pairs: changes to either of the pixels can easily lead to changes in descriptor values and compromise their performance. In order to provide more robustness, we instead propose a novel means of comparing pixel patches. This ostensibly small change, requires a substantial redesign of the descriptors themselves and how they are produced. Our resulting LATCH representation is rigorously compared to state-of-the-art binary descriptors and shown to provide far better performance for similar computation and space requirements.",
"title": ""
},
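LATCH compares triplets of patches rather than pixel pairs and produces binary descriptors that can be matched with Hamming distance. Assuming an OpenCV build that ships the contrib xfeatures2d module (where a LATCH implementation is available; constructor arguments can vary across versions), a typical detect-describe-match pipeline might look like the sketch below; the image file names are hypothetical.

```python
import cv2

def match_with_latch(img1, img2, max_matches=50):
    """Detect keypoints with ORB, describe them with LATCH binary descriptors,
    and match them with Hamming distance. Requires opencv-contrib (cv2.xfeatures2d)."""
    detector = cv2.ORB_create(nfeatures=1000)
    latch = cv2.xfeatures2d.LATCH_create()      # default 32-byte descriptors (version-dependent)
    kp1 = detector.detect(img1, None)
    kp2 = detector.detect(img2, None)
    kp1, des1 = latch.compute(img1, kp1)
    kp2, des2 = latch.compute(img2, kp2)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches[:max_matches]

if __name__ == "__main__":
    a = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input images
    b = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)
    _, _, matches = match_with_latch(a, b)
    print(len(matches), "matches")
```

If the contrib module is unavailable, ORB's own binary descriptors could stand in for LATCH in this pipeline, though the abstract argues LATCH is the more robust choice.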
{
"docid": "227aa2478076daccec9291be190f7eed",
"text": "In this paper, we propose an Attentional Generative Adversarial Network (AttnGAN) that allows attention-driven, multi-stage refinement for fine-grained text-to-image generation. With a novel attentional generative network, the AttnGAN can synthesize fine-grained details at different sub-regions of the image by paying attentions to the relevant words in the natural language description. In addition, a deep attentional multimodal similarity model is proposed to compute a fine-grained image-text matching loss for training the generator. The proposed AttnGAN significantly outperforms the previous state of the art, boosting the best reported inception score by 14.14% on the CUB dataset and 170.25% on the more challenging COCO dataset. A detailed analysis is also performed by visualizing the attention layers of the AttnGAN. It for the first time shows that the layered attentional GAN is able to automatically select the condition at the word level for generating different parts of the image.",
"title": ""
},
{
"docid": "c5f05fd620e734506874c8ec9e839535",
"text": "Superficial vein thrombosis is a rare pathology that was first described by Mordor, although his description of phlebitis was observed exclusively at the thoracic wall. In 1955, Braun-Falco described penile thrombosis and later superficial penile vein thrombosis was first reported by Helm and Hodge. Mondor's disease of the penis is a rare entity with a reported incidence of 1.39%. It is described most of the time as a self-limited disease however it causes great morbidity to the patient who suffers from it. The pathogenesis of Mondor's disease is unknown. Its diagnosis is based on clinical signs such as a cordlike induration on the dorsal face of the penis, and imaging studies, doppler ultrasound is the instrument of choice. Treatment is primarily symptomatic but some cases may require surgical management however an accurate diagnostic resolves almost every case. We will describe the symptoms, diagnosis, and treatment of superficial thrombophlebitis of the dorsal vein of the penis.",
"title": ""
},
{
"docid": "8e31b1f0ed3055332136d8161149e9ed",
"text": "Data collection has become easy due to the rapid development of both mobile devices and wireless networks. In each second, numerous data are generated by user devices and collected through wireless networks. These data, carrying user and network related information, are invaluable for network management. However, they were seldom employed to improve network performance in existing research work. In this article we propose a bandwidth allocation algorithm to increase the throughput of cellular network users by exploring user and network data collected from user devices. With the aid of these data, users can be categorized into clusters and share bandwidth to improve the resource utilization of the network. Simulation results indicate that the proposed scheme is able to rationally form clusters among mobile users and thus significantly increase the throughput and bandwidth efficiency of the network.",
"title": ""
},
{
"docid": "0072941488ef0e22b06d402d14cbe1be",
"text": "This chapter is about computational modelling of the process of musical composition, based on a cognitive model of human behaviour. The idea is to try to study not only the requirements for a computer system which is capable of musical composition, but also to relate it to human behaviour during the same process, so that it may, perhaps, work in the same way as a human composer, but also so that it may, more likely, help us understand how human composers work. Pearce et al. (2002) give a fuller discussion of the motivations behind this endeavour.",
"title": ""
},
{
"docid": "0be8f76e02170aec6e017d00c1820ef9",
"text": "Display advertisements on the web are sold via ad exchanges that use real time auction. We describe the challenges of designing a suitable auction, and present a simple auction called the Optional Second Price (OSP) auction that is currently used in Doubleclick Ad Exchange.",
"title": ""
},
{
"docid": "105f34c3fa2d4edbe83d184b7cf039aa",
"text": "Software development methodologies are constantly evolving due to changing technologies and new demands from users. Today's dynamic business environment has given rise to emergent organizations that continuously adapt their structures, strategies, and policies to suit the new environment [12]. Such organizations need information systems that constantly evolve to meet their changing requirements---but the traditional, plan-driven software development methodologies lack the flexibility to dynamically adjust the development process.",
"title": ""
},
{
"docid": "c8e679ff3a99c2e596756a69d22c54a1",
"text": "Convolutional Neural Networks (CNNs) have been successfully applied to many computer vision tasks, such as image classification. By performing linear combinations and element-wise nonlinear operations, these networks can be thought of as extracting solely first-order information from an input image. In the past, however, second-order statistics computed from handcrafted features, e.g., covariances, have proven highly effective in diverse recognition tasks. In this paper, we introduce a novel class of CNNs that exploit second-order statistics. To this end, we design a series of new layers that (i) extract a covariance matrix from convolutional activations, (ii) compute a parametric, second-order transformation of a matrix, and (iii) perform a parametric vectorization of a matrix. These operations can be assembled to form a Covariance Descriptor Unit (CDU), which replaces the fully-connected layers of standard CNNs. Our experiments demonstrate the benefits of our new architecture, which outperform the first-order CNNs, while relying on up to 90% fewer parameters.",
"title": ""
},
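Editor's note: the second-order CNN passage above builds its descriptor from the covariance of convolutional activations. The following is a rough sketch of step (i), extracting and vectorizing a covariance matrix from an activation tensor; it is not the paper's parametric CDU layers, and the regularization constant and tensor shape are assumed for the example.

```python
import numpy as np

def covariance_descriptor(activations, eps=1e-5):
    """Second-order (covariance) descriptor of convolutional activations.

    activations: array of shape (C, H, W). Returns the upper triangle of the
    CxC covariance matrix as a vector. This is a plain, non-parametric stand-in
    for the paper's Covariance Descriptor Unit, for illustration only.
    """
    C, H, W = activations.shape
    X = activations.reshape(C, H * W)            # channels x spatial positions
    X = X - X.mean(axis=1, keepdims=True)        # center each channel
    cov = (X @ X.T) / (H * W - 1)                # CxC covariance matrix
    cov += eps * np.eye(C)                       # small regularization for stability
    iu = np.triu_indices(C)
    return cov[iu]                               # vectorized upper triangle

if __name__ == "__main__":
    feats = np.random.randn(8, 14, 14)           # e.g. 8 channels of a 14x14 feature map
    print(covariance_descriptor(feats).shape)    # -> (36,) for C = 8
```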
{
"docid": "057b15a12cf42399fcb2c65fc3027b45",
"text": "We report the results of 21 femoral osteotomies performed in 18 patients for genu recurvatum and flattening of the femoral condyles alter poiomyelltis. Before operation the average angle of recurvatum was 31#{176} and all the limbs required bracing. After a mean follow-up of four years there has been partial recurrence in only one case. Nine patients (10 limbs) needed no orthosis and the others had less discomfort and an improved gait. Complete remodelling ofthe femoral and tibial epiphyses was noted in two of the younger patients.",
"title": ""
},
{
"docid": "5769af5ff99595032653dbda724f5a9d",
"text": "JULY 2005, GSA TODAY ABSTRACT The subduction factory processes raw materials such as oceanic sediments and oceanic crust and manufactures magmas and continental crust as products. Aqueous fluids, which are extracted from oceanic raw materials via dehydration reactions during subduction, dissolve particular elements and overprint such elements onto the mantle wedge to generate chemically distinct arc basalt magmas. The production of calc-alkalic andesites typifies magmatism in subduction zones. One of the principal mechanisms of modern-day, calc-alkalic andesite production is thought to be mixing of two endmember magmas, a mantle-derived basaltic magma and an arc crust-derived felsic magma. This process may also have contributed greatly to continental crust formation, as the bulk continental crust possesses compositions similar to calc-alkalic andesites. If so, then the mafic melting residue after extraction of felsic melts should be removed and delaminated from the initial basaltic arc crust in order to form “andesitic” crust compositions. The waste materials from the factory, such as chemically modified oceanic materials and delaminated mafic lower crust materials, are transported down to the deep mantle and recycled as mantle plumes. The subduction factory has played a central role in the evolution of the solid Earth through creating continental crust and deep mantle geochemical reservoirs.",
"title": ""
},
{
"docid": "6142b6b038aa04da5e2bc107639dbfcc",
"text": "The reproductive strategies of the sea urchin, Paracentrotus lividus, was studied in the Bay of Tunis. Samples were collected monthly, from September 1993 to August 1995, in two sites which differ in their marine vegetation and their exposure to wave action. Histological examination demonstrated a cycle of gametogenesis with six reproductive stages and a main breeding period occurring between April and June. Gonad indices varied between sites and years, the sheltered site presenting a higher investment in reproduction. This difference was essentially induced by the largest sea urchins (above 40 mm in diameter). Repletion indices showed a clear pattern without difference between sites and years. The sea urchin increase in feeding activity was controlled by the need to allocate nutrient to the gonad during the mature stage. But the gonad investment was not correlated with the intensity of food intake. Hydrodynamic conditions might play a key role in diverting energy to the maintenance in an exposed environment at the expense of reproduction.",
"title": ""
},
{
"docid": "b9cdda89b24a8595481933e268319e18",
"text": "Wireless hotspots allow users to use Internet via Wi-Fi interface, and many shops, cafés, parks, and airports provide free wireless hotspot services to attract customers. However, there is no authentication mechanism of Wi-Fi access points (APs) available in such hotspots, which makes them vulnerable to evil twin AP attacks. Such attacks are harmful because they allow to steal sensitive data from users. Today, there is no client-side mechanism that can effectively detect an evil twin AP attack without additional infrastructure supports. In this paper, we propose a mechanism CETAD leveraging public servers to detect such attacks. CETAD only requires installing an app at the client device and does not require to change the hotspot APs. CETAD explores the similarities between the legitimate APs and discrepancies between evil twin APs, and legitimate ones to detect an evil twin AP attack. Through our implementation and evaluation, we show that CETAD can detect evil twin AP attacks in various scenarios effectively.",
"title": ""
},
{
"docid": "f22726c7c8aef5fec8f81c76b5a3cb54",
"text": "In this paper, current silicon carbide (SiC) MOSFETs from two different manufacturers are evaluated including static and dynamic characteristics for different gate resistances, different load currents and at various temperatures. These power semiconductors are operated continuously at a high switching frequency of 1MHz comparing a hard- and a soft-switching converter. A calorimetric power loss measurement method is realized in order to achieve a good measurement accuracy, and the results are compared to the electrical measurements.",
"title": ""
},
{
"docid": "e6aae5ba17628d053371e93c495680d4",
"text": "This study investigated the effect on worry of biased attentional engagement and disengagement. Variants of a novel attention modification paradigm were developed, designed to induce a group difference either in participants' tendency to selectively engage with, or disengage from, threatening meanings. An index of threat bias, reflecting relative speeding to process threat word compared to non-threat word content, confirmed that both procedures were effective in inducing differential attentional bias. Importantly, when the induced group difference in attentional bias followed the procedure designed to influence selective engagement with threat meanings, it also gave rise to a corresponding group difference in worry. This was not the case when it was induced by the procedure designed to influence selective disengagement from threat meanings. These findings suggest that facilitated attentional engagement with threat meanings may causally contribute to variability in worry.",
"title": ""
},
{
"docid": "70fafdedd05a40db5af1eabdf07d431c",
"text": "Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground true data. Convolutional networks are employed to automatically detect the LV chamber in MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the art methods. Excellent agreement with the ground truth was achieved. Validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus those of 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78, obtained by other methods, respectively.",
"title": ""
}
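Editor's note: the LV segmentation passage above reports validation with the Dice metric. As a small worked illustration (not tied to the paper's code), the snippet below computes the Dice overlap of two binary masks; the toy masks are assumptions made for the example.

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask):
    """Dice overlap between two binary segmentation masks.

    Dice = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap.
    """
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    denom = pred.sum() + true.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, true).sum() / denom

if __name__ == "__main__":
    a = np.zeros((10, 10), dtype=bool); a[2:7, 2:7] = True
    b = np.zeros((10, 10), dtype=bool); b[3:8, 3:8] = True
    print(round(dice_coefficient(a, b), 3))  # overlap of two shifted squares -> 0.64
```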
] |
scidocsrr
|
e28db46baa5a74499c8d5bed6970e7ae
|
ObjectNet3D: A Large Scale Database for 3D Object Recognition
|
[
{
"docid": "8a77882cfe06eaa88db529432ed31b0c",
"text": "We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation.",
"title": ""
},
{
"docid": "cc4c58f1bd6e5eb49044353b2ecfb317",
"text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.",
"title": ""
}
] |
[
{
"docid": "b11ee8855429109e1023192ab315be0e",
"text": "In this paper we describe a new general plagiarism detection method, that we used in our winning entry to the 1st International Competition on Plagiarism Detection, the external plagiarism detection task, which assumes the source documents are available. In the first phase of our method, a matrix of kernel values is computed, which gives a similarity value based on n-grams between each source and each suspicious document. In the second phase, each promising pair is further investigated, in order to extract the precise positions and lengths of the subtexts that have been copied and maybe obfuscated – using encoplot, a novel linear time pairwise sequence matching technique. We solved the significant computational challenges arising from having to compare millions of document pairs by using a library developed by our group mainly for use in network security tools. The performance achieved is comparing more than 49 million pairs of documents in 12 hours on a single computer. The results in the challenge were very good, we outperformed all other methods.",
"title": ""
},
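Editor's note: the plagiarism-detection passage above describes a first phase that scores every source/suspicious document pair by an n-gram based similarity. The sketch below is an illustrative stand-in using character n-grams and Jaccard similarity, not the authors' kernel or the encoplot matcher; the n-gram length and the toy documents are assumptions.

```python
from itertools import product

def char_ngrams(text, n=8):
    """Set of character n-grams of a document (n = 8 is an assumed value)."""
    text = " ".join(text.lower().split())
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 0))}

def similarity_matrix(sources, suspicious, n=8):
    """Jaccard similarity between every (source, suspicious) document pair.

    A simple stand-in for the first-phase kernel matrix; pairs with a high
    score would be passed on to the detailed second-phase comparison.
    """
    src = [char_ngrams(d, n) for d in sources]
    sus = [char_ngrams(d, n) for d in suspicious]
    K = [[0.0] * len(sus) for _ in src]
    for (i, s), (j, t) in product(enumerate(src), enumerate(sus)):
        union = len(s | t)
        K[i][j] = len(s & t) / union if union else 0.0
    return K

if __name__ == "__main__":
    srcs = ["the quick brown fox jumps over the lazy dog"]
    susp = ["a quick brown fox jumped over a lazy dog", "completely unrelated text"]
    for row in similarity_matrix(srcs, susp):
        print([round(v, 3) for v in row])
```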
{
"docid": "99f57f28f8c262d4234d07deb9dcf49d",
"text": "Historically, conversational systems have focused on goal-directed interaction and this focus defined much of the work in the field of spoken dialog systems. More recently researchers have started to focus on nongoal-oriented dialog systems often referred to as ”chat” systems. We can refer to these as Chat-oriented Dialog (CHAD)systems. CHAD systems are not task-oriented and focus on what can be described as social conversation where the goal is to interact while maintaining an appropriate level of engagement with a human interlocutor. Work to date has identified a number of techniques that can be used to implement working CHADs but it has also highlighted important limitations. This note describes CHAD characteristics and proposes a research agenda.",
"title": ""
},
{
"docid": "65209c3ce517aa7cdcdb3a7106ffe9f2",
"text": "This paper presents first results of the Networking and Cryptography library (NaCl) on the 8-bit AVR family of microcontrollers. We show that NaCl, which has so far been optimized mainly for different desktop and server platforms, is feasible on resource-constrained devices while being very fast and memory efficient. Our implementation shows that encryption using Salsa20 requires 268 cycles/byte, authentication using Poly1305 needs 195 cycles/byte, a Curve25519 scalar multiplication needs 22 791 579 cycles, signing of data using Ed25519 needs 23 216 241 cycles, and verification can be done within 32 634 713 cycles. All implemented primitives provide at least 128-bit security, run in constant time, do not use secret-data-dependent branch conditions, and are open to the public domain (no usage restrictions).",
"title": ""
},
{
"docid": "e651af2be422e13548af7d3152d27539",
"text": "A sample of 116 children (M=6 years 7 months) in Grade 1 was randomly assigned to experimental (n=60) and control (n=56) groups, with equal numbers of boys and girls in each group. The experimental group received a program aimed at improving representation and transformation of visuospatial information, whereas the control group received a substitute program. All children were administered mental rotation tests before and after an intervention program and a Global-Local Processing Strategies test before the intervention. The results revealed that initial gender differences in spatial ability disappeared following treatment in the experimental but not in the control group. Gender differences were moderated by strategies used to process visuospatial information. Intervention and processing strategies were essential in reducing gender differences in spatial abilities.",
"title": ""
},
{
"docid": "79eb0a39106679e80bd1d1edcd100d4d",
"text": "Multi-agent predictive modeling is an essential step for understanding physical, social and team-play systems. Recently, Interaction Networks (INs) were proposed for the task of modeling multi-agent physical systems. One of the drawbacks of INs is scaling with the number of interactions in the system (typically quadratic or higher order in the number of agents). In this paper we introduce VAIN, a novel attentional architecture for multi-agent predictive modeling that scales linearly with the number of agents. We show that VAIN is effective for multiagent predictive modeling. Our method is evaluated on tasks from challenging multi-agent prediction domains: chess and soccer, and outperforms competing multi-agent approaches.",
"title": ""
},
{
"docid": "93f1e6d0e14ce5aa07b32ca6bdf3dee4",
"text": "Bucket elimination is an algorithmic framework that generalizes dynamic programming to accommodate many problem-solving and reasoning tasks. Algorithms such as directional-resolution for propositional satis ability, adaptive-consistency for constraint satisfaction, Fourier and Gaussian elimination for solving linear equalities and inequalities, and dynamic programming for combinatorial optimization, can all be accommodated within the bucket elimination framework. Many probabilistic inference tasks can likewise be expressed as bucket-elimination algorithms. These include: belief updating, nding the most probable explanation, and expected utility maximization. These algorithms share the same performance guarantees; all are time and space exponential in the inducedwidth of the problem's interaction graph. While elimination strategies have extensive demands on memory, a contrasting class of algorithms called \\conditioning search\" require only linear space. Algorithms in this class split a problem into subproblems by instantiating a subset of variables, called a conditioning set, or a cutset. Typical examples of conditioning search algorithms are: backtracking (in constraint satisfaction), and branch and bound (for combinatorial optimization). The paper presents the bucket-elimination framework as a unifying theme across probabilistic and deterministic reasoning tasks and show how conditioning search can be augmented to systematically trade space for time.",
"title": ""
},
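Editor's note: the bucket-elimination passage above treats inference as repeatedly combining the factors that mention a variable and summing that variable out. The code below is a minimal variable-elimination sketch over discrete factors for belief updating, intended only to illustrate the idea; the tiny two-variable network is an assumption for the example.

```python
import numpy as np

def multiply(f1, f2):
    """Multiply two factors given as (variables, table), aligning axes by broadcasting."""
    vars1, t1 = f1
    vars2, t2 = f2
    all_vars = list(dict.fromkeys(vars1 + vars2))
    def expand(vars_, table):
        shape = [table.shape[vars_.index(v)] if v in vars_ else 1 for v in all_vars]
        order = [vars_.index(v) for v in all_vars if v in vars_]
        return np.transpose(table, order).reshape(shape)
    return all_vars, expand(vars1, t1) * expand(vars2, t2)

def sum_out(factor, var):
    vars_, table = factor
    axis = vars_.index(var)
    return [v for v in vars_ if v != var], table.sum(axis=axis)

def eliminate(factors, elimination_order):
    """Sum out each variable in turn (its 'bucket' is the set of factors mentioning it)."""
    factors = list(factors)
    for var in elimination_order:
        bucket = [f for f in factors if var in f[0]]
        factors = [f for f in factors if var not in f[0]]
        if bucket:
            joint = bucket[0]
            for f in bucket[1:]:
                joint = multiply(joint, f)
            factors.append(sum_out(joint, var))
    result = factors[0]
    for f in factors[1:]:
        result = multiply(result, f)
    return result

if __name__ == "__main__":
    # Tiny chain A -> B: P(A) and P(B|A); eliminating A yields P(B).
    pa = (["A"], np.array([0.6, 0.4]))
    pb_given_a = (["A", "B"], np.array([[0.9, 0.1], [0.2, 0.8]]))
    vars_, table = eliminate([pa, pb_given_a], ["A"])
    print(vars_, table)   # ['B'] [0.62 0.38]
```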
{
"docid": "650327e72cc22b7d28502fdc526c20b7",
"text": "A novel high voltage edge termination utilizing Surface-Charge-Control (SCC) technology has been proposed for a wide range temperature with humidity. Under electrically biased and high-humidity conditions, charges generated by the high electric field and external perturbations accumulate on the silicon surface, which eventually degrades the blocking capability. The proposed edge termination structure has an optimized semi-insulated layer on the silicon surface. Due to this optimized layer, the accumulated charges on the silicon surface are swept away, which contributes to the improvement of the reliability. HTRB and low-temperature with high-humidity tests were carried out on the fabricated 6.5kV IGBT devices utilizing the SCC technology. The results validate that the proposed SCC technology has improved the long-term stability and the robustness against high humidity conditions.",
"title": ""
},
{
"docid": "5b560e3a836af044bc2fcd2d8fefcd37",
"text": "The influence of brand extensions on the parent brand is important to seen has explored the effects of actual experience with them despite the fact that perceptions based on experience are imunderstand because they may change its core beliefs and thus either enhance or jeopardize its positioning. However, previous research has focussed on portant determinants of product knowledge (Smith and Swinyard, 1983; Isen, 1984; Oliver, 1993). beliefs about brand extensions not beliefs about the parent brand. We exAs an example, Reebok recently introduced hiking shoes. plore the influence of direct experience with a brand extension on consumers’ Before the introduction, consumers had beliefs about Reebok knowledge about parent brands that differ in familiarity. We find differences (“lightweight” and “comfortable”) and hiking shoes (“in earth in beliefs about unfamiliar parent brands between a positive and negative colors” and “heavyweight”). After the brand extension, would experience but no differences in beliefs about familiar parent brands. Simiconsumers link their beliefs about hiking shoes with Reebok? larly, after experience with a brand extension, consumers changed their beWould they change their initial beliefs about Reebok? This liefs about and attitude toward unfamiliar parent brands more so than with is important, as these beliefs underlie Reeboks’ competitive familiar parent brands. We discuss theoretical and managerial implicaadvantage in athletic shoes. Would consumers’ responses tions, limitations, and future research directions. J BUSN RES 2000. 49.47– change if they had a positive or negative experience with the 55. 2000 Elsevier Science Inc. All rights reserved hiking shoes? Finally, if a less familiar athletic shoe brand such as New Balance introduced hiking shoes, would its beliefs change in a similar manner as Reebok’s? These are critical questions for brand managers to understand in order to fully An important reason why companies continue to use assess the implications of a hiking shoe launch. brand extensions aggressively is to create excitement In this research, we explore how brand extensions influence for a mature brand (Aaker, 1991). However, that may knowledge about parent brands. Unlike previous research, we not be an appropriate strategic objective if they influence examine how this influence may change with parent brands consumers’ knowledge about the parent brand, which is the differing in familiarity, and positive and negative experiences brand being extended. This influence is possible because conwith brand extensions. We study familiarity because knowlsumers can have different beliefs about a brand extension than edge change should differ between more and less familiar about the parent brand because they are in different categories parent brands. We develop a theoretical framework and derive that represent new product and usage contexts. It is important hypotheses, which we test in an experiment. After the presento understand because knowledge about a parent brand undertation of the results, we discuss theoretical and managerial lies its core equity (Sujan and Bettman, 1989; Aaker, 1991), implications, and limitations and future research directions. and thus any changes brought about by brand extensions could alter its competitive position. Yet, prior research has focussed on knowledge about the Theoretical Framework brand extension itself (e.g., Aaker and Keller, 1990; Boush Research has focussed on how consumers form beliefs about and Loken, 1991). 
Surprisingly little effort has examined the and attitudes toward brand extensions not on their influence influence of brand extensions on a parent brand (for excepon parent brand knowledge. What little work exists on the tions, see Loken and Roedder John [1993] and Roedder John, topic is inconclusive. A couple of studies document a negative Loken, and Joiner [1998]). Moreover, no research we have influence (Loken and Roedder John, 1993; Roedder John, Loken, and Joiner, 1998), while others indicate no influence Address correspondence to D. A. Sheinin, University of Maryland at College (Romeo, 1991; Keller and Aaker, 1992). Many questions are Park, Robert H. Smith School of Business, College Park, MD 20742. Phone: 301-405-2173; Fax: 301-405-0146; E-mail: dsheinin@rhsmith.umd.edu unresolved, including how direct experience with brand ex-",
"title": ""
},
{
"docid": "b6ae6ee48c9ddc2c18a194f53917a79e",
"text": "Modern information systems produce tremendous amounts of event data. The area of process mining deals with extracting knowledge from this data. Real-life processes can be effectively discovered, analyzed and optimized with the help of mature process mining techniques. There is a variety of process mining case studies and experience reports from such business areas as healthcare, public, transportation and education. Although nowadays, these techniques are mostly used for discovering business processes.\n The goal of this industrial paper is to show that process mining can be applied to software too. Here we present and analyze our experiences on applying process mining in different productive software systems used in the touristic domain. Process models and user interface workflows underlie the functional specifications of the systems we experiment with. When the systems are utilized, user interaction is recorded in event logs. After applying process mining methods to these logs, process and user interface flow models are automatically derived. These resulting models provide insight regarding the real usage of the software, motivate the changes in the functional specifications, enable usability improvements and software redesign.\n Thus, with the help of our examples we demonstrate that process mining facilitates new forms of software analysis. The user interaction with almost every software system can be mined in order to improve the software and to monitor and measure its real usage.",
"title": ""
},
{
"docid": "c19863ef5fa4979f288763837e887a7c",
"text": "Decentralized cryptocurrencies have pushed deployments of distributed consensus to more stringent environments than ever before. Most existing protocols rely on proofs-of-work which require expensive computational puzzles to enforce, imprecisely speaking, “one vote per unit of computation”. The enormous amount of energy wasted by these protocols has been a topic of central debate, and well-known cryptocurrencies have announced it a top priority to alternative paradigms. Among the proposed alternative solutions, proofs-of-stake protocols have been of particular interest, where roughly speaking, the idea is to enforce “one vote per unit of stake”. Although the community have rushed to propose numerous candidates for proofs-of-stake, no existing protocol has offered formal proofs of security, which we believe to be a critical, indispensible ingredient of a distributed consensus protocol, particularly one that is to underly a high-value cryptocurrency system. In this work, we seek to address the following basic questions: • What kind of functionalities and robustness requirements should a consensus candidate offer to be suitable in a proof-of-stake application? • Can we design a provably secure protocol that satisfies these requirements? To the best of our knowledge, we are the first to formally articulate a set of requirements for consensus candidates for proofs-of-stake. We argue that any consensus protocol satisfying these properties can be used for proofs-of-stake, as long as money does not switch hands too quickly. Moreover, we provide the first consensus candidate that provably satisfies the desired robustness properties.",
"title": ""
},
{
"docid": "e773f21956d78052e5f4caadb6fcf656",
"text": "Event extraction is a difficult information extraction task. Li et al. (2014) explore the benefits of modeling event extraction and two related tasks, entity mention and relation extraction, jointly. This joint system achieves state-of-the-art performance in all tasks. However, as a system operating only at the sentence level, it misses valuable information from other parts of the document. In this paper, we present an incremental approach to make the global context of the entire document available to the intra-sentential, state-of-the-art event extractor. We show that our method robustly increases performance on two datasets, namely ACE 2005 and TAC 2015.",
"title": ""
},
{
"docid": "fe043223b37f99419d9dc2c4d787cfbb",
"text": "We describe a Markov chain Monte Carlo based particle filter that effectively deals with interacting targets, i.e., targets that are influenced by the proximity and/or behavior of other targets. Such interactions cause problems for traditional approaches to the data association problem. In response, we developed a joint tracker that includes a more sophisticated motion model to maintain the identity of targets throughout an interaction, drastically reducing tracker failures. The paper presents two main contributions: (1) we show how a Markov random field (MRF) motion prior, built on the fly at each time step, can substantially improve tracking when targets interact, and (2) we show how this can be done efficiently using Markov chain Monte Carlo (MCMC) sampling. We prove that incorporating an MRF to model interactions is equivalent to adding an additional interaction factor to the importance weights in a joint particle filter. Since a joint particle filter suffers from exponential complexity in the number of tracked targets, we replace the traditional importance sampling step in the particle filter with an MCMC sampling step. The resulting filter deals efficiently and effectively with complicated interactions when targets approach each other. We present both qualitative and quantitative results to substantiate the claims made in the paper, including a large scale experiment on a video-sequence of over 10,000 frames in length.",
"title": ""
},
{
"docid": "c05fc37d9f33ec94f4c160b3317dda00",
"text": "We consider the coordination control for multiagent systems in a very general framework where the position and velocity interactions among agents are modeled by independent graphs. Different algorithms are proposed and analyzed for different settings, including the case without leaders and the case with a virtual leader under fixed position and velocity interaction topologies, as well as the case with a group velocity reference signal under switching velocity interaction. It is finally shown that the proposed algorithms are feasible in achieving the desired coordination behavior provided the interaction topologies satisfy the weakest possible connectivity conditions. Such conditions relate only to the structure of the interactions among agents while irrelevant to their magnitudes and thus are easy to verify. Rigorous convergence analysis is preformed based on a combined use of tools from algebraic graph theory, matrix analysis as well as the Lyapunov stability theory.",
"title": ""
},
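Editor's note: the coordination-control passage above concerns agreement of agent states over an interaction graph. The sketch below shows only the basic discrete-time consensus (Laplacian averaging) update on a fixed undirected topology; the leader, switching-topology, and separate position/velocity graphs of the paper are not modeled, and the step size and graph are assumptions for the example.

```python
import numpy as np

def consensus_step(x, adjacency, step=0.1):
    """One discrete-time consensus update: x_i += step * sum_j a_ij (x_j - x_i)."""
    A = np.asarray(adjacency, dtype=float)
    degree = A.sum(axis=1)
    laplacian = np.diag(degree) - A
    return x - step * laplacian @ x

if __name__ == "__main__":
    # Four agents on a connected path graph 0-1-2-3 with scalar states.
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    x = np.array([0.0, 2.0, 5.0, 9.0])
    for _ in range(200):
        x = consensus_step(x, A, step=0.2)
    print(np.round(x, 3))  # all states converge to the initial average (4.0)
```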
{
"docid": "d3b03d65b61b98db03445bda899b44ba",
"text": "Positioning is basis for providing location information to mobile users, however, with the growth of wireless and mobile communications technologies. Mobile phones are equipped with several radio frequency technologies for driving the positioning information like GSM, Wi-Fi or Bluetooth etc. In this way, the objective of this thesis was to implement an indoor positioning system relying on Bluetooth Received Signal Strength (RSS) technology and it integrates into the Global Positioning Module (GPM) to provide precise information inside the building. In this project, we propose indoor positioning system based on RSS fingerprint and footprint architecture that smart phone users can get their position through the assistance collections of Bluetooth signals, confining RSSs by directions, and filtering burst noises that can overcome the server signal fluctuation problem inside the building. Meanwhile, this scheme can raise more accuracy in finding the position inside the building.",
"title": ""
},
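Editor's note: the indoor-positioning passage above relies on matching observed Bluetooth RSS readings against stored fingerprints. Below is a hedged k-nearest-neighbour fingerprint matcher as an illustration, not the thesis' system; the AP identifiers, survey coordinates, the -100 dBm default for missing readings, and k are assumptions.

```python
import math

def knn_position(fingerprints, observed_rss, k=3):
    """Estimate a position by averaging the k closest stored RSS fingerprints.

    fingerprints: list of (position, {ap_id: rss_dbm}); observed_rss: {ap_id: rss_dbm}.
    Missing APs are treated as a weak -100 dBm reading.
    """
    def distance(stored):
        aps = set(stored) | set(observed_rss)
        return math.sqrt(sum((stored.get(ap, -100.0) - observed_rss.get(ap, -100.0)) ** 2
                             for ap in aps))
    nearest = sorted(fingerprints, key=lambda fp: distance(fp[1]))[:k]
    xs = [p[0] for p, _ in nearest]
    ys = [p[1] for p, _ in nearest]
    return (sum(xs) / len(nearest), sum(ys) / len(nearest))

if __name__ == "__main__":
    # Hypothetical survey points inside a building (coordinates in metres).
    db = [((0, 0), {"b1": -40, "b2": -70}),
          ((5, 0), {"b1": -55, "b2": -60}),
          ((10, 0), {"b1": -75, "b2": -45}),
          ((5, 5), {"b1": -60, "b2": -58})]
    print(knn_position(db, {"b1": -57, "b2": -59}, k=2))  # -> (5.0, 2.5)
```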
{
"docid": "6222f6b36a094540d1033b77db1efac0",
"text": "Sequence-to-sequence deep learning has recently emerged as a new paradigm in supervised learning for spoken language understanding. However, most of the previous studies explored this framework for building single domain models for each task, such as slot filling or domain classification, comparing deep learning based approaches with conventional ones like conditional random fields. This paper proposes a holistic multi-domain, multi-task (i.e. slot filling, domain and intent detection) modeling approach to estimate complete semantic frames for all user utterances addressed to a conversational system, demonstrating the distinctive power of deep learning methods, namely bi-directional recurrent neural network (RNN) with long-short term memory (LSTM) cells (RNN-LSTM) to handle such complexity. The contributions of the presented work are three-fold: (i) we propose an RNN-LSTM architecture for joint modeling of slot filling, intent determination, and domain classification; (ii) we build a joint multi-domain model enabling multi-task deep learning where the data from each domain reinforces each other; (iii) we investigate alternative architectures for modeling lexical context in spoken language understanding. In addition to the simplicity of the single model framework, experimental results show the power of such an approach on Microsoft Cortana real user data over alternative methods based on single domain/task deep learning.",
"title": ""
},
{
"docid": "df908e6e88713b14efddfa1c8a91bc8b",
"text": "Enhancing underwater instruments with networking capabilities is opening up exciting new opportunities for remote ocean exploration. Underwater communication is essential in enabling networking, and in this work, we focus specifically on those applications that require a communication range of 5 to 10 meters. We will show that optical wireless technology is an excellent candidate when cost-efficient design is paramount and data rate requirements are modest. Our optical modem uses inexpensive components, and compensates by employing sophisticated detection algorithms. Our design is validated experimentally, establishing it as a viable cost-effective communication technology for low range underwater applications.",
"title": ""
},
{
"docid": "668252a8b0bb419198c03aa96d113655",
"text": "This study aims at revealing how commercial hotness of urban commercial districts (UCDs) is shaped by social contexts of surrounding areas so as to render predictive business planning. We define social contexts for a given region as the number of visitors, the region functions, the population and buying power of local residents, the average price of services, and the rating scores of customers, which are computed from heterogeneous data including taxi GPS trajectories, point of interests, geographical data, and user-generated comments. Then, we apply sparse representation to discover the impactor factor of each variable of the social contexts in terms of predicting commercial activeness of UCDs under a linear predictive model. The experiments show that a linear correlation between social contexts and commercial activeness exists for Beijing and Shanghai based on an average prediction accuracy of 77.69% but the impact factors of social contexts vary from city to city, where the key factors are rich life services, diversity of restaurants, good shopping experience, large number of local residents with relatively high purchasing power, and convenient transportation. This study reveals the underlying mechanism of urban business ecosystems, and promise social context-aware business planning over heterogeneous urban big data.",
"title": ""
},
{
"docid": "9091aff6aff51612460de7b4e63fe03a",
"text": "Stock is a popular topic in Twitter. The number of tweets concerning a stock varies over days, and sometimes exhibits a significant spike. In this paper, we investigate Twitter volume spikes related to S&P 500 stocks, and whether they are useful for stock trading. Through correlation analysis, we provide insight on when Twitter volume spikes occur and possible causes of these spikes. We further explore whether these spikes are surprises to market participants by comparing the implied volatility of a stock before and after a Twitter volume spike. Moreover, we develop a Bayesian classifier that uses Twitter volume spikes to assist stock trading, and show that it can provide substantial profit. We further develop an enhanced strategy that combines the Bayesian classifier and a stock bottom picking method, and demonstrate that it can achieve significant gain in a short amount of time. Simulation over a half year's stock market data indicates that it achieves on average 8.6% gain in 27 trading days and 15.0% gain in 55 trading days. Statistical tests show that the gain is statistically significant, and the enhanced strategy significantly outperforms the strategy that only uses the Bayesian classifier as well as a bottom picking method that uses trading volume spikes.",
"title": ""
},
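Editor's note: the passage above uses a Bayesian classifier over Twitter volume spikes to assist trading decisions. The snippet below is a generic Bernoulli naive Bayes with Laplace smoothing as an illustrative stand-in, not the authors' model; the binary features and toy labels are invented for the example.

```python
from collections import defaultdict
import math

class BernoulliNaiveBayes:
    """Tiny Bernoulli naive Bayes with Laplace smoothing (illustrative only)."""

    def fit(self, X, y):
        self.labels = sorted(set(y))
        self.n_features = len(X[0])
        count = defaultdict(int)
        feat_count = defaultdict(lambda: [0] * self.n_features)
        for xi, yi in zip(X, y):
            count[yi] += 1
            for j, v in enumerate(xi):
                feat_count[yi][j] += v
        self.prior = {c: count[c] / len(y) for c in self.labels}
        self.p = {c: [(feat_count[c][j] + 1) / (count[c] + 2)   # Laplace smoothing
                      for j in range(self.n_features)] for c in self.labels}
        return self

    def predict(self, x):
        def log_post(c):
            s = math.log(self.prior[c])
            for j, v in enumerate(x):
                pj = self.p[c][j]
                s += math.log(pj if v else 1.0 - pj)
            return s
        return max(self.labels, key=log_post)

if __name__ == "__main__":
    # Hypothetical binary features per stock-day:
    # [tweet-volume spike, positive return the day before, above-average trading volume]
    X = [[1, 1, 1], [1, 0, 1], [0, 0, 0], [0, 1, 0], [1, 1, 0], [0, 0, 1]]
    y = ["up", "up", "down", "down", "up", "down"]
    model = BernoulliNaiveBayes().fit(X, y)
    print(model.predict([1, 1, 1]), model.predict([0, 0, 0]))
```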
{
"docid": "f383dd5dd7210105406c2da80cf72f89",
"text": "We present a new, \"greedy\", channel-router that is quick, simple, and highly effective. It always succeeds, usually using no more than one track more than required by channel density. (It may be forced in rare cases to make a few connections \"off the end\" of the channel, in order to succeed.) It assumes that all pins and wiring lie on a common grid, and that vertical wires are on one layer, horizontal on another. The greedy router wires up the channel in a left-to-right, column-by-column manner, wiring each column completely before starting the next. Within each column the router tries to maximize the utility of the wiring produced, using simple, \"greedy\" heuristics. It may place a net on more than one track for a few columns, and \"collapse\" the net to a single track later on, using a vertical jog. It may also use a jog to move a net to a track closer to its pin in some future column. The router may occasionally add a new track to the channel, to avoid \"getting stuck\".",
"title": ""
}
] |
scidocsrr
|
6b767927b1181a2c049a95e311f56dc9
|
Robot grasping in clutter: Using a hierarchy of supervisors for learning from demonstrations
|
[
{
"docid": "dbde47a4142bffc2bcbda988781e5229",
"text": "Grasping individual objects from an unordered pile in a box has been investigated in static scenarios so far. In this paper, we demonstrate bin picking with an anthropomorphic mobile robot. To this end, we extend global navigation techniques by precise local alignment with a transport box. Objects are detected in range images using a shape primitive-based approach. Our approach learns object models from single scans and employs active perception to cope with severe occlusions. Grasps and arm motions are planned in an efficient local multiresolution height map. All components are integrated and evaluated in a bin picking and part delivery task.",
"title": ""
},
{
"docid": "9dd245f75092adc8d8bb2b151275789b",
"text": "Current model free learning-based robot grasping approaches exploit human-labeled datasets for training the models. However, there are two problems with such a methodology: (a) since each object can be grasped in multiple ways, manually labeling grasp locations is not a trivial task; (b) human labeling is biased by semantics. While there have been attempts to train robots using trial-and-error experiments, the amount of data used in such experiments remains substantially low and hence makes the learner prone to over-fitting. In this paper, we take the leap of increasing the available training data to 40 times more than prior work, leading to a dataset size of 50K data points collected over 700 hours of robot grasping attempts. This allows us to train a Convolutional Neural Network (CNN) for the task of predicting grasp locations without severe overfitting. In our formulation, we recast the regression problem to an 18-way binary classification over image patches. We also present a multi-stage learning approach where a CNN trained in one stage is used to collect hard negatives in subsequent stages. Our experiments clearly show the benefit of using large-scale datasets (and multi-stage training) for the task of grasping. We also compare to several baselines and show state-of-the-art performance on generalization to unseen objects for grasping.",
"title": ""
}
] |
[
{
"docid": "3ada908b539c3ca23adda1b0791de211",
"text": "Two competing explanations for deviant employee responses to supervisor abuse are tested. A self-gain view is compared with a self-regulation impairment view. The self-gain view suggests that distributive justice (DJ) will weaken the abusive supervision-employee deviance relationship, as perceptions of fair rewards offset costs of abuse. Conversely, the self-regulation impairment view suggests that DJ will strengthen the relationship, as experiencing abuse drains self-resources needed to maintain appropriate behavior, and this effect intensifies when employees receive inconsistent information about their organizational membership (fair outcomes). Three field studies using different samples, measures, and designs support the self-regulation impairment view. Two studies found that the Abusive Supervision × DJ interaction was mediated by self-regulation impairment variables (ego depletion and intrusive thoughts). Implications for theory and research are discussed.",
"title": ""
},
{
"docid": "bf5f53216163a3899cc91af060375250",
"text": "Received Feb 13, 2018 Revised Apr 18, 2018 Accepted May 21, 2018 One of the biomedical image problems is the appearance of the bubbles in the slide that could occur when air passes through the slide during the preparation process. These bubbles may complicate the process of analysing the histopathological images. Aims: The objective of this study is to remove the bubble noise from the histopathology images, and then predict the tissues that underlie it using the fuzzy controller in cases of remote pathological diagnosis. Methods: Fuzzy logic uses the linguistic definition to recognize the relationship between the input and the activity, rather than using difficult numerical equation. Mainly there are five parts, starting with accepting the image, passing through removing the bubbles, and ending with predict the tissues. These were implemented by defining membership functions between colours range using MATLAB. Results: 50 histopathological images were tested on four types of membership functions (MF); the results show that (nine-triangular) MF get 75.4% correctly predicted pixels versus 69.1, 72.31 and 72% for (five-triangular), (five-Gaussian) and (nine-Gaussian) respectively. Conclusions: In line with the era of digitally driven epathology, this process is essentially recommended to ensure quality interpretation and analyses of the processed slides; thus overcoming relevant limitations. Keyword:",
"title": ""
},
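Editor's note: the fuzzy-controller passage above defines triangular membership functions over colour ranges. The sketch below shows a triangular membership function and a toy three-class intensity classification; the class names and breakpoints are assumptions, not the paper's tuned MATLAB membership functions.

```python
def triangular_mf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def classify_intensity(x):
    """Fuzzy memberships of a grey level in three hypothetical colour classes."""
    classes = {
        "bubble":  (170, 220, 256),   # bright, washed-out regions
        "stroma":  (80, 130, 190),
        "nucleus": (-1, 40, 100),     # dark stain
    }
    return {name: round(triangular_mf(x, *abc), 3) for name, abc in classes.items()}

if __name__ == "__main__":
    for level in (20, 120, 230):
        print(level, classify_intensity(level))
```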
{
"docid": "0173524e63580e3a4d85cb41719e0bd6",
"text": "This paper describes some of the possibilities of artificial neural networks that open up after solving the problem of catastrophic forgetting. A simple model and reinforcement learning applications of existing methods are also proposed",
"title": ""
},
{
"docid": "49c19e5417aa6a01c59f666ba7cc3522",
"text": "The effect of various drugs on the extracellular concentration of dopamine in two terminal dopaminergic areas, the nucleus accumbens septi (a limbic area) and the dorsal caudate nucleus (a subcortical motor area), was studied in freely moving rats by using brain dialysis. Drugs abused by humans (e.g., opiates, ethanol, nicotine, amphetamine, and cocaine) increased extracellular dopamine concentrations in both areas, but especially in the accumbens, and elicited hypermotility at low doses. On the other hand, drugs with aversive properties (e.g., agonists of kappa opioid receptors, U-50,488, tifluadom, and bremazocine) reduced dopamine release in the accumbens and in the caudate and elicited hypomotility. Haloperidol, a neuroleptic drug, increased extracellular dopamine concentrations, but this effect was not preferential for the accumbens and was associated with hypomotility and sedation. Drugs not abused by humans [e.g., imipramine (an antidepressant), atropine (an antimuscarinic drug), and diphenhydramine (an antihistamine)] failed to modify synaptic dopamine concentrations. These results provide biochemical evidence for the hypothesis that stimulation of dopamine transmission in the limbic system might be a fundamental property of drugs that are abused.",
"title": ""
},
{
"docid": "247a6f200670e43980ac7762e52c86eb",
"text": "We propose a novel mechanism for Turing pattern formation that provides a possible explanation for the regular spacing of synaptic puncta along the ventral cord of C. elegans during development. The model consists of two interacting chemical species, where one is passively diffusing and the other is actively trafficked by molecular motors. We identify the former as the kinase CaMKII and the latter as the glutamate receptor GLR-1. We focus on a one-dimensional model in which the motor-driven chemical switches between forward and backward moving states with identical speeds. We use linear stability analysis to derive conditions on the associated nonlinear interaction functions for which a Turing instability can occur. We find that the dimensionless quantity γ = αd/v has to be sufficiently small for patterns to emerge, where α is the switching rate between motor states, v is the motor speed, and d is the passive diffusion coefficient. One consequence is that patterns emerge outside the parameter regime of fast switching where the model effectively reduces to a twocomponent reaction-diffusion system. Numerical simulations of the model using experimentally based parameter values generates patterns with a wavelength consistent with the synaptic spacing found in C. elegans. Finally, in the case of biased transport, we show that the system supports spatially periodic patterns in the presence of boundary forcing, analogous to flow distributed structures in reaction-diffusion-advection systems. Such forcing could represent the insertion of new motor-bound GLR-1 from the soma of ventral cord neurons.",
"title": ""
},
{
"docid": "3508a963a4f99d02d9c41dab6801d8fd",
"text": "The role of classroom discussions in comprehension and learning has been the focus of investigations since the early 1960s. Despite this long history, no syntheses have quantitatively reviewed the vast body of literature on classroom discussions for their effects on students’ comprehension and learning. This comprehensive meta-analysis of empirical studies was conducted to examine evidence of the effects of classroom discussion on measures of teacher and student talk and on individual student comprehension and critical-thinking and reasoning outcomes. Results revealed that several discussion approaches produced strong increases in the amount of student talk and concomitant reductions in teacher talk, as well as substantial improvements in text comprehension. Few approaches to discussion were effective at increasing students’ literal or inferential comprehension and critical thinking and reasoning. Effects were moderated by study design, the nature of the outcome measure, and student academic ability. While the range of ages of participants in the reviewed studies was large, a majority of studies were conducted with students in 4th through 6th grades. Implications for research and practice are discussed.",
"title": ""
},
{
"docid": "692b02fa6e3b1d04e24db570b7030a3f",
"text": "Once a business performs a complex activity well, the parent organization often wants to replicate that success. But doing that is surprisingly difficult, and businesses nearly always fail when they try to reproduce a best practice. The reason? People approaching best-practice replication are overly optimistic and overconfident. They try to perfect an operation that's running nearly flawlessly, or they try to piece together different practices to create the perfect hybrid. Getting it right the second time (and all the times after that) involves adjusting for overconfidence in your own abilities and imposing strict discipline on the process and the organization. The authors studied numerous business settings to find out how organizational routines were successfully reproduced, and they identified five steps for successful replication. First, make sure you've got something that can be copied and that's worth copying. Some processes don't lend themselves to duplication; others can be copied but maybe shouldn't be. Second, work from a single template. It provides proof success, performance measurements, a tactical approach, and a reference for when problems arise. Third, copy the example exactly, and fourth, make changes only after you achieve acceptable results. The people who developed the template have probably already encountered many of the problems you want to \"fix,\" so it's best to create a working system before you introduce changes. Fifth, don't throw away the template. If your copy doesn't work, you can use the template to identify and solve problems. Best-practice replication, while less glamorous than pure innovation, contributes enormously to the bottom line of most companies. The article's examples--Banc One, Rank Xerox, Intel, Starbucks, and Re/Max Israel--prove that exact copying is a non-trivial, challenging accomplishment.",
"title": ""
},
{
"docid": "ea8b083238554866d36ac41b9c52d517",
"text": "A fully automatic document retrieval system operating on the IBM 7094 is described. The system is characterized by the fact that several hundred different methods are available to analyze documents and search requests. This feature is used in the retrieval process by leaving the exact sequence of operations initially unspecified, and adapting the search strategy to the needs of individual users. The system is used not only to simulate an actual operating environment, but also to test the effectiveness of the various available processing methods. Results obtained so far seem to indicate that some combination of analysis procedures can in general be relied upon to retrieve the wanted information. A typical search request is used as an example in the present report to illustrate systems operations and evaluation procedures .",
"title": ""
},
{
"docid": "6e4d846272030b160b30d56a60eb2cad",
"text": "MapReduce and Spark are two very popular open source cluster computing frameworks for large scale data analytics. These frameworks hide the complexity of task parallelism and fault-tolerance, by exposing a simple programming API to users. In this paper, we evaluate the major architectural components in MapReduce and Spark frameworks including: shuffle, execution model, and caching, by using a set of important analytic workloads. To conduct a detailed analysis, we developed two profiling tools: (1) We correlate the task execution plan with the resource utilization for both MapReduce and Spark, and visually present this correlation; (2) We provide a break-down of the task execution time for in-depth analysis. Through detailed experiments, we quantify the performance differences between MapReduce and Spark. Furthermore, we attribute these performance differences to different components which are architected differently in the two frameworks. We further expose the source of these performance differences by using a set of micro-benchmark experiments. Overall, our experiments show that Spark is about 2.5x, 5x, and 5x faster than MapReduce, for Word Count, k-means, and PageRank, respectively. The main causes of these speedups are the efficiency of the hash-based aggregation component for combine, as well as reduced CPU and disk overheads due to RDD caching in Spark. An exception to this is the Sort workload, for which MapReduce is 2x faster than Spark. We show that MapReduce’s execution model is more efficient for shuffling data than Spark, thus making Sort run faster on MapReduce.",
"title": ""
},
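Editor's note: Word Count is the simplest workload compared in the MapReduce-versus-Spark passage above. The PySpark sketch below shows the RDD pipeline whose reduceByKey step exercises the hash-based aggregation discussed there; the input path is hypothetical and a local Spark installation is assumed.

```python
# Minimal PySpark word count (illustrative; run where pyspark is installed).
from operator import add
from pyspark import SparkContext

if __name__ == "__main__":
    sc = SparkContext("local[*]", "WordCountSketch")
    counts = (sc.textFile("hdfs:///data/wiki_sample.txt")   # assumed input location
                .flatMap(lambda line: line.split())          # map phase: emit words
                .map(lambda word: (word, 1))
                .reduceByKey(add))                           # hash-based aggregation
    for word, n in counts.take(10):
        print(word, n)
    sc.stop()
```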
{
"docid": "3bbb7d9e7ec90a4d9ab28dad1727fe70",
"text": "Space-frequency (SF) codes that exploit both spatial and frequency diversity can be designed using orthogonal frequency division multiplexing (OFDM). However, OFDM is sensitive to frequency offset (FO), which generates intercarrier interference (ICI) among subcarriers. We investigate the pair-wise error probability (PEP) performance of SF codes over quasistatic, frequency selective Rayleigh fading channels with FO. We prove that the conventional SF code design criteria remain valid. The negligible performance loss for small FOs (less than 1%), however, increases with FO and with signal to noise ratio (SNR). While diversity can be used to mitigate ICI, as FO increases, the PEP does not rapidly decay with SNR. Therefore, we propose a new class of SF codes called ICI self-cancellation SF (ISC-SF) codes to combat ICI effectively even with high FO (10%). ISC-SF codes are constructed from existing full diversity space-time codes. Importantly, our code design provide a satisfactory tradeoff among error correction ability, ICI reduction and spectral efficiency. Furthermore, we demonstrate that ISC-SF codes can also mitigate the ICI caused by phase noise and time varying channels. Simulation results affirm the theoretical analysis.",
"title": ""
},
{
"docid": "ff69af9c6ce771b0db8caeaa6da5478f",
"text": "The use of Internet as a mean of shopping goods and services is growing over the past decade. Businesses in the e-commerce sector realize that the key factors for success are not limited to the existence of a website and low prices but must also include high standards of e-quality. Research indicates that the attainment of customer satisfaction brings along plenty of benefits. Furthermore, trust is of paramount importance, in ecommerce, due to the fact that that its establishment can diminish the perceived risk of using an internet service. The purpose of this study is to investigate the impact of customer perceived quality of an internet shop on customers’ satisfaction and trust. In addition, the possible effect of customer satisfaction on trust is also examined. An explanatory research approach was adopted in order to identify causal relationships between e-quality, customer satisfaction and trust. This was accomplished through field research by utilizing an interviewer-administered questionnaire. The questionnaire was largely based on existing constructs in relative literature. E-quality was divided into 5 dimensions, namely ease of use, e-scape, customization, responsiveness, and assurance. After being successfully pilot-tested by the managers of 3 Greek companies developing ecommerce software, 4 managers of Greek internet shops and 5 internet shoppers, the questionnaire was distributed to internet shoppers in central Greece. This process had as a result a total of 171 correctly answered questionnaires. Reliability tests and statistical analyses were performed to both confirm scale reliability and test research hypotheses. The findings indicate that all the examined e-quality dimensions expose a significant positive influence on customer satisfaction, with ease of use, e-scape and assurance being the most important ones. One the other hand, rather surprisingly, the only e-quality dimension that proved to have a significant positive impact on trust was customization. Finally, satisfaction was revealed to have a significant positive relation with trust.",
"title": ""
},
{
"docid": "d7987594ba85f601df0d4f5e9352e0d7",
"text": "Minutiae extraction is one of the most important steps for an Automatic Identification and Authentication Systems. Minutiae are the local fingerprint patterns mostly in the form of terminations and bifurcations. True minutiae are needed for further process. Those true minutiae are extracted only from a good quality and better enhanced image. To achieve this, we propose a frequency domain enhancement algorithm based on the Log-Gabor Filtering Technique on the Fast Fourier‟s Frequency domain. The performance of the algorithm is measured in terms of Peak Signal to Noise Ratio and Mean Square Error and Standard Deviations.",
"title": ""
},
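As a rough illustration of the frequency-domain approach described above, the sketch below applies a radial log-Gabor transfer function to an image spectrum with NumPy and reports PSNR; the centre frequency and bandwidth values are illustrative assumptions, not the paper's parameters, and the random array stands in for a fingerprint image.

```python
# Sketch: frequency-domain enhancement with a radial log-Gabor filter.
# f0 (centre frequency) and sigma (bandwidth ratio) are illustrative values.
import numpy as np

def log_gabor_enhance(img, f0=0.1, sigma=0.55):
    rows, cols = img.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0                                   # avoid log(0) at DC
    transfer = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma) ** 2))
    transfer[0, 0] = 0.0                                 # zero DC response
    return np.real(np.fft.ifft2(np.fft.fft2(img) * transfer))

def psnr(reference, test):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

img = np.random.rand(128, 128) * 255                     # stand-in for a fingerprint
enhanced = log_gabor_enhance(img)
print("PSNR of enhanced vs. input:", round(psnr(img, enhanced), 2))
```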
{
"docid": "45c1907ce72b0100a3afa8f58e2e39b6",
"text": "Advanced Encryption Standard (AES), a Federal Information Processing Standard (FIPS), is an approved cryptographic algorithm that can be used to protect electronic data. The AES can be programmed in software or built with pure hardware. However Field Programmable Gate Arrays (FPGAs) offer a quicker and more customizable solution. This paper presents the AES algorithm with regard to FPGA and the Very High Speed Integrated Circuit Hardware Description language (VHDL). ModelSim SE PLUS 5.7g software is used for simulation and optimization of the synthesizable VHDL code. Synthesizing and implementation (i.e. Translate, Map and Place and Route) of the code is carried out on Xilinx - Project Navigator, ISE 8.2i suite. All the transformations of both Encryption and Decryption are simulated using an iterative design approach in order to minimize the hardware consumption. Xilinx XC3S400 device of Spartan Family is used for hardware evaluation. This paper proposes a method to integrate the AES encrypter and the AES decrypter. This method can make it a very low-complexity architecture, especially in saving the hardware resource in implementing the AES (Inv) Sub Bytes module and (Inv) Mix columns module etc. Most designed modules can be used for both AES encryption and decryption. Besides, the architecture can still deliver a high data rate in both encryption/decryption operations. The proposed architecture is suited for hardware-critical applications, such as smart card, PDA, and mobile phone, etc.",
"title": ""
},
{
"docid": "9edb698dc4c43202dc1420246942ee75",
"text": "SAT-solvers have turned into essential tools in many areas of applied logic like, for example, hardware verification or satisfiability checking modulo theories. However, although recent implementations are able to solve problems with hundreds of thousands of variables and millions of clauses, much smaller instances remain unsolved. What makes a particular instance hard or easy is at most partially understood – and is often attributed to the instance’s internal structure. By converting SAT instances into graphs and applying established graph layout techniques, this internal structure can be visualized and thus serve as the basis of subsequent analysis. Moreover, by providing tools that animate the structure during the run of a SAT algorithm, dynamic changes of the problem instance become observable. Thus, we expect both to gain new insights into the hardness of the SAT problem and to help in teaching SAT algorithms.",
"title": ""
},
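To make the graph conversion mentioned above concrete, the sketch below builds a variable-interaction graph from a toy CNF formula and runs a force-directed layout with networkx; the clause list is an invented example, and real instances would be read from DIMACS files instead.

```python
# Sketch: convert a CNF instance into a variable-interaction graph and lay it
# out with a force-directed algorithm. Two variables are connected whenever
# they appear together in a clause; the formula below is a toy example.
import itertools
import networkx as nx

clauses = [(1, -2, 3), (-1, 2), (2, -3, 4), (-4, 1)]     # DIMACS-style literals

g = nx.Graph()
for clause in clauses:
    variables = sorted({abs(lit) for lit in clause})
    g.add_nodes_from(variables)
    g.add_edges_from(itertools.combinations(variables, 2))

positions = nx.spring_layout(g, seed=0)                  # force-directed layout
print("nodes:", g.number_of_nodes(), "edges:", g.number_of_edges())
for var, (x, y) in positions.items():
    print(f"variable {var}: ({x:.2f}, {y:.2f})")
```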
{
"docid": "4e23bf1c89373abaf5dc096f76c893f3",
"text": "Clock and data recovery (CDR) circuit plays a vital role for wired serial link communication in multi mode based system on chip (SOC). In wire linked communication systems, when data flows without any accompanying clock over a single wire, the receiver of the system is required to recover this data synchronously without losing the information. Therefore there exists a need for CDR circuits in the receiver of the system for recovering the clock or timing information from these data. The existing Octa-rate CDR circuit is not compatible to real time data, such a data is unpredictable, non periodic and has different arrival times and phase widths. Thus the proposed PRN based Octa-rate Clock and Data Recovery circuit is made compatible to real time data by introducing a Random Sequence Generator. The proposed PRN based Octa-rate Clock and Data Recovery circuit consists of PRN Sequence Generator, 16-Phase Generator, Early Late Phase Detector and Delay Line Controller. The FSM based Delay Line Controller controls the delay length and introduces the required delay in the input data. The PRN based Octa-rate CDR circuit has been realized using Xilinx ISE 13.2 and implemented on Vertex-5 FPGA target device for real time verification. The delay between the input and the generation of output is measured and analyzed using Logic Analyzer AGILENT 1962 A.",
"title": ""
},
{
"docid": "213f816da43e7ce43e979418e35471e6",
"text": "A novel saliency detection algorithm for video sequences based on the random walk with restart (RWR) is proposed in this paper. We adopt RWR to detect spatially and temporally salient regions. More specifically, we first find a temporal saliency distribution using the features of motion distinctiveness, temporal consistency, and abrupt change. Among them, the motion distinctiveness is derived by comparing the motion profiles of image patches. Then, we employ the temporal saliency distribution as a restarting distribution of the random walker. In addition, we design the transition probability matrix for the walker using the spatial features of intensity, color, and compactness. Finally, we estimate the spatiotemporal saliency distribution by finding the steady-state distribution of the walker. The proposed algorithm detects foreground salient objects faithfully, while suppressing cluttered backgrounds effectively, by incorporating the spatial transition matrix and the temporal restarting distribution systematically. Experimental results on various video sequences demonstrate that the proposed algorithm outperforms conventional saliency detection algorithms qualitatively and quantitatively.",
"title": ""
},
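The steady-state computation at the heart of the RWR formulation above can be sketched in a few lines of NumPy; the transition matrix and restart vector below are random stand-ins for the spatial transition matrix and the temporal restarting distribution built from video features.

```python
# Sketch: random walk with restart, r = (1 - c) * P^T r + c * q, solved by
# power iteration. P is a row-stochastic transition matrix and q the restart
# (temporal saliency) distribution; both are random stand-ins here.
import numpy as np

def rwr_steady_state(P, q, c=0.15, tol=1e-9, max_iter=1000):
    r = np.full(len(q), 1.0 / len(q))
    for _ in range(max_iter):
        r_next = (1 - c) * P.T @ r + c * q
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next
    return r

rng = np.random.default_rng(0)
n = 6
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)
q = rng.random(n)
q /= q.sum()

saliency = rwr_steady_state(P, q)
print("steady-state distribution:", np.round(saliency, 3))
```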
{
"docid": "ca62a58ac39d0c2daaa573dcb91cd2e0",
"text": "Blast-related head injuries are one of the most prevalent injuries among military personnel deployed in service of Operation Iraqi Freedom. Although several studies have evaluated symptoms after blast injury in military personnel, few studies compared them to nonblast injuries or measured symptoms within the acute stage after traumatic brain injury (TBI). Knowledge of acute symptoms will help deployed clinicians make important decisions regarding recommendations for treatment and return to duty. Furthermore, differences more apparent during the acute stage might suggest important predictors of the long-term trajectory of recovery. This study evaluated concussive, psychological, and cognitive symptoms in military personnel and civilian contractors (N = 82) diagnosed with mild TBI (mTBI) at a combat support hospital in Iraq. Participants completed a clinical interview, the Automated Neuropsychological Assessment Metric (ANAM), PTSD Checklist-Military Version (PCL-M), Behavioral Health Measure (BHM), and Insomnia Severity Index (ISI) within 72 hr of injury. Results suggest that there are few differences in concussive symptoms, psychological symptoms, and neurocognitive performance between blast and nonblast mTBIs, although clinically significant impairment in cognitive reaction time for both blast and nonblast groups is observed. Reductions in ANAM accuracy were related to duration of loss of consciousness, not injury mechanism.",
"title": ""
},
{
"docid": "f7dbb8adec55a4c52563194ecb6f3e8a",
"text": "The emotion of gratitude is thought to have social effects, but empirical studies of such effects have focused largely on the repaying of kind gestures. The current research focused on the relational antecedents of gratitude and its implications for relationship formation. The authors examined the role of naturally occurring gratitude in college sororities during a week of gift-giving from older members to new members. New members recorded reactions to benefits received during the week. At the end of the week and 1 month later, the new and old members rated their interactions and their relationships. Perceptions of benefactor responsiveness predicted gratitude for benefits, and gratitude during the week predicted future relationship outcomes. Gratitude may function to promote relationship formation and maintenance.",
"title": ""
},
{
"docid": "9665c72fd804d630791fdd0bc381d116",
"text": "Social Sharing of Emotion (SSE) occurs when one person shares an emotional experience with another and is considered potentially beneficial. Though social sharing has been shown prevalent in interpersonal communication, research on its occurrence and communication structure in online social networks is lacking. Based on a content analysis of blog posts (n = 540) in a blog social network site (Live Journal), we assess the occurrence of social sharing in blog posts, characterize different types of online SSE, and present a theoretical model of online SSE. A large proportion of initiation expressions were found to conform to full SSE, with negative emotion posts outnumbering bivalent and positive posts. Full emotional SSE posts were found to prevail, compared to partial feelings or situation posts. Furthermore, affective feedback predominated to cognitive and provided emotional support, empathy and admiration. The study found evidence that the process of social sharing occurs in Live Journal, replicating some features of face to face SSE. Instead of a superficial view of online social sharing, our results support a prosocial and beneficial character to online SSE. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "96fe0792e4cf6c88ff7388618eda63ad",
"text": "BACKGROUND\nRecent clinical studies have shown that the dorsal motor nucleus of the vagus nerve is one of the brain areas that are the earliest affected by α-synuclein and Lewy body pathology in Parkinson's disease. This observation raises the question: how the vagus nerve dysfunction affects the dopamine system in the brain?\n\n\nMETHODS\nThe rats underwent surgical implantation of the microchip (MC) in the abdominal region of the vagus. In this study, we examined the effect of chronic, unilateral electrical stimulation of the left nerve vagus, of two different types: low-frequency (MCL) and physiological stimulation (MCPh) on the dopamine and serotonin metabolism determined by high-pressure chromatography with electrochemical detection in rat brain structures.\n\n\nRESULTS\nMCL electrical stimulation of the left nerve vagus in contrast to MCPh stimulation, produced a significant inhibition of dopamine system in rat brain structures. Ex vivo biochemical experiments clearly suggest that MCL opposite to MCPh impaired the function of dopamine system similarly to vagotomy.\n\n\nCONCLUSION\nWe suggest a close relationship between the peripheral vagus nerve impairment and the inhibition of dopamine system in the brain structures. This is the first report of such relationship which may suggest that mental changes (pro-depressive) could occur in the first stage of Parkinson's disease far ahead of motor impairment.",
"title": ""
}
] |
scidocsrr
|
fba623856c5d0dc74c481952da79a420
|
On-Road Motion Planning for Autonomous Vehicles
|
[
{
"docid": "e0f5f73eb496b77cddc5820fb6306f4b",
"text": "Safe handling of dynamic highway and inner city scenarios with autonomous vehicles involves the problem of generating traffic-adapted trajectories. In order to account for the practical requirements of the holistic autonomous system, we propose a semi-reactive trajectory generation method, which can be tightly integrated into the behavioral layer. The method realizes long-term objectives such as velocity keeping, merging, following, stopping, in combination with a reactive collision avoidance by means of optimal-control strategies within the Frenét-Frame [12] of the street. The capabilities of this approach are demonstrated in the simulation of a typical high-speed highway scenario.",
"title": ""
},
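A core ingredient of the Frenét-frame method above is connecting boundary conditions with quintic polynomials; the sketch below solves for such a polynomial in the lateral coordinate, with arbitrary example boundary values rather than any scenario from the paper.

```python
# Sketch: quintic polynomial d(t) for the lateral Frenét coordinate, fixed by
# position, velocity and acceleration at t = 0 and t = T. Boundary values and
# the horizon T below are arbitrary examples.
import numpy as np

def quintic_coefficients(d0, v0, a0, dT, vT, aT, T):
    A = np.array([
        [1, 0, 0,    0,       0,        0],
        [0, 1, 0,    0,       0,        0],
        [0, 0, 2,    0,       0,        0],
        [1, T, T**2, T**3,    T**4,     T**5],
        [0, 1, 2*T,  3*T**2,  4*T**3,   5*T**4],
        [0, 0, 2,    6*T,     12*T**2,  20*T**3],
    ], dtype=float)
    b = np.array([d0, v0, a0, dT, vT, aT], dtype=float)
    return np.linalg.solve(A, b)                 # coefficients a0 .. a5

coeffs = quintic_coefficients(d0=0.5, v0=0.0, a0=0.0, dT=0.0, vT=0.0, aT=0.0, T=3.0)
t = np.linspace(0.0, 3.0, 7)
lateral_offset = np.polyval(coeffs[::-1], t)     # polyval expects a5 .. a0
print(np.round(lateral_offset, 3))
```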
{
"docid": "74afc31d233f76e28b58f019dfc28df4",
"text": "We present a motion planner for autonomous highway driving that adapts the state lattice framework pioneered for planetary rover navigation to the structured environment of public roadways. The main contribution of this paper is a search space representation that allows the search algorithm to systematically and efficiently explore both spatial and temporal dimensions in real time. This allows the low-level trajectory planner to assume greater responsibility in planning to follow a leading vehicle, perform lane changes, and merge between other vehicles. We show that our algorithm can readily be accelerated on a GPU, and demonstrate it on an autonomous passenger vehicle.",
"title": ""
},
{
"docid": "c83fe84cacf01b155705a10dd5885743",
"text": "For decades, humans have dreamed of making cars that could drive themselves, so that travel would be less taxing, and the roads safer for everyone. Toward this goal, we have made strides in motion planning algorithms for autonomous cars, using a powerful new computing tool, the parallel graphics processing unit (GPU). We propose a novel five-dimensional search space formulation that includes both spatial and temporal dimensions, and respects the kinematic and dynamic constraints on a typical automobile. With this formulation, the search space grows linearly with the length of the path, compared to the exponential growth of other methods. We also propose a parallel search algorithm, using the GPU to tackle the curse of dimensionality directly and increase the number of plans that can be evaluated by an order of magnitude compared to a CPU implementation. With this larger capacity, we can evaluate a dense sampling of plans combining lateral swerves and accelerations that represent a range of effective responses to more on-road driving scenarios than have previously been addressed in the literature. We contribute a cost function that evaluates many aspects of each candidate plan, ranking them all, and allowing the behavior of the vehicle to be fine-tuned by changing the ranking. We show that the cost function can be changed on-line by a behavioral planning layer to express preferred vehicle behavior without the brittleness induced by top-down planning architectures. Our method is particularly effective at generating robust merging behaviors, which have traditionally required a delicate and failure-prone coordination between multiple planning layers. Finally, we demonstrate our proposed planner in a variety of on-road driving scenarios in both simulation and on an autonomous SUV, and make a detailed comparison with prior work.",
"title": ""
}
] |
[
{
"docid": "a9309fc2fdd67b70178cd88e948cf2ca",
"text": "............................................................................................................................... I Co-Authorship Statement.................................................................................................... II Acknowledgments............................................................................................................. III Table of",
"title": ""
},
{
"docid": "364b97b33cb615cc2cbee0dc8b677380",
"text": "Strategic problem-solving is a complicated task that requires processing of large amount of information using theoretical knowledge and practical experience. Effective problem-solving requires fast and accurate comprehension and analysis of the issues surrounding the problem. There are several tools, techniques and frameworks that support strategic analysis and decision-making. This paper develops a framework to clarify the relationship between visualization and modelling, and offers a classification scheme for visualization and modelling tools and techniques with a perspective on strategic problem-solving.",
"title": ""
},
{
"docid": "6fb006066fa1a25ae348037aa1ee7be3",
"text": "Reducing redundancy in data representation leads to decreased data storage requirements and lower costs for data communication.",
"title": ""
},
{
"docid": "087752f02d461293f3e02950cc599e35",
"text": "Automatically judging sentences for their grammaticality is potentially useful for several purposes — evaluating language technology systems, assessing language competence of second or foreign language learners, and so on. Previous work has examined parser ‘byproducts’, in particular parse probabilities, to distinguish grammatical sentences from ungrammatical ones. The aim of the present paper is to examine whether the primary output of a parser, which we characterise via CFG production rules embodied in a parse, contains useful information for sentence grammaticality classification; and also to examine which feature selection metrics are most useful in this task. Our results show that using gold standard production rules alone can improve over using parse probabilities alone. Combining parser-produced production rules with parse probabilities further produces an improvement of 1.6% on average in the overall classification accuracy.",
"title": ""
},
{
"docid": "c989a73295ca15a6c558d919c9c4958b",
"text": "The segmentation of touching characters is still a challenging task, posing a bottleneck for offline Chinese handwriting recognition. In this paper, we propose an effective over-segmentation method with learning-based filtering using geometric features for single-touching Chinese handwriting. First, we detect candidate cuts by skeleton and contour analysis to guarantee a high recall rate of character separation. A filter is designed by supervised learning and used to prune implausible cuts to improve the precision. Since the segmentation rules and features are independent of the string length, the proposed method can deal with touching strings with more than two characters. The proposed method is evaluated on both the character segmentation task and the text line recognition task. The results on two large databases demonstrate the superiority of the proposed method in dealing with single-touching Chinese handwriting.",
"title": ""
},
{
"docid": "cb6223183d3602d2e67aafc0b835a405",
"text": "Electrocardiogram is widely used to diagnose the congestive heart failure (CHF). It is the primary noninvasive diagnostic tool that can guide in the management and follow-up of patients with CHF. Heart rate variability (HRV) signals which are nonlinear in nature possess the hidden signatures of various cardiac diseases. Therefore, this paper proposes a nonlinear methodology, empirical mode decomposition (EMD), for an automated identification and classification of normal and CHF using HRV signals. In this work, HRV signals are subjected to EMD to obtain intrinsic mode functions (IMFs). From these IMFs, thirteen nonlinear features such as approximate entropy $$ (E_{\\text{ap}}^{x} ) $$ ( E ap x ) , sample entropy $$ (E_{\\text{s}}^{x} ) $$ ( E s x ) , Tsallis entropy $$ (E_{\\text{ts}}^{x} ) $$ ( E ts x ) , fuzzy entropy $$ (E_{\\text{f}}^{x} ) $$ ( E f x ) , Kolmogorov Sinai entropy $$ (E_{\\text{ks}}^{x} ) $$ ( E ks x ) , modified multiscale entropy $$ (E_{{{\\text{mms}}_{y} }}^{x} ) $$ ( E mms y x ) , permutation entropy $$ (E_{\\text{p}}^{x} ) $$ ( E p x ) , Renyi entropy $$ (E_{\\text{r}}^{x} ) $$ ( E r x ) , Shannon entropy $$ (E_{\\text{sh}}^{x} ) $$ ( E sh x ) , wavelet entropy $$ (E_{\\text{w}}^{x} ) $$ ( E w x ) , signal activity $$ (S_{\\text{a}}^{x} ) $$ ( S a x ) , Hjorth mobility $$ (H_{\\text{m}}^{x} ) $$ ( H m x ) , and Hjorth complexity $$ (H_{\\text{c}}^{x} ) $$ ( H c x ) are extracted. Then, different ranking methods are used to rank these extracted features, and later, probabilistic neural network and support vector machine are used for differentiating the highly ranked nonlinear features into normal and CHF classes. We have obtained an accuracy, sensitivity, and specificity of 97.64, 97.01, and 98.24 %, respectively, in identifying the CHF. The proposed automated technique is able to identify the person having CHF alarming (alerting) the clinicians to respond quickly with proper treatment action. Thus, this method may act as a valuable tool for increasing the survival rate of many cardiac patients.",
"title": ""
},
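For a sense of how such nonlinear features are computed, the sketch below implements sample entropy, one of the thirteen features listed above, directly in NumPy; the embedding dimension m = 2 and tolerance r = 0.2 times the standard deviation are common defaults rather than the paper's settings, and the RR-interval series is synthetic.

```python
# Sketch: sample entropy of a time series. m = 2 and r = 0.2 * std are common
# default choices; the RR-interval series below is synthetic.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return (dist <= r).sum() - len(templates)        # exclude self-matches

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(1)
rr = 0.8 + 0.05 * rng.standard_normal(300)               # synthetic RR intervals (s)
print("SampEn:", round(sample_entropy(rr), 3))
```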
{
"docid": "4a9da1575b954990f98e6807deae469e",
"text": "Recently, there has been considerable debate concerning key sizes for publ i c key based cry p t o graphic methods. Included in the debate have been considerations about equivalent key sizes for diffe rent methods and considerations about the minimum re q u i red key size for diffe rent methods. In this paper we propose a method of a n a lyzing key sizes based upon the value of the data being protected and the cost of b reaking ke y s . I . I n t ro d u c t i o n A . W H Y I S K E Y S I Z E I M P O R T A N T ? In order to keep transactions based upon public key cryptography secure, one must ensure that the underlying keys are sufficiently large as to render the best possible attack infeasible. However, this really just begs the question as one is now left with the task of defining ‘infeasible’. Does this mean infeasible given access to (say) most of the Internet to do the computations? Does it mean infeasible to a large adversary with a large (but unspecified) budget to buy the hardware for an attack? Does it mean infeasible with what hardware might be obtained in practice by utilizing the Internet? Is it reasonable to assume that if utilizing the entire Internet in a key breaking effort makes a key vulnerable that such an attack might actually be conducted? If a public effort involving a substantial fraction of the Internet breaks a single key, does this mean that similar sized keys are unsafe? Does one need to be concerned about such public efforts or does one only need to be concerned about possible private, sur reptitious efforts? After all, if a public attack is known on a particular key, it is easy to change that key. We shall attempt to address these issues within this paper. number 13 Apr i l 2000 B u l l e t i n News and A dv i c e f rom RSA La bo rat o r i e s I . I n t ro d u c t i o n I I . M et ho ds o f At tac k I I I . H i s tor i ca l R es u l t s and t he R S A Ch a l le nge I V. Se cu r i t y E st i m ate s",
"title": ""
},
{
"docid": "4d2dad29f0f02d448c78b7beda529022",
"text": "This paper proposes a novel diagnosis method for detection and discrimination of two typical mechanical failures in induction motors by stator current analysis: load torque oscillations and dynamic rotor eccentricity. A theoretical analysis shows that each fault modulates the stator current in a different way: torque oscillations lead to stator current phase modulation, whereas rotor eccentricities produce stator current amplitude modulation. The use of traditional current spectrum analysis involves identical frequency signatures with the two fault types. A time-frequency analysis of the stator current with the Wigner distribution leads to different fault signatures that can be used for a more accurate diagnosis. The theoretical considerations and the proposed diagnosis techniques are validated on experimental signals.",
"title": ""
},
{
"docid": "21a68f76ed6d18431f446398674e4b4e",
"text": "With rapid progress and significant successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks (DNNs) have been recently found vulnerable to well-designed input samples called adversarial examples. Adversarial perturbations are imperceptible to human but can easily fool DNNs in the testing/deploying stage. The vulnerability to adversarial examples becomes one of the major risks for applying DNNs in safety-critical environments. Therefore, attacks and defenses on adversarial examples draw great attention. In this paper, we review recent findings on adversarial examples for DNNs, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods. Under the taxonomy, applications for adversarial examples are investigated. We further elaborate on countermeasures for adversarial examples. In addition, three major challenges in adversarial examples and the potential solutions are discussed.",
"title": ""
},
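One of the simplest generation methods covered by such surveys is the fast gradient sign method, which perturbs the input by eps times the sign of the loss gradient; the sketch below shows only its mechanics on a tiny untrained PyTorch model, so the label, epsilon and architecture are all illustrative assumptions.

```python
# Sketch of the fast gradient sign method (FGSM): perturb the input in the
# direction of the sign of the loss gradient. The model is tiny and untrained,
# so this demonstrates the mechanics rather than a realistic attack.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)   # stand-in image in [0, 1]
y = torch.tensor([3])                              # assumed true label
eps = 0.1

loss = loss_fn(model(x), y)
loss.backward()
x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

print("clean prediction:      ", model(x).argmax(1).item())
print("adversarial prediction:", model(x_adv).argmax(1).item())
```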
{
"docid": "7579b5cb9f18e3dc296bcddc7831abc5",
"text": "Unlike conventional anomaly detection research that focuses on point anomalies, our goal is to detect anomalous collections of individual data points. In particular, we perform group anomaly detection (GAD) with an emphasis on irregular group distributions (e.g. irregular mixtures of image pixels). GAD is an important task in detecting unusual and anomalous phenomena in real-world applications such as high energy particle physics, social media and medical imaging. In this paper, we take a generative approach by proposing deep generative models: Adversarial autoencoder (AAE) and variational autoencoder (VAE) for group anomaly detection. Both AAE and VAE detect group anomalies using point-wise input data where group memberships are known a priori. We conduct extensive experiments to evaluate our models on real world datasets. The empirical results demonstrate that our approach is effective and robust in detecting group anomalies.",
"title": ""
},
{
"docid": "5ec8018ccc26d1772fa5498c31dc2c71",
"text": "High-content screening (HCS), which combines automated fluorescence microscopy with quantitative image analysis, allows the acquisition of unbiased multiparametric data at the single cell level. This approach has been used to address diverse biological questions and identify a plethora of quantitative phenotypes of varying complexity in numerous different model systems. Here, we describe some recent applications of HCS, ranging from the identification of genes required for specific biological processes to the characterization of genetic interactions. We review the steps involved in the design of useful biological assays and automated image analysis, and describe major challenges associated with each. Additionally, we highlight emerging technologies and future challenges, and discuss how the field of HCS might be enhanced in the future.",
"title": ""
},
{
"docid": "2181c4d52e721aab267057b8f271a9ee",
"text": "Recently, the widespread availability of consumer grade drones is responsible for the new concerns of air traffic control. This paper investigates the feasibility of drone detection by passive bistatic radar (PBR) system. Wuhan University has successfully developed a digitally multichannel PBR system, which is dedicated for the drone detection. Two typical trials with a cooperative drone have been designed to examine the system's capability of small drone detection. The agreement between experimental results and ground truth indicate the effectiveness of sensing and processing method, which verifies the practicability and prospects of drone detection by this digitally multichannel PBR system.",
"title": ""
},
{
"docid": "637ca0ccdc858c9e84ffea1bd3531024",
"text": "We propose a method to facilitate search through the storyline of TV series episodes. To this end, we use human written, crowdsourced descriptions—plot synopses—of the story conveyed in the video. We obtain such synopses from websites such as Wikipedia and propose various methods to align each sentence of the plot to shots in the video. Thus, the semantic story-based video retrieval problem is transformed into a much simpler text-based search. Finally, we return the set of shots aligned to the sentences as the video snippet corresponding to the query. The alignment is performed by first computing a similarity score between every shot and sentence through cues such as character identities and keyword matches between plot synopses and subtitles. We then formulate the alignment as an optimization problem and solve it efficiently using dynamic programming. We evaluate our methods on the fifth season of a TV series Buffy the Vampire Slayer and show encouraging results for both the alignment and the retrieval of story events.",
"title": ""
},
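The optimization step described above can be approximated with a small dynamic program that picks a monotone sentence-to-shot assignment maximizing total similarity; the sketch below uses a random similarity matrix as a stand-in for the identity and keyword cues, and follows the general idea rather than the paper's exact objective.

```python
# Sketch: monotone alignment of synopsis sentences to shots by dynamic
# programming over a sentence-shot similarity matrix (random stand-in here).
import numpy as np

def monotone_align(sim):
    n_sent, n_shot = sim.shape
    dp = np.full((n_sent, n_shot), -np.inf)
    back = np.zeros((n_sent, n_shot), dtype=int)
    dp[0, :] = sim[0, :]
    for i in range(1, n_sent):
        for j in range(i, n_shot):                 # later sentences map to later shots
            k = int(np.argmax(dp[i - 1, :j]))
            dp[i, j] = dp[i - 1, k] + sim[i, j]
            back[i, j] = k
    j = int(np.argmax(dp[-1, :]))                  # best final shot
    assignment = [j]
    for i in range(n_sent - 1, 0, -1):
        j = back[i, j]
        assignment.append(j)
    return assignment[::-1]

rng = np.random.default_rng(2)
sim = rng.random((4, 10))                          # 4 sentences, 10 shots
print("sentence -> shot:", monotone_align(sim))
```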
{
"docid": "c1f907a8dc5308e07df76c69fd0deb45",
"text": "Emotion regulation has been conceptualized as a process by which individuals modify their emotional experiences, expressions, and physiology and the situations eliciting such emotions in order to produce appropriate responses to the ever-changing demands posed by the environment. Thus, context plays a central role in emotion regulation. This is particularly relevant to the work on emotion regulation in psychopathology, because psychological disorders are characterized by rigid responses to the environment. However, this recognition of the importance of context has appeared primarily in the theoretical realm, with the empirical work lagging behind. In this review, the author proposes an approach to systematically evaluate the contextual factors shaping emotion regulation. Such an approach consists of specifying the components that characterize emotion regulation and then systematically evaluating deviations within each of these components and their underlying dimensions. Initial guidelines for how to combine such dimensions and components in order to capture substantial and meaningful contextual influences are presented. This approach is offered to inspire theoretical and empirical work that it is hoped will result in the development of a more nuanced and sophisticated understanding of the relationship between context and emotion regulation.",
"title": ""
},
{
"docid": "c71a8c9163d6bf294a5224db1ff5c6f5",
"text": "BACKGROUND\nOsteosarcoma is the second most common primary tumor of the skeletal system and the most common primary bone tumor. Usually occurring at the metaphysis of long bones, osteosarcomas are highly aggressive lesions that comprise osteoid-producing spindle cells. Craniofacial osteosarcomas comprise <8% and are believed to be less aggressive and lower grade. Primary osteosarcomas of the skull and skull base comprise <2% of all skull tumors. Osteosarcomas originating from the clivus are rare. We present a case of a primar, high-grade clival osteosarcoma.\n\n\nCASE DESCRIPTION\nA 29-year-old man presented to our institution with a progressively worsening right frontal headache for 3 weeks. There were no sensory or cranial nerve deficits. Computed tomography revealed a destructive mass involving the clivus with extension into the left sphenoid sinus. Magnetic resonance imaging revealed a homogenously enhancing lesion measuring 2.7 × 2.5 × 3.2 cm. The patient underwent endonasal transphenoidal surgery for gross total resection. The histopathologic analysis revealed proliferation of malignant-appearing spindled and epithelioid cells with associated osteoclast-like giant cells and a small area of osteoid production. The analysis was consistent with high-grade osteosarcoma. The patient did well and was discharged on postoperative day 2. He was referred for adjuvant radiation therapy and chemotherapy. Two-year follow-up showed postoperative changes and clival expansion caused by packing material.\n\n\nCONCLUSIONS\nOsteosarcoma is a highly malignant neoplasm. These lesions are usually found in the extremities; however, they may rarely present in the craniofacial region. Clival osteosarcomas are relatively infrequent. We present a case of a primary clival osteosarcoma with high-grade pathology.",
"title": ""
},
{
"docid": "0e8cde83260d6ca4d8b3099628c25fc2",
"text": "1Department of Molecular Virology, Immunology and Medical Genetics, The Ohio State University Medical Center, Columbus, Ohio, USA. 2Department of Physics, Pohang University of Science and Technology, Pohang, Korea. 3School of Interdisciplinary Bioscience and Bioengineering, Pohang, Korea. 4Physics Department, The Ohio State University, Columbus, Ohio, USA. 5These authors contributed equally to this work. e-mail: fishel.7@osu.edu",
"title": ""
},
{
"docid": "45cdd077571a6743e9b50f6f94631fef",
"text": "This work presents a design technique for a class of combining networks used in base stations for mobile communications; the considered network is a triplexer composed by three high selectivity filters presenting arbitrary transmission zeros (both complex and pure imaginary), which are connected at the input port through a suitable 4-port junction. A technique for the design of this combining network is here proposed, which extends a previous approach used for duplexer networks [1]. The design consists in a procedure for the evaluation of the characteristic polynomials of the three composing filters (defined in a normalized frequency domain), once the input junction topology and the transmission zeros of the filters have been assigned. The network synthesized from the characteristic polynomials presents the same reflection zeros of the three filters synthesized separated from the triplexer with an equiripple response in their respective passband; in this way the overall synthesized triplexer presents a quasi-equiripple response in the three passbands. The proposed design approach has been verified through a fabricated prototype, whose measured performance is in good agreement with the expected response obtained from the synthesized network.",
"title": ""
},
{
"docid": "7ad194d865b92f1956ef89f9e8ede31e",
"text": "The Social Media Intelligence Analyst is a new operational role within a State Control Centre in Victoria, Australia dedicated to obtaining situational awareness from social media to support decision making for emergency management. We outline where this role fits within the structure of a command and control organization, describe the requirements for such a position and detail the operational activities expected during an emergency event. As evidence of the importance of this role, we provide three real world examples where important information was obtained from social media which led to improved outcomes for the community concerned. This is the first time a dedicated role has been formally established solely for monitoring social media for emergency management intelligence gathering purposes in Victoria. To the best of our knowledge, it is also the first time such a dedicated position in an operational crisis coordination centre setting has been described in the literature.",
"title": ""
},
{
"docid": "f7e3d9070792af014b4b9ebaaf047e44",
"text": "Machine Learning algorithms are increasingly being used in recent years due to their flexibility in model fitting and increased predictive performance. However, the complexity of the models makes them hard for the data analyst to interpret the results and explain them without additional tools. This has led to much research in developing various approaches to understand the model behavior. In this paper, we present the Explainable Neural Network (xNN), a structured neural network designed especially to learn interpretable features. Unlike fully connected neural networks, the features engineered by the xNN can be extracted from the network in a relatively straightforward manner and the results displayed. With appropriate regularization, the xNN provides a parsimonious explanation of the relationship between the features and the output. We illustrate this interpretable feature–engineering property on simulated examples.",
"title": ""
},
{
"docid": "cbce30ed2bbdcd25fb708394dff1b7b6",
"text": "Current syntactic accounts of English resultatives are based on the assumption that result XPs are predicated of underlying direct objects. This assumption has helped to explain the presence of reflexive pronouns with some intransitive verbs but not others and the apparent lack of result XPs predicated of subjects of transitive verbs. We present problems for and counterexamples to some of the basic assumptions of the syntactic approach, which undermine its explanatory power. We develop an alternative account that appeals to principles governing the well-formedness of event structure and the event structure-to-syntax mapping. This account covers the data on intransitive verbs and predicts the distribution of subject-predicated result XPs with transitive verbs.*",
"title": ""
}
] |
scidocsrr
|
32cc0f94addc0417f12e45af5c0910fd
|
A neural network based predictive mechanism for available bandwidth
|
[
{
"docid": "7b94828573579b393a371d64d5125f64",
"text": "This paper presents an artificial neural network(ANN) approach to electric load forecasting. The ANN is used to learn the relationship among past, current and future temperatures and loads. In order to provide the forecasted load, the ANN interpolates among the load and temperature data in a training data set. The average absolute errors of the one-hour and 24-hour ahead forecasts in our test on actual utility data are shown to be 1.40% and 2.06%, respectively. This compares with an average error of 4.22% for 24hour ahead forecasts with a currently used forecasting technique applied to the same data.",
"title": ""
}
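In the spirit of the ANN forecaster above, the sketch below fits a small multilayer perceptron that maps the three previous hourly loads and the current temperature to the current load; the sinusoidal synthetic data stand in for utility records, and the network size is an arbitrary choice.

```python
# Sketch: a small neural network interpolating among past loads and the
# temperature to predict the next hourly load. Data are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
hours = np.arange(24 * 60)
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)
temp = 15 + 10 * np.sin(2 * np.pi * (hours - 6) / 24) + rng.normal(0, 1, hours.size)

idx = np.arange(3, hours.size)
X = np.column_stack([load[idx - 3], load[idx - 2], load[idx - 1], temp[idx]])
y = load[idx]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:-24], y[:-24])                          # hold out the last day
pred = model.predict(X[-24:])
mape = np.mean(np.abs(pred - y[-24:]) / y[-24:]) * 100
print(f"one-hour-ahead MAPE on the held-out day: {mape:.2f}%")
```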
] |
[
{
"docid": "48716199f7865e8cf16fc723b897bb13",
"text": "The current study aimed to review studies on computational thinking (CT) indexed in Web of Science (WOS) and ERIC databases. A thorough search in electronic databases revealed 96 studies on computational thinking which were published between 2006 and 2016. Studies were exposed to a quantitative content analysis through using an article control form developed by the researchers. Studies were summarized under several themes including the research purpose, design, methodology, sampling characteristics, data analysis, and main findings. The findings were reported using descriptive statistics to see the trends. It was observed that there was an increase in the number of CT studies in recent years, and these were mainly conducted in the field of computer sciences. In addition, CT studies were mostly published in journals in the field of Education and Instructional Technologies. Theoretical paradigm and literature review design were preferred more in previous studies. The most commonly used sampling method was the purposive sampling. It was also revealed that samples of previous CT studies were generally pre-college students. Written data collection tools and quantitative analysis were mostly used in reviewed papers. Findings mainly focused on CT skills. Based on current findings, recommendations and implications for further researches were provided.",
"title": ""
},
{
"docid": "093deb80586f3bb3295354d3878d32cd",
"text": "Augmented feedback (AF) can play an important role when learning or improving a motor skill. As research dealing with AF is broad and diverse, the purpose of this review is to provide the reader with an overview of the use of AF in exercise, motor learning and injury prevention research with respect to how it can be presented, its informational content and the limitations. The term 'augmented' feedback is used because additional information provided by an external source is added to the task-intrinsic feedback that originates from a person's sensory system. In recent decades, numerous studies from various fields within sport science (exercise science, sports medicine, motor control and learning, psychology etc.) have investigated the potential influence of AF on performance improvements. The first part of the review gives a theoretical background on feedback in general but particularly AF. The second part tries to highlight the differences between feedback that is given as knowledge of result and knowledge of performance. The third part introduces studies which have applied AF in exercise and prevention settings. Finally, the limitations of feedback research and the possible reasons for the diverging findings are discussed. The focus of this review lies mainly on the positive influence of AF on motor performance. Underlying neuronal adaptations and theoretical assumptions from learning theories are addressed briefly.",
"title": ""
},
{
"docid": "15b8b0f3682e2eb7c1b1a62be65d6327",
"text": "Data augmentation is widely used to train deep neural networks for image classification tasks. Simply flipping images can help learning by increasing the number of training images by a factor of two. However, data augmentation in natural language processing is much less studied. Here, we describe two methods for data augmentation for Visual Question Answering (VQA). The first uses existing semantic annotations to generate new questions. The second method is a generative approach using recurrent neural networks. Experiments show the proposed schemes improve performance of baseline and state-of-the-art VQA algorithms.",
"title": ""
},
{
"docid": "526406ca138d241c6d464fa192c7b0e8",
"text": "BACKGROUND AND PURPOSE\nWe sought to determine knowledge at the time of symptom onset regarding the signs, symptoms, and risk factors of stroke in patients presenting to the emergency department with potential stroke.\n\n\nMETHODS\nPatients admitted from the emergency department with possible stroke were identified prospectively. A standardized, structured interview with open-ended questions was performed within 48 hours of symptom onset to assess patients' knowledge base concerning stroke signs, symptoms, and risk factors.\n\n\nRESULTS\nOf the 174 eligible patients, 163 patients were able to respond to the interview questions. Of these 163 patients, 39% (63) did not know a single sign or symptom of stroke. Unilateral weakness (26%) and numbness (22%) were the most frequently noted symptoms. Patients aged > or = 65 years were less likely to know a sign or symptom of stroke than those aged < 65 years (percentage not knowing a single sign or symptom, 47% versus 28%, P = .016). Similarly, 43% of patients did not know a single risk factor for stroke. The elderly were less likely to know a risk factor than their younger counterparts.\n\n\nCONCLUSIONS\nAlmost 40% of patients admitted with a possible stroke did not know the signs, symptoms, or risk factor of a stroke. Further public education is needed to increase awareness of the warning signs and risk factors of stroke.",
"title": ""
},
{
"docid": "79d5f3f80af5d5ad0c25ce9a900c5aae",
"text": "We propose a weighted variational model to estimate both the reflectance and the illumination from an observed image. We show that, though it is widely adopted for ease of modeling, the log-transformed image for this task is not ideal. Based on the previous investigation of the logarithmic transformation, a new weighted variational model is proposed for better prior representation, which is imposed in the regularization terms. Different from conventional variational models, the proposed model can preserve the estimated reflectance with more details. Moreover, the proposed model can suppress noise to some extent. An alternating minimization scheme is adopted to solve the proposed model. Experimental results demonstrate the effectiveness of the proposed model with its algorithm. Compared with other variational methods, the proposed method yields comparable or better results on both subjective and objective assessments.",
"title": ""
},
{
"docid": "50866b91bb03dbf9293d6d6f23a28804",
"text": "The Rapidly-exploring Random Tree (RRT) is a classical algorithm of motion planning based on incremental sampling, which is widely used to solve the planning problem of mobile robots. But it, due to the meandering path, the inaccurate terminal state and the slow exploration, is often inefficient in many applications such as autonomous road vehicles. To address these issues and considering the realistic context of autonomous road vehicles, this paper proposes a fast RRT algorithm that introduces an off-line template set based on the traffic scenes and an aggressive extension strategy of search tree. Both improvements can lead to a faster and more accurate RRT towards the goal. Meanwhile, our approach combines the closed-loop prediction approach using the model of vehicle, which can smooth the portion of off-line template and the portion of on-line tree generated, while a trajectory and control sequence for the vehicle would be obtained. Experimental results illustrate that our method is fast and efficient in solving planning queries of autonomous road vehicle in urban environments.",
"title": ""
},
{
"docid": "bebd28ef9a3c330cb1d2083be5369a20",
"text": "A low voltage differential signaling (LVDS) interface circuit for inter-chip communication in a DSL system has been designed, integrated and verified in 130 nm CMOS technology. Tailored for low supply voltage, the nominal transmitter differential output voltage is 330 mV with 640 mV common-mode (CM) so it is not fully compatible with the LVDS standard. To achieve high data rate performance, DC closed loop control was used in the transmitter together with wide CM input multi-stage receiver and source/load termination. 1.2 Gbps operation at 1.5 V supply was measured on fabricated test chips comprising LVDS transmitter, receiver, serial-to-parallel data framing and clocking. Power dissipation with one set of Receiver/Transmitter active and BIST has been measured to be 67.5 mW and the area of the interface is 0.45 mm2.",
"title": ""
},
{
"docid": "5bb63d07c8d7c743c505e6fd7df3dc4f",
"text": "XML similarity evaluation has become a central issue in the database and information communities, its applications ranging over document clustering, version control, data integration and ranked retrieval. Various algorithms for comparing hierarchically structured data, XML documents in particular, have been proposed in the literature. Most of them make use of techniques for finding the edit distance between tree structures, XML documents being commonly modeled as Ordered Labeled Trees. Yet, a thorough investigation of current approaches led us to identify several similarity aspects, i.e., sub-tree related structural and semantic similarities, which are not sufficiently addressed while comparing XML documents. In this paper, we provide an integrated and fine-grained comparison framework to deal with both structural and semantic similarities in XML documents (detecting the occurrences and repetitions of structurally and semantically similar sub-trees), and to allow the end-user to adjust the comparison process according to her requirements. Our framework consists of four main modules for i) discovering the structural commonalities between sub-trees, ii) identifying sub-tree semantic resemblances, iii) computing tree-based edit operations costs, and iv) computing tree edit distance. Experimental results demonstrate higher comparison accuracy with respect to alternative methods, while timing experiments reflect the impact of semantic similarity on overall system performance. © 2002 Elsevier Science. All rights reserved.",
"title": ""
},
{
"docid": "b81f30a692d57ebc2fdef7df652d0ca2",
"text": "Suppose that Alice wishes to send messages to Bob through a communication channel C1, but her transmissions also reach an eavesdropper Eve through another channel C2. This is the wiretap channel model introduced by Wyner in 1975. The goal is to design a coding scheme that makes it possible for Alice to communicate both reliably and securely. Reliability is measured in terms of Bob's probability of error in recovering the message, while security is measured in terms of the mutual information between the message and Eve's observations. Wyner showed that the situation is characterized by a single constant Cs, called the secrecy capacity, which has the following meaning: for all ε >; 0, there exist coding schemes of rate R ≥ Cs-ε that asymptotically achieve the reliability and security objectives. However, his proof of this result is based upon a random-coding argument. To date, despite consider able research effort, the only case where we know how to construct coding schemes that achieve secrecy capacity is when Eve's channel C2 is an erasure channel, or a combinatorial variation thereof. Polar codes were recently invented by Arikan; they approach the capacity of symmetric binary-input discrete memoryless channels with low encoding and decoding complexity. In this paper, we use polar codes to construct a coding scheme that achieves the secrecy capacity for a wide range of wiretap channels. Our construction works for any instantiation of the wiretap channel model, as long as both C1 and C2 are symmetric and binary-input, and C2 is degraded with respect to C1. Moreover, we show how to modify our construction in order to provide strong security, in the sense defined by Maurer, while still operating at a rate that approaches the secrecy capacity. In this case, we cannot guarantee that the reliability condition will also be satisfied unless the main channel C1 is noiseless, although we believe it can be always satisfied in practice.",
"title": ""
},
{
"docid": "cc432f79b99f348863e1371dd1511b77",
"text": "Most evaluation metrics for machine translation (MT) require reference translations for each sentence in order to produce a score reflecting certain aspects of its quality. The de facto metrics, BLEU and NIST, are known to have good correlation with human evaluation at the corpus level, but this is not the case at the segment level. As an attempt to overcome these two limitations, we address the problem of evaluating the quality of MT as a prediction task, where reference-independent features are extracted from the input sentences and their translation, and a quality score is obtained based on models produced from training data. We show that this approach yields better correlation with human evaluation as compared to commonly used metrics, even with models trained on different MT systems, language-pairs and text domains.",
"title": ""
},
{
"docid": "3f2936691ba0ea79cd1ac3b04ce0aee9",
"text": "Although the frequentist paradigm has been the predominant approach to clinical trial design since the 1940s, it has several notable limitations. Advancements in computational algorithms and computer hardware have greatly enhanced the alternative Bayesian paradigm. Compared with its frequentist counterpart, the Bayesian framework has several unique advantages, and its incorporation into clinical trial design is occurring more frequently. Using an extensive literature review to assess how Bayesian methods are used in clinical trials, we find them most commonly used for dose finding, efficacy monitoring, toxicity monitoring, diagnosis/decision making, and studying pharmacokinetics/pharmacodynamics. The additional infrastructure required for implementing Bayesian methods in clinical trials may include specialized software programs to run the study design, simulation and analysis, and web-based applications, all of which are particularly useful for timely data entry and analysis. Trial success requires not only the development of proper tools but also timely and accurate execution of data entry, quality control, adaptive randomization, and Bayesian computation. The relative merit of the Bayesian and frequentist approaches continues to be the subject of debate in statistics. However, more evidence can be found showing the convergence of the two camps, at least at the practical level. Ultimately, better clinical trial methods lead to more efficient designs, lower sample sizes, more accurate conclusions, and better outcomes for patients enrolled in the trials. Bayesian methods offer attractive alternatives for better trials. More Bayesian trials should be designed and conducted to refine the approach and demonstrate their real benefit in action.",
"title": ""
},
{
"docid": "4d3ba5824551b06c861fc51a6cae41a5",
"text": "This paper shows a gate driver design for 1.7 kV SiC MOSFET module as well a Rogowski-coil based current sensor for effective short circuit protection. The design begins with the power architecture selection for better common-mode noise immunity as the driver is subjected to high dv/dt due to the very high switching speed of the SiC MOSFET modules. The selection of the most appropriate gate driver IC is made to ensure the best performance and full functionalities of the driver, followed by the circuitry designs of paralleled external current booster, Soft Turn-Off, and Miller Clamp. In addition to desaturation, a high bandwidth PCB-based Rogowski current sensor is proposed to serve as a more effective method for the short circuit protection for the high-cost SiC MOSFET modules.",
"title": ""
},
{
"docid": "a0d35eccc2277279c2a99b7a400cb79f",
"text": "To explore the relationship of gut microbiota with the development of type 2 diabetes (T2DM), we analyzed 121 subjects who were divided into 3 groups based on their glucose intolerance status: normal glucose tolerance (NGT; n = 44), prediabetes (Pre-DM; n = 64), or newly diagnosed T2DM (n = 13). Gut microbiota characterizations were determined with 16S rDNA-based high-throughput sequencing. T2DM-related dysbiosis was observed, including the separation of microbial communities and a change of alpha diversity between the different glucose intolerance statuses. To assess the correlation between metabolic parameters and microbiota diversity, clinical characteristics were also measured and a significant association between metabolic parameters (FPG, CRP) and gut microbiota was found. In addition, a total of 28 operational taxonomic units (OTUs) were found to be related to T2DM status by the Kruskal-Wallis H test, most of which were enriched in the T2DM group. Butyrate-producing bacteria (e.g. Akkermansia muciniphila ATCCBAA-835, and Faecalibacterium prausnitzii L2-6) had a higher abundance in the NGT group than in the pre-DM group. At genus level, the abundance of Bacteroides in the T2DM group was only half that of the NGT and Pre-DM groups. Previously reported T2DM-related markers were also compared with the data in this study, and some inconsistencies were noted. We found that Verrucomicrobiae may be a potential marker of T2DM as it had a significantly lower abundance in both the pre-DM and T2DM groups. In conclusion, this research provides further evidence of the structural modulation of gut microbiota in the pathogenesis of diabetes.",
"title": ""
},
{
"docid": "bee4bd3019983dc7f66cfd3dafc251ac",
"text": "We present a framework to systematically analyze convolutional neural networks (CNNs) used in classification of cars in autonomous vehicles. Our analysis procedure comprises an image generator that produces synthetic pictures by sampling in a lower dimension image modification subspace and a suite of visualization tools. The image generator produces images which can be used to test the CNN and hence expose its vulnerabilities. The presented framework can be used to extract insights of the CNN classifier, compare across classification models, or generate training and validation datasets.",
"title": ""
},
{
"docid": "dbab8fdd07b1180ba425badbd1616bb2",
"text": "The proliferation of cyber-physical systems introduces the fourth stage of industrialization, commonly known as Industry 4.0. The vertical integration of various components inside a factory to implement a flexible and reconfigurable manufacturing system, i.e., smart factory, is one of the key features of Industry 4.0. In this paper, we present a smart factory framework that incorporates industrial network, cloud, and supervisory control terminals with smart shop-floor objects such as machines, conveyers, and products. Then, we provide a classification of the smart objects into various types of agents and define a coordinator in the cloud. The autonomous decision and distributed cooperation between agents lead to high flexibility. Moreover, this kind of self-organized system leverages the feedback and coordination by the central coordinator in order to achieve high efficiency. Thus, the smart factory is characterized by a self-organized multi-agent system assisted with big data based feedback and coordination. Based on this model, we propose an intelligent negotiation mechanism for agents to cooperate with each other. Furthermore, the study illustrates that complementary strategies can be designed to prevent deadlocks by improving the agents’ decision making and the coordinator’s behavior. The simulation results assess the effectiveness of the proposed negotiation mechanism and deadlock prevention strategies. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "de760fd6990bcf3e980e5fab24757621",
"text": "The concept of ‘open innovation’ has received a considerable amount of coverage within the academic literature and beyond. Much of this seems to have been without much critical analysis of the evidence. In this paper, we show how Chesbrough creates a false dichotomy by arguing that open innovation is the only alternative to a closed innovation model. We systematically examine the six principles of the open innovation concept and show how the Open Innovation paradigm has created a partial perception by describing something which is undoubtedly true in itself (the limitations of closed innovation principles), but false in conveying the wrong impression that firms today follow these principles. We hope that our examination and scrutiny of the ‘open innovation’ concept contributes to the debate on innovation management and helps enrich our understanding.",
"title": ""
},
{
"docid": "746f77aad26e3e3492ef021ac0d7da6a",
"text": "The proliferation of mobile computing and smartphone technologies has resulted in an increasing number and range of services from myriad service providers. These mobile service providers support numerous emerging services with differing quality metrics but similar functionality. Facilitating an automated service workflow requires fast selection and composition of services from the services pool. The mobile environment is ambient and dynamic in nature, requiring more efficient techniques to deliver the required service composition promptly to users. Selecting the optimum required services in a minimal time from the numerous sets of dynamic services is a challenge. This work addresses the challenge as an optimization problem. An algorithm is developed by combining particle swarm optimization and k-means clustering. It runs in parallel using MapReduce in the Hadoop platform. By using parallel processing, the optimum service composition is obtained in significantly less time than alternative algorithms. This is essential for handling large amounts of heterogeneous data and services from various sources in the mobile environment. The suitability of this proposed approach for big data-driven service composition is validated through modeling and simulation.",
"title": ""
},
{
"docid": "a7eec693523207e6a9547000c1fbf306",
"text": "Articulated hand tracking systems have been commonly used in virtual reality applications, including systems with human-computer interaction or interaction with game consoles. However, building an effective real-time hand pose tracker remains challenging. In this paper, we present a simple and efficient methodology for tracking and reconstructing 3d hand poses using a markered optical motion capture system. Markers were positioned at strategic points, and an inverse kinematics solver was incorporated to fit the rest of the joints to the hand model. The model is highly constrained with rotational and orientational constraints, allowing motion only within a feasible set. The method is real-time implementable and the results are promising, even with a low frame rate.",
"title": ""
},
{
"docid": "4b8f59d1b416d4869ae38dbca0eaca41",
"text": "This study investigates high frequency currency trading with neural networks trained via Recurrent Reinforcement Learning (RRL). We compare the performance of single layer networks with networks having a hidden layer, and examine the impact of the fixed system parameters on performance. In general, we conclude that the trading systems may be effective, but the performance varies widely for different currency markets and this variability cannot be explained by simple statistics of the markets. Also we find that the single layer network outperforms the two layer network in this application.",
"title": ""
},
{
"docid": "75233d6d94fec1f43fa02e8043470d4d",
"text": "Out-of-autoclave (OoA) prepreg materials and methods have gained acceptance over the past decade because of the ability to produce autoclave-quality components under vacuum-bag-only (VBO) cure. To achieve low porosity and tight dimensional tolerances, VBO prepregs rely on specific microstructural features and processing techniques. Furthermore, successful cure is contingent upon appropriate material property and process parameter selection. In this article, we review the existing literature on VBO prepreg processing to summarize and synthesize knowledge on these issues. First, the context, development, and defining properties of VBO prepregs are presented. The key processing phenomena and the influence on quality are subsequently described. Finally, cost and environmental performance are considered. Throughout, we highlight key considerations for VBO prepreg processing and identify areas where further study is required.",
"title": ""
}
] |
scidocsrr
|
fcbebd940f001b306b7f68486b0a7c77
|
Expression: Visualizing Affective Content from Social Streams
|
[
{
"docid": "88e535a63f5c594edb18167ec8a78750",
"text": "Finding the weakness of the products from the customers’ feedback can help manufacturers improve their product quality and competitive strength. In recent years, more and more people express their opinions about products online, and both the feedback of manufacturers’ products or their competitors’ products could be easily collected. However, it’s impossible for manufacturers to read every review to analyze the weakness of their products. Therefore, finding product weakness from online reviews becomes a meaningful work. In this paper, we introduce such an expert system, Weakness Finder, which can help manufacturers find their product weakness from Chinese reviews by using aspects based sentiment analysis. An aspect is an attribute or component of a product, such as price, degerm, moisturizing are the aspects of the body wash products. Weakness Finder extracts the features and groups explicit features by using morpheme based method and Hownet based similarity measure, and identify and group the implicit features with collocation selection method for each aspect. Then utilize sentence based sentiment analysis method to determine the polarity of each aspect in sentences. The weakness of product could be found because the weakness is probably the most unsatisfied aspect in customers’ reviews, or the aspect which is more unsatisfied when compared with their competitor’s product reviews. Weakness Finder has been used to help a body wash manufacturer find their product weakness, and our experimental results demonstrate the good performance of the Weakness Finder. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6081f8b819133d40522a4698d4212dfc",
"text": "We present a lexicon-based approach to extracting sentiment from text. The Semantic Orientation CALculator (SO-CAL) uses dictionaries of words annotated with their semantic orientation (polarity and strength), and incorporates intensification and negation. SO-CAL is applied to the polarity classification task, the process of assigning a positive or negative label to a text that captures the text's opinion towards its main subject matter. We show that SO-CAL's performance is consistent across domains and in completely unseen data. Additionally, we describe the process of dictionary creation, and our use of Mechanical Turk to check dictionaries for consistency and reliability.",
"title": ""
},
{
"docid": "ae5142ef32fde6096ea4e4a41ba60cb6",
"text": "Social media is playing a growing role in elections world-wide. Thus, automatically analyzing electoral tweets has applications in understanding how public sentiment is shaped, tracking public sentiment and polarization with respect to candidates and issues, understanding the impact of tweets from various entities, etc. Here, for the first time, we automatically annotate a set of 2012 US presidential election tweets for a number of attributes pertaining to sentiment, emotion, purpose, and style by crowdsourcing. Overall, more than 100,000 crowdsourced responses were obtained for 13 questions on emotions, style, and purpose. Additionally, we show through an analysis of these annotations that purpose, even though correlated with emotions, is significantly different. Finally, we describe how we developed automatic classifiers, using features from state-of-the-art sentiment analysis systems, to predict emotion and purpose labels, respectively, in new unseen tweets. These experiments establish baseline results for automatic systems on this new data.",
"title": ""
}
] |
[
{
"docid": "b6508d1f2b73b90a0cfe6399f6b44421",
"text": "An alternative to land spreading of manure effluents is to mass-culture algae on the N and P present in the manure and convert manure N and P into algal biomass. The objective of this study was to determine how the fatty acid (FA) content and composition of algae respond to changes in the type of manure, manure loading rate, and to whether the algae was grown with supplemental carbon dioxide. Algal biomass was harvested weekly from indoor laboratory-scale algal turf scrubber (ATS) units using different loading rates of raw and anaerobically digested dairy manure effluents and raw swine manure effluent. Manure loading rates corresponded to N loading rates of 0.2 to 1.3 g TN m−2 day−1 for raw swine manure effluent and 0.3 to 2.3 g TN m−2 day−1 for dairy manure effluents. In addition, algal biomass was harvested from outdoor pilot-scale ATS units using different loading rates of raw and anaerobically digested dairy manure effluents. Both indoor and outdoor units were dominated by Rhizoclonium sp. FA content values of the algal biomass ranged from 0.6 to 1.5% of dry weight and showed no consistent relationship to loading rate, type of manure, or to whether supplemental carbon dioxide was added to the systems. FA composition was remarkably consistent among samples and >90% of the FA content consisted of 14:0, 16:0, 16:1ω7, 16:1ω9, 18:0, 18:1ω9, 18:2 ω6, and 18:3ω3.",
"title": ""
},
{
"docid": "6a65623ddcf2f056cd35724d16805e8f",
"text": "641 It has been over two decades since the discovery of quantum tele portation, in what is arguably one of the most interesting and exciting implications of the ‘weirdness’ of quantum mechanics. Prior to this landmark discovery, the fascinating idea of teleporta tion belonged in the realm of science fiction. First coined in 1931 by Charles H. Fort1, the term ‘teleportation’ has since been used to refer to the process by which bodies and objects are transferred from one location to another, without actually making the jour ney along the way. Since then it has become a fixture of pop cul ture, perhaps best exemplified by Star Trek’s celebrated catchphrase “Beam me up, Scotty.” In 1993, a seminal paper2 described a quantum information protocol, dubbed quantum teleportation, that shares several of the above features. In this protocol, an unknown quantum state of a physical system is measured and subsequently reconstructed or ‘reassembled’ at a remote location (the physical constituents of the original system remain at the sending location). This process requires classical communication and excludes superluminal com munication. Most importantly, it requires the resource of quantum entanglement3,4. Indeed, quantum teleportation can be seen as the protocol in quantum information that most clearly demonstrates the character of quantum entanglement as a resource: without its presence, such a quantum state transfer would not be possible within the laws of quantum mechanics. Quantum teleportation plays an active role in the progress of quantum information science5–8. On the one hand, it is a concep tual protocol that is crucial in the development of formal quantum information theory; on the other, it represents a fundamental ingre dient to the development of many quantum technologies. Quantum repeaters9, quantum gate teleportation10, measurementbased quan tum computing11 and portbased teleportation12 all derive from the basic scheme of quantum teleportation. The vision of a quantum network13 draws inspiration from this scheme. Teleportation has also been used as a simple tool for exploring ‘extreme’ physics, such as closed timelike curves14. Today, quantum teleportation has been achieved in laboratories around the world using a variety of different substrates and technolo gies, including photonic qubits (light polarization15–21, single rail22,23, dual rails24,25, timebin26–28 and spinorbital qubits29), nuclear mag netic resonance (NMR)30, optical modes31–39, atomic ensembles40–43, Advances in quantum teleportation",
"title": ""
},
{
"docid": "74235290789c24ce00d54541189a4617",
"text": "This article deals with an interesting application of Fractional Order (FO) Proportional Integral Derivative (PID) Controller for speed regulation in a DC Motor Drive. The design of five interdependent Fractional Order controller parameters has been formulated as an optimization problem based on minimization of set point error and controller output. The task of optimization was carried out using Artificial Bee Colony (ABC) algorithm. A comparative study has also been made to highlight the advantage of using a Fractional order PID controller over conventional PID control scheme for speed regulation of application considered. Extensive simulation results are provided to validate the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "f06e1cd245863415531e65318c97f96b",
"text": "In this paper, we propose a new joint dictionary learning method for example-based image super-resolution (SR), using sparse representation. The low-resolution (LR) dictionary is trained from a set of LR sample image patches. Using the sparse representation coefficients of these LR patches over the LR dictionary, the high-resolution (HR) dictionary is trained by minimizing the reconstruction error of HR sample patches. The error criterion used here is the mean square error. In this way we guarantee that the HR patches have the same sparse representation over HR dictionary as the LR patches over the LR dictionary, and at the same time, these sparse representations can well reconstruct the HR patches. Simulation results show the effectiveness of our method compared to the state-of-art SR algorithms.",
"title": ""
},
{
"docid": "117c66505964344d9c350a4e57a4a936",
"text": "Sorting is a key kernel in numerous big data application including database operations, graphs and text analytics. Due to low control overhead, parallel bitonic sorting networks are usually employed for hardware implementations to accelerate sorting. Although a typical implementation of merge sort network can lead to low latency and small memory usage, it suffers from low throughput due to the lack of parallelism in the final stage. We analyze a pipelined merge sort network, showing its theoretical limits in terms of latency, memory and, throughput. To increase the throughput, we propose a merge sort based hybrid design where the final few stages in the merge sort network are replaced with “folded” bitonic merge networks. In these “folded” networks, all the interconnection patterns are realized by streaming permutation networks (SPN). We present a theoretical analysis to quantify latency, memory and throughput of our proposed design. Performance evaluations are performed by experiments on Xilinx Virtex-7 FPGA with post place-androute results. We demonstrate that our implementation achieves a throughput close to 10 GBps, outperforming state-of-the-art implementation of sorting on the same hardware by 1.2x, while preserving lower latency and higher memory efficiency.",
"title": ""
},
{
"docid": "4f096ba7fc6164cdbf5d37676d943fa8",
"text": "This work presents an intelligent clothes search system based on domain knowledge, targeted at creating a virtual assistant to search clothes matched to fashion and userpsila expectation using all what have already been in real closet. All what garment essentials and fashion knowledge are from visual images. Users can simply submit the desired image keywords, such as elegant, sporty, casual, and so on, and occasion type, such as formal meeting, outdoor dating, and so on, to the system. And then the fashion style recognition module is activated to search the desired clothes within the personal garment database. Category learning with supervised neural networking is applied to cluster garments into different impression groups. The input stimuli of the neural network are three sensations, warmness, loudness, and softness, which are transformed from the physical garment essentials like major color tone, print type, and fabric material. The system aims to provide such an intelligent user-centric services system functions as a personal fashion advisor.",
"title": ""
},
{
"docid": "e27575b8d7a7455f1a8f941adb306a04",
"text": "Seung-Joon Yi GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: yiseung@seas.upenn.edu Stephen G. McGill GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: smcgill3@seas.upenn.edu Larry Vadakedathu GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: vlarry@seas.upenn.edu Qin He GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: heqin@seas.upenn.edu Inyong Ha Robotis, Seoul, Korea e-mail: dudung@robotis.com Jeakweon Han Robotis, Seoul, Korea e-mail: jkhan@robotis.com Hyunjong Song Robotis, Seoul, Korea e-mail: hjsong@robotis.com Michael Rouleau RoMeLa, Virginia Tech, Blacksburg, Virginia 24061 e-mail: mrouleau@vt.edu Byoung-Tak Zhang BI Lab, Seoul National University, Seoul, Korea e-mail: btzhang@bi.snu.ac.kr Dennis Hong RoMeLa, University of California, Los Angeles, Los Angeles, California 90095 e-mail: dennishong@ucla.edu Mark Yim GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: yim@seas.upenn.edu Daniel D. Lee GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: ddlee@seas.upenn.edu",
"title": ""
},
{
"docid": "4d2986dffedadfd425505f9e25c5f6cb",
"text": "BACKGROUND\nThe use of heart rate variability (HRV) in the management of sport training is a practice which tends to spread, especially in order to prevent the occurrence of states of fatigue.\n\n\nOBJECTIVE\nTo estimate the HRV parameters obtained using a heart rate recording, according to different loads of sporting activities, and to make the possible link with the appearance of fatigue.\n\n\nMETHODS\nEight young football players, aged 14.6 years+/-2 months, playing at league level in Rhône-Alpes, training for 10 to 20 h per week, were followed over a period of 5 months, allowing to obtain 54 recordings of HRV in three different conditions: (i) after rest (ii) after a day with training and (iii) after a day with a competitive match.\n\n\nRESULTS\nUnder the effect of a competitive match, the HRV temporal indicators (heart rate, RR interval, and pNN50) were significantly altered compared to the rest day. The analysis of the sympathovagal balance rose significantly as a result of the competitive constraint (0.72+/-0.17 vs. 0.90+/-0.20; p<0.05).\n\n\nCONCLUSION\nThe main results obtained show that the HRV is an objective and non-invasive monitoring of management of the training of young sportsmen. HRV analysis allowed to highlight any neurovegetative adjustments according to the physical loads. Thus, under the effect of an increase of physical and psychological constraints that a football match represents, the LF/HF ratio rises significantly; reflecting increased sympathetic stimulation, which beyond certain limits could be relevant to prevent the emergence of a state of fatigue.",
"title": ""
},
{
"docid": "5236f684bc0fdf11855a439c9d3256f6",
"text": "The smart home is an environment, where heterogeneous electronic devices and appliances are networked together to provide smart services in a ubiquitous manner to the individuals. As the homes become smarter, more complex, and technology dependent, the need for an adequate security mechanism with minimum individual’s intervention is growing. The recent serious security attacks have shown how the Internet-enabled smart homes can be turned into very dangerous spots for various ill intentions, and thus lead the privacy concerns for the individuals. For instance, an eavesdropper is able to derive the identity of a particular device/appliance via public channels that can be used to infer in the life pattern of an individual within the home area network. This paper proposes an anonymous secure framework (ASF) in connected smart home environments, using solely lightweight operations. The proposed framework in this paper provides efficient authentication and key agreement, and enables devices (identity and data) anonymity and unlinkability. One-time session key progression regularly renews the session key for the smart devices and dilutes the risk of using a compromised session key in the ASF. It is demonstrated that computation complexity of the proposed framework is low as compared with the existing schemes, while security has been significantly improved.",
"title": ""
},
{
"docid": "373f0adcc61c010f85bd3839e6bd0fca",
"text": "Clusters in document streams, such as online news articles, can be induced by their textual contents, as well as by the temporal dynamics of their arriving patterns. Can we leverage both sources of information to obtain a better clustering of the documents, and distill information that is not possible to extract using contents only? In this paper, we propose a novel random process, referred to as the Dirichlet-Hawkes process, to take into account both information in a unified framework. A distinctive feature of the proposed model is that the preferential attachment of items to clusters according to cluster sizes, present in Dirichlet processes, is now driven according to the intensities of cluster-wise self-exciting temporal point processes, the Hawkes processes. This new model establishes a previously unexplored connection between Bayesian Nonparametrics and temporal Point Processes, which makes the number of clusters grow to accommodate the increasing complexity of online streaming contents, while at the same time adapts to the ever changing dynamics of the respective continuous arrival time. We conducted large-scale experiments on both synthetic and real world news articles, and show that Dirichlet-Hawkes processes can recover both meaningful topics and temporal dynamics, which leads to better predictive performance in terms of content perplexity and arrival time of future documents.",
"title": ""
},
{
"docid": "d9b19dd523fd28712df61384252d331c",
"text": "Purpose – The purpose of this paper is to examine the ways in which governments build social media and information and communication technologies (ICTs) into e-government transparency initiatives, to promote collaboration with members of the public and the ways in members of the public are able to employ the same social media to monitor government activities. Design/methodology/approach – This study used an iterative strategy that involved conducting a literature review, content analysis, and web site analysis, offering multiple perspectives on government transparency efforts, the role of ICTs and social media in these efforts, and the ability of e-government initiatives to foster collaborative transparency through embedded ICTs and social media. Findings – The paper identifies key initiatives, potential impacts, and future challenges for collaborative e-government as a means of transparency. Originality/value – The paper is one of the first to examine the interrelationships between ICTs, social media, and collaborative e-government to facilitate transparency.",
"title": ""
},
{
"docid": "8f07b133447700536c15edb97d4d8c38",
"text": "Author Title Annotation Domain Genre Caesar De Bello Gallico (BG) ~59,000 wd Source Historiography Pliny Epistulae (Ep) ~18,500 wd Target-1 Letters Ovid Ars Amatoria (AA) ~17,500 wd Target-2 Elegiac Poetry • Active Learning • Maximize improvement rate per additional sentence annotated • Provide user with realistic expectations • Predict expected accuracy gain per sentence annotated • User input augments training data, improves domain coverage",
"title": ""
},
{
"docid": "5d150ffc94f7489f19bf4004fabf4f9c",
"text": "Multi objective optimization is a promising field which is increasingly being encountered in many areas worldwide. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used to solve Multi objective problems. Various multiobjective evolutionary algorithms have been developed. Their principal reason for development is their ability to find multiple Pareto optimal solution in single run. Their Basic motive of evolutionary multiobjective optimization in contrast to singleobjective optimization was optimality, decision making algorithm design (fitness, diversity, and elitism), constraints, and preference. The goal of this paper is to trace the genealogy & review the state of the art of evolutionary multiobjective optimization algorithms.",
"title": ""
},
{
"docid": "43121c7d44b3ad134a2a8ad42b1d43ef",
"text": "Web services are emerging technologies to reuse software as services over the Internet by wrapping underlying computing models with XML. Web services are rapidly evolving and are expected to change the paradigms of both software development and use. This panel will discuss the current status and challenges of Web services technologies.",
"title": ""
},
{
"docid": "a97151d20ac0f25e1897b9e66eb77e9b",
"text": "In this paper, we propose a novel system-Intelligent Personalized Fashion Recommendation System, which creates a new space in web multimedia mining and recommendation. The proposed system significantly helps customers find their most suitable fashion choices in mass fashion information in the virtual space based on multimedia mining. There are three stand-alone models developed in this paper to optimize the analysis of fashion features in mass fashion trend: (i). Interaction and recommender model, which associated clients' personalized demand with the current fashion trend, and helps clients find the most favorable fashion factors in trend. (ii). Evolutionary hierachical fashion multimedia mining model, which creates a hierachical structure to filer the key components of fashion multimedia information in the virtual space, and it proves to be more efficient for web mass multimedia mining in an evolutionary way. (iii). Color tone analysis model, a relevant and straightforward approach for analysis of main color tone as to the skin and clothing is used. In this model, a refined contour extraction of the fashion model method is also developed to solve the dilemma that the accuracy and efficiency of contour extraction in the dynamic and complex video scene. As evidenced by the experiment, the proposed system outperforms in effectiveness on mass fashion information in the virtual space compared with human, and thus developing a personalized and diversified way for fashion recommendation.",
"title": ""
},
{
"docid": "5603dc3ceba1a270506116eaf32377bb",
"text": "OBJECTIVE\nEating at \"fast food\" restaurants has increased and is linked to obesity. This study examined whether living or working near \"fast food\" restaurants is associated with body weight.\n\n\nMETHODS\nA telephone survey of 1033 Minnesota residents assessed body height and weight, frequency of eating at restaurants, and work and home addresses. Proximity of home and work to restaurants was assessed by Global Index System (GIS) methodology.\n\n\nRESULTS\nEating at \"fast food\" restaurants was positively associated with having children, a high fat diet and Body Mass Index (BMI). It was negatively associated with vegetable consumption and physical activity. Proximity of \"fast food\" restaurants to home or work was not associated with eating at \"fast food\" restaurants or with BMI. Proximity of \"non-fast food\" restaurants was not associated with BMI, but was associated with frequency of eating at those restaurants.\n\n\nCONCLUSION\nFailure to find relationships between proximity to \"fast food\" restaurants and obesity may be due to methodological weaknesses, e.g. the operational definition of \"fast food\" or \"proximity\", or homogeneity of restaurant proximity. Alternatively, the proliferation of \"fast food\" restaurants may not be a strong unique cause of obesity.",
"title": ""
},
{
"docid": "82c327ecd5402e7319ecaa416dc8e008",
"text": "The rapidly expanding field of big data analytics has started to play a pivotal role in the evolution of healthcare practices and research. It has provided tools to accumulate, manage, analyze, and assimilate large volumes of disparate, structured, and unstructured data produced by current healthcare systems. Big data analytics has been recently applied towards aiding the process of care delivery and disease exploration. However, the adoption rate and research development in this space is still hindered by some fundamental problems inherent within the big data paradigm. In this paper, we discuss some of these major challenges with a focus on three upcoming and promising areas of medical research: image, signal, and genomics based analytics. Recent research which targets utilization of large volumes of medical data while combining multimodal data from disparate sources is discussed. Potential areas of research within this field which have the ability to provide meaningful impact on healthcare delivery are also examined.",
"title": ""
},
{
"docid": "726c2879354eadc44961ab40c4b1621d",
"text": "This paper provides detailed information on Team Poland’s approach in the electricity price forecasting track of GEFCom2014. A new hybrid model is proposed, consisting of four major blocks: point forecasting, pre-filtering, quantile regression modeling and post-processing. This universal model structure enables independent development of a single block, without affecting performance of the remaining ones. The four-block model design in complemented by including expert judgements, which may be of great importance in periods of unusually high or low electricity demand.",
"title": ""
},
{
"docid": "9d2583618e9e00333d044ac53da65ceb",
"text": "The phosphor deposits of the β-sialon:Eu2+ mixed with various amounts (0-1 g) of the SnO₂ nanoparticles were fabricated by the electrophoretic deposition (EPD) process. The mixed SnO₂ nanoparticles was observed to cover onto the particle surfaces of the β-sialon:Eu2+ as well as fill in the voids among the phosphor particles. The external and internal quantum efficiencies (QEs) of the prepared deposits were found to be dependent on the mixing amount of the SnO₂: by comparing with the deposit without any mixing (48% internal and 38% external QEs), after mixing the SnO₂ nanoparticles, the both QEs were improved to 55% internal and 43% external QEs at small mixing amount (0.05 g); whereas, with increasing the mixing amount to 0.1 and 1 g, they were reduced to 36% and 29% for the 0.1 g addition and 15% and 12% l QEs for the 1 g addition. More interestingly, tunable color appearances of the deposits prepared by the EPD process were achieved, from yellow green to blue, by varying the addition amount of the SnO₂, enabling it as an alternative technique instead of altering the voltage and depositing time for the color appearance controllability.",
"title": ""
},
{
"docid": "ac808ecd75ccee74fff89d03e3396f26",
"text": "This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination conditions, complex growth conditions and different flower sizes. The algorithm is designed to be employed on a drone that flies in greenhouses to accomplish several tasks such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row, and the number of flowers that were pollinated since the last visit to the row. The developed algorithm is designed to handle the real world difficulties in a greenhouse which include varying lighting conditions, shadowing, and occlusion, while considering the computational limitations of the simple processor in the drone. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images. Then, segmentation on the hue, saturation and volume is performed accordingly, and classification is done according to size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel, using two different RGB Cameras – an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various periods along the day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle of the images, periods throughout the day, different cameras and thresholding types were performed. Precision, recall and their derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than any other angle. Acquiring images in the afternoon resulted with the best precision and recall results. Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best results in precision and recall, and the best F1 score. The precision and recall average for all the images when using these values was 74% and 75% respectively with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint. Keywords—Agricultural engineering, computer vision, image processing, flower detection.",
"title": ""
}
] |
scidocsrr
|
cf3f6402837d858a3593acfdbf1b8dc1
|
Supervised Sequential Classification Under Budget Constraints
|
[
{
"docid": "5ae157937813e060a72ecb918d4dc5d1",
"text": "Recently, mining data streams with concept drifts for actionable insights has become an important and challenging task for a wide range of applications including credit card fraud protection, target marketing, network intrusion detection, etc. Conventional knowledge discovery tools are facing two challenges, the overwhelming volume of the streaming data, and the concept drifts. In this paper, we propose a general framework for mining concept-drifting data streams using weighted ensemble classifiers. We train an ensemble of classification models, such as C4.5, RIPPER, naive Beyesian, etc., from sequential chunks of the data stream. The classifiers in the ensemble are judiciously weighted based on their expected classification accuracy on the test data under the time-evolving environment. Thus, the ensemble approach improves both the efficiency in learning the model and the accuracy in performing classification. Our empirical study shows that the proposed methods have substantial advantage over single-classifier approaches in prediction accuracy, and the ensemble framework is effective for a variety of classification models.",
"title": ""
}
] |
[
{
"docid": "e4e187d6f6d920d3a8e18f8b529bfb23",
"text": "Deep hierarchical reinforcement learning has gained a lot of attention in recent years due to its ability to produce state-of-the-art results in challenging environments where non-hierarchical frameworks fail to learn useful policies. However, as problem domains become more complex, deep hierarchical reinforcement learning can become inefficient, leading to longer convergence times and poor performance. We introduce the Deep Nested Agent framework, which is a variant of deep hierarchical reinforcement learning where information from the main agent is propagated to the low level nested agent by incorporating this information into the nested agent’s state. We demonstrate the effectiveness and performance of the Deep Nested Agent framework by applying it to three scenarios in Minecraft with comparisons to a deep non-hierarchical single agent framework, as well as, a deep hierarchical framework.",
"title": ""
},
{
"docid": "834ba5081965683538fe1e931e9e4af0",
"text": "An Exploratory Study on Issues and Challenges of Agile Software Develop m nt with Scrum by Juyun Joey Cho, Doctor of Philosophy Utah State University, 2010 Major Professor: Dr. David H. Olsen Department: Management Information Systems The purpose of this dissertation was to explore critical issues and challenges t hat might arise in agile software development processes with Scrum. It a lso sought to provide management guidelines to help organizations avoid and overcome barriers in adopting the Scrum method as a future software development method. A qualitative researc h method design was used to capture the knowledge of practitioners and scrutinize the Scrum software development process in its natural settings. An in-depth case s tudy was conducted in two organizations where the Scrum method was fully integrated in every aspect of two organizations’ software development processes. One organizat ion provides large-scale and mission-critical applications and the other provides smalland mediumscale applications. Differences between two organizations provided useful c ontrasts for the data analysis. Data were collected through an email survey, observations, documents, and semistructured face-to-face interviews. The email survey was used to re fine interview",
"title": ""
},
{
"docid": "6ce28e4fe8724f685453a019f253b252",
"text": "This paper is focused on receivables management and possibilities how to use available information technologies. The use of information technologies should make receivables management easier on one hand and on the other hand it makes the processes more efficient. Finally it decreases additional costs and losses connected with enforcing receivables when defaulting debts occur. The situation of use of information technologies is different if the subject is financial or nonfinancial institution. In the case of financial institution loans providing is core business and the processes and their technical support are more sophisticated than in the case of non-financial institutions whose loan providing as invoices is just a supplement to their core business activities. The paper shows use of information technologies in individual cases but it also emphasizes the use of general results for further decision making process. Results of receivables management are illustrated on the data of the Czech Republic.",
"title": ""
},
{
"docid": "33568f7f2079cb3e53ec93cad5b54455",
"text": "We introduce and give formal definitions of attack–defense trees. We argue that these trees are a simple, yet powerful tool to analyze complex security and privacy problems. Our formalization is generic in the sense that it supports different semantical approaches. We present several semantics for attack–defense trees along with usage scenarios, and we show how to evaluate attributes.",
"title": ""
},
{
"docid": "9c20a64fad54b5416b4716090a2e7c51",
"text": "Location-Based Social Networks (LBSNs) enable their users to share with their friends the places they go to and whom they go with. Additionally, they provide users with recommendations for Points of Interest (POI) they have not visited before. This functionality is of great importance for users of LBSNs, as it allows them to discover interesting places in populous cities that are not easy to explore. For this reason, previous research has focused on providing recommendations to LBSN users. Nevertheless, while most existing work focuses on recommendations for individual users, techniques to provide recommendations to groups of users are scarce.\n In this paper, we consider the problem of recommending a list of POIs to a group of users in the areas that the group frequents. Our data consist of activity on Swarm, a social networking app by Foursquare, and our results demonstrate that our proposed Geo-Group-Recommender (GGR), a class of hybrid recommender systems that combine the group geographical preferences using Kernel Density Estimation, category and location features and group check-ins outperform a large number of other recommender systems. Moreover, we find evidence that user preferences differ both in venue category and in location between individual and group activities. We also show that combining individual recommendations using group aggregation strategies is not as good as building a profile for a group. Our experiments show that (GGR) outperforms the baselines in terms of precision and recall at different cutoffs.",
"title": ""
},
{
"docid": "e9779af1233484b2ce9cc23d03c9beec",
"text": "A number of pixel-based image fusion algorithms (using averaging, contrast pyramids, the discrete wavelet transform and the dualtree complex wavelet transform (DT-CWT) to perform fusion) are reviewed and compared with a novel region-based image fusion method which facilitates increased flexibility with the definition of a variety of fusion rules. The DT-CWT method could dissolve an image into simpler data so we could analyze the characteristic which contained within the image and then fused it with other image that had already been decomposed and DT-CWT could reconstruct the image into its original form without losing its original data. The pixel-based and region-based rules are compared to know each of their capability and performance. Region-based methods have a number of advantages over pixel-based methods. These include: the ability to use more intelligent semantic fusion rules; and for regions with certain properties to be attenuated or accentuated.",
"title": ""
},
{
"docid": "4c58abd95e1d6f55326548ffc52f1665",
"text": "The reliability of CoSi2/p-poly Si electrical fuse (eFUSE) programmed by electromigration for 90nm technology will be presented. Both programmed and unprogrammed fuse elements were shown to be stable through extensive reliability evaluations. A qualification methodology is demonstrated to define an optimized reliable electrical fuse programming window by combining fuse resistance measurements, physical analysis, and functional sensing data. This methodology addresses the impact on electrical fuse reliability caused by process variation and device degradation (e.g., NBTI) in the sensing circuit and allows an adequate margin to ensure electrical fuse reliability over the chip lifetime",
"title": ""
},
{
"docid": "a5b4197bca8d6ad07305aa2b15148b42",
"text": "The Web has dramatically changed the way we express opinions on certain products that we have purchased and used, or for services that we have received in the various industries. Opinions and reviews can be easily posted on the Web, such as in merchant sites, review portals, blogs, Internet forums, and much more. These data are commonly referred to as user-generated content or user-generated media. Both the product manufacturers, as well as potential customers are very interested in this online ‘word-of-mouth’, as it provides product manufacturers information on their customers likes and dislikes, as well as the positive and negative comments on their products whenever available, giving them better knowledge of their products limitations and advantages over competitors; and also providing potential customers with useful and ‘first-hand’ information on the products and/or services to aid in their purchase decision making process. This paper discusses the existing works on opinion mining and sentiment classification of customer feedback and reviews online, and evaluates the different techniques used for the process. It focuses on the areas covered by the evaluated papers, points out the areas that are well covered by many researchers and areas that are neglected in opinion mining and sentiment classification which are open for future research opportunity.",
"title": ""
},
{
"docid": "8c0a8816028e8c50ebccbd812ee3a4e5",
"text": "Songs are representation of audio signal and musical instruments. An audio signal separation system should be able to identify different audio signals such as speech, background noise and music. In a song the singing voice provides useful information regarding pitch range, music content, music tempo and rhythm. An automatic singing voice separation system is used for attenuating or removing the music accompaniment. The paper presents survey of the various algorithm and method for separating singing voice from musical background. From the survey it is observed that most of researchers used Robust Principal Component Analysis method for separation of singing voice from music background, by taking into account the rank of music accompaniment and the sparsity of singing voices.",
"title": ""
},
{
"docid": "ab430da4dbaae50c2700f3bb9b1dbde5",
"text": "Visual appearance score, appearance mixture type and deformation are three important information sources for human pose estimation. This paper proposes to build a multi-source deep model in order to extract non-linear representation from these different aspects of information sources. With the deep model, the global, high-order human body articulation patterns in these information sources are extracted for pose estimation. The task for estimating body locations and the task for human detection are jointly learned using a unified deep model. The proposed approach can be viewed as a post-processing of pose estimation results and can flexibly integrate with existing methods by taking their information sources as input. By extracting the non-linear representation from multiple information sources, the deep model outperforms state-of-the-art by up to 8.6 percent on three public benchmark datasets.",
"title": ""
},
{
"docid": "cbc1724bf52d033f372fb7e59de2d670",
"text": "The AI2 Reasoning Challenge (ARC), a new benchmark dataset for question answering (QA) has been recently released. ARC only contains natural science questions authored for human exams, which are hard to answer and require advanced logic reasoning. On the ARC Challenge Set, existing state-of-the-art QA systems fail to significantly outperform random baseline, reflecting the difficult nature of this task. In this paper, we propose a novel framework for answering science exam questions, which mimics human solving process in an open-book exam. To address the reasoning challenge, we construct contextual knowledge graphs respectively for the question itself and supporting sentences. Our model learns to reason with neural embeddings of both knowledge graphs. Experiments on the ARC Challenge Set show that our model outperforms the previous state-of-the-art QA systems.",
"title": ""
},
{
"docid": "3b7cfe02a34014c84847eea4790037e2",
"text": "Non-technical losses (NTL) such as electricity theft cause significant harm to our economies, as in some countries they may range up to 40% of the total electricity distributed. Detecting NTLs requires costly on-site inspections. Accurate prediction of NTLs for customers using machine learning is therefore crucial. To date, related research largely ignore that the two classes of regular and non-regular customers are highly imbalanced, that NTL proportions may change and mostly consider small data sets, often not allowing to deploy the results in production. In this paper, we present a comprehensive approach to assess three NTL detection models for different NTL proportions in large real world data sets of 100Ks of customers: Boolean rules, fuzzy logic and Support Vector Machine. This work has resulted in appreciable results that are about to be deployed in a leading industry solution. We believe that the considerations and observations made in this contribution are necessary for future smart meter research in order to report their effectiveness on imbalanced and large real world data sets.",
"title": ""
},
{
"docid": "89e09d83de6b6f1e1c0db9b01c4afbee",
"text": "Speakers are often disfluent, for example, saying \"theee uh candle\" instead of \"the candle.\" Production data show that disfluencies occur more often during references to things that are discourse-new, rather than given. An eyetracking experiment shows that this correlation between disfluency and discourse status affects speech comprehensions. Subjects viewed scenes containing four objects, including two cohort competitors (e.g., camel, candle), and followed spoken instructions to move the objects. The first instruction established one cohort as discourse-given; the other was discourse-new. The second instruction was either fluent or disfluent, and referred to either the given or new cohort. Fluent instructions led to more initial fixations on the given cohort object (replicating Dahan et al., 2002). By contrast, disfluent instructions resulted in more fixations on the new cohort. This shows that discourse-new information can be accessible under some circumstances. More generally, it suggests that disfluency affects core language comprehension processes.",
"title": ""
},
{
"docid": "3092e0006fd965034352e04ba9933a46",
"text": "In classification, it is often difficult or expensive to obtain completely accurate and reliable labels. Indeed, labels may be polluted by label noise, due to e.g. insufficient information, expert mistakes, and encoding errors. The problem is that errors in training labels that are not properly handled may deteriorate the accuracy of subsequent predictions, among other effects. Many works have been devoted to label noise and this paper provides a concise and comprehensive introduction to this research topic. In particular, it reviews the types of label noise, their consequences and a number of state of the art approaches to deal with label noise.",
"title": ""
},
{
"docid": "d7189d3fbc3b91315f56b74a29bf10a0",
"text": "Massive amount of electronic medical records accumulating from patients and populations motivates clinicians and data scientists to collaborate for the advanced analytics to extract knowledge that is essential to address the extensive personalized insights needed for patients, clinicians, providers, scientists, and health policy makers. In this paper, we propose a new predictive approach based on feature representation using deep feature learning and word embedding techniques. Our method uses different deep architectures for feature representation in higher-level abstraction to obtain effective and more robust features from EMRs, and then build prediction models on the top of them. Our approach is particularly useful when the unlabeled data is abundant whereas labeled one is scarce. We investigate the performance of representation learning through a supervised approach. First, we apply our method on a small dataset related to a specific precision medicine problem, which focuses on prediction of left ventricular mass indexed to body surface area (LVMI) as an indicator of heart damage risk in a vulnerable demographic subgroup (African-Americans). Then we use two large datasets from eICU collaborative research database to predict the length of stay in Cardiac-ICU and Neuro-ICU based on high dimensional features. Finally we provide a comparative study and show that our predictive approach leads to better results in comparison with others.",
"title": ""
},
{
"docid": "231e4ca152472e5a07379a40dc107341",
"text": "Movies form a staple source of entertainment and command huge popularity worldwide. Through their motion pictures, film makers create an overwhelming dream like visual experience blending it with sound stimulus, replete with high octane drama and emotions, capturing the imaginations of and having a lasting impact on their viewers (Damjanović et al., 2009), even if it amounts to disconnecting from reality. The dissonance between the reel and the real is especially wide in reference to cinematic portrayals of physicians (Flores, 2002). Recent Hindi cinema has largely portrayed mental illnesses, psychiatrists and their treatment approaches in a stereotyped, ridiculous and/or stigmatizing manner (Banwari, 2011). The malaise however seems to be more generalized and global as there are reports of American movies (Gharaibeh, 2005; Tarsitani et al., 2006), regional Indian movies (Menon and Ranjith, 2009; Prasad et al., 2009) and Greek cinema (Fountoulakis et al., 1998) to be plagued by similar problems of depicting psychiatric illness, the psychiatric profession and patients in bad light. Pirkis et al. (2006) reported that portrayals of mental illness in fictional films and television programmes are frequent and generally negative, and have a cumulative effect on",
"title": ""
},
{
"docid": "adabd3971fa0abe5c60fcf7a8bb3f80c",
"text": "The present paper describes the development of a query focused multi-document automatic summarization. A graph is constructed, where the nodes are sentences of the documents and edge scores reflect the correlation measure between the nodes. The system clusters similar texts having related topical features from the graph using edge scores. Next, query dependent weights for each sentence are added to the edge score of the sentence and accumulated with the corresponding cluster score. Top ranked sentence of each cluster is identified and compressed using a dependency parser. The compressed sentences are included in the output summary. The inter-document cluster is revisited in order until the length of the summary is less than the maximum limit. The summarizer has been tested on the standard TAC 2008 test data sets of the Update Summarization Track. Evaluation of the summarizer yielded accuracy scores of 0.10317 (ROUGE-2) and 0.13998 (ROUGE–SU-4).",
"title": ""
},
{
"docid": "b55fa34c0a969e93c3a02edccf4d9dcd",
"text": "This paper describes the Flexible Navigation system that extends the ROS Navigation stack and compatible libraries to separate computation from decision making, and integrates the system with FlexBE — the Flexible Behavior Engine, which provides intuitive supervision with adjustable autonomy. Although the ROS Navigation plugin model offers some customization, many decisions are internal to move_base. In contrast, the Flexible Navigation system separates global planning from local planning and control, and uses a hierarchical finite state machine to coordinate behaviors. The Flexible Navigation system includes Python-based state implementations and ROS nodes derived from the move_base plugin model to provide compatibility with existing libraries as well as future extensibility. The paper concludes with complete system demonstrations in both simulation and hardware using the iRobot Create and Kobuki-based Turtlebot running under ROS Kinetic. The system supports multiple independent robots.",
"title": ""
},
{
"docid": "42ca37dd78bf8b52da5739ad442f203f",
"text": "Frame interpolation attempts to synthesise intermediate frames given one or more consecutive video frames. In recent years, deep learning approaches, and in particular convolutional neural networks, have succeeded at tackling lowand high-level computer vision problems including frame interpolation. There are two main pursuits in this line of research, namely algorithm efficiency and reconstruction quality. In this paper, we present a multi-scale generative adversarial network for frame interpolation (FIGAN). To maximise the efficiency of our network, we propose a novel multi-scale residual estimation module where the predicted flow and synthesised frame are constructed in a coarse-tofine fashion. To improve the quality of synthesised intermediate video frames, our network is jointly supervised at different levels with a perceptual loss function that consists of an adversarial and two content losses. We evaluate the proposed approach using a collection of 60fps videos from YouTube-8m. Our results improve the state-of-the-art accuracy and efficiency, and a subjective visual quality comparable to the best performing interpolation method.",
"title": ""
},
{
"docid": "8c80129507b138d1254e39acfa9300fc",
"text": "Motivation\nText mining has become an important tool for biomedical research. The most fundamental text-mining task is the recognition of biomedical named entities (NER), such as genes, chemicals and diseases. Current NER methods rely on pre-defined features which try to capture the specific surface properties of entity types, properties of the typical local context, background knowledge, and linguistic information. State-of-the-art tools are entity-specific, as dictionaries and empirically optimal feature sets differ between entity types, which makes their development costly. Furthermore, features are often optimized for a specific gold standard corpus, which makes extrapolation of quality measures difficult.\n\n\nResults\nWe show that a completely generic method based on deep learning and statistical word embeddings [called long short-term memory network-conditional random field (LSTM-CRF)] outperforms state-of-the-art entity-specific NER tools, and often by a large margin. To this end, we compared the performance of LSTM-CRF on 33 data sets covering five different entity classes with that of best-of-class NER tools and an entity-agnostic CRF implementation. On average, F1-score of LSTM-CRF is 5% above that of the baselines, mostly due to a sharp increase in recall.\n\n\nAvailability and implementation\nThe source code for LSTM-CRF is available at https://github.com/glample/tagger and the links to the corpora are available at https://corposaurus.github.io/corpora/ .\n\n\nContact\nhabibima@informatik.hu-berlin.de.",
"title": ""
}
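A minimal sketch of the kind of LSTM-CRF tagger described above is shown below, assuming PyTorch together with the third-party pytorch-crf package for the CRF layer. The embedding source (the paper uses pre-trained statistical word embeddings), the dimensions, and the tag set are illustrative assumptions.

```python
# Minimal BiLSTM-CRF tagger sketch (PyTorch + the third-party pytorch-crf package).
# Dimensions and names are illustrative, not those used in the paper.
import torch
import torch.nn as nn
from torchcrf import CRF  # assumed available: pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden // 2, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(hidden, num_tags)   # per-token emission scores
        self.crf = CRF(num_tags, batch_first=True)

    def emissions(self, tokens):
        out, _ = self.lstm(self.emb(tokens))
        return self.proj(out)

    def loss(self, tokens, tags, mask):
        # Negative log-likelihood of the gold tag sequence under the CRF.
        return -self.crf(self.emissions(tokens), tags, mask=mask)

    def predict(self, tokens, mask):
        # Viterbi decoding of the most likely tag sequence.
        return self.crf.decode(self.emissions(tokens), mask=mask)

# Toy usage: batch of 2 sentences, length 5, 20 possible tags.
model = BiLSTMCRF(vocab_size=1000, num_tags=20)
toks = torch.randint(1, 1000, (2, 5))
tags = torch.randint(0, 20, (2, 5))
mask = torch.ones(2, 5, dtype=torch.bool)
print(model.loss(toks, tags, mask).item(), model.predict(toks, mask))
```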
] |
scidocsrr
|
3f08135a0ac8e14303e5c2eb7c87a5ec
|
Automatic knowledge extraction from documents
|
[
{
"docid": "d67ab983c681136864f4a66c5b590080",
"text": "scoring in DeepQA C. Wang A. Kalyanpur J. Fan B. K. Boguraev D. C. Gondek Detecting semantic relations in text is an active problem area in natural-language processing and information retrieval. For question answering, there are many advantages of detecting relations in the question text because it allows background relational knowledge to be used to generate potential answers or find additional evidence to score supporting passages. This paper presents two approaches to broad-domain relation extraction and scoring in the DeepQA question-answering framework, i.e., one based on manual pattern specification and the other relying on statistical methods for pattern elicitation, which uses a novel transfer learning technique, i.e., relation topics. These two approaches are complementary; the rule-based approach is more precise and is used by several DeepQA components, but it requires manual effort, which allows for coverage on only a small targeted set of relations (approximately 30). Statistical approaches, on the other hand, automatically learn how to extract semantic relations from the training data and can be applied to detect a large amount of relations (approximately 7,000). Although the precision of the statistical relation detectors is not as high as that of the rule-based approach, their overall impact on the system through passage scoring is statistically significant because of their broad coverage of knowledge.",
"title": ""
}
] |
[
{
"docid": "00bd59f93d3f5e69dbffad87e5b6e711",
"text": "In this paper, a Bayesian approach to tracking a single target of interest (TOI) using passive sonar is presented. The TOI is assumed to be in the presence of other loud interfering targets, or interferers. To account for the interferers, a single-signal likelihood function (SSLF) is proposed which uses maximum-likelihood estimates (MLEs) in place of nuisance parameters. Since there is uncertainty in signal origin, we propose a computationally efficient method for computing association probabilities. The final proposed SSLF accounts for sidelobe interference from other signals, reflects the uncertainty caused by the array beampattern, is signal-to-noise ratio (SNR) dependent, and reflects uncertainty caused by unknown signal origin. Various examples are considered, which include moving and stationary targets. For the examples, the sensors are assumed to be uniformly spaced linear arrays. The arrays may be stationary or moving and there may be one or more present.",
"title": ""
},
{
"docid": "ba4c8b593db6991507853bb6c8759aea",
"text": "This paper proposes an accurate four-transistor temperature sensor designed, and developed, for thermal testing and monitoring circuits in deep submicron technologies. A previous three-transistor temperature sensor, which utilizes the temperature characteristic of the threshold voltage, shows highly linear characteristics at a power supply voltage of 1.8 V or more; however, the supply voltage is reduced to 1 V in a 90-nm CMOS process. Since the temperature coefficient of the operating point's current at a 1-V supply voltage is steeper than the coefficient at a 1.8-V supply voltage, the operating point's current at high temperature becomes quite small and the output voltage goes into the subthreshold region or the cutoff region. Therefore, the operating condition of the conventional temperature sensor cannot be satisfied at 1-V supply and this causes degradation of linearity. To improve linearity at a 1-V supply voltage, one transistor is added to the conventional sensor. This additional transistor, which works in the saturation region, changes the temperature coefficient gradient of the operating point's current and moves the operating points at each temperature to appropriate positions within the targeted temperature range. The sensor features an extremely small area of 11.6times4.1 mum2 and low power consumption of about 25 muW. The performance of the sensor is highly linear and the predicted temperature error is merely -1.0 to +0.8degC using a two-point calibration within the range of 50degC to 125degC. The sensor has been implemented in the ASPLA CMOS 90-nm 1P7M process and has been tested successfully with a supply voltage of 1 V.",
"title": ""
},
{
"docid": "7abdb102a876d669bdf254f7d91121c1",
"text": "OBJECTIVE\nRegular physical activity (PA) is important for maintaining long-term physical, cognitive, and emotional health. However, few older adults engage in routine PA, and even fewer take advantage of programs designed to enhance PA participation. Though most managed Medicare members have free access to the Silver Sneakers and EnhanceFitness PA programs, the vast majority of eligible seniors do not utilize these programs. The goal of this qualitative study was to better understand the barriers to and facilitators of PA and participation in PA programs among older adults.\n\n\nDESIGN\nThis was a qualitative study using focus group interviews.\n\n\nSETTING\nFocus groups took place at three Group Health clinics in King County, Washington.\n\n\nPARTICIPANTS\nFifty-two randomly selected Group Health Medicare members between the ages of 66 to 78 participated.\n\n\nMETHODS\nWe conducted four focus groups with 13 participants each. Focus group discussions were audio-recorded, transcribed, and analyzed using an inductive thematic approach and a social-ecological framework.\n\n\nRESULTS\nMen and women were nearly equally represented among the participants, and the sample was largely white (77%), well-educated (69% college graduates), and relatively physically active. Prominent barriers to PA and PA program participation were physical limitations due to health conditions or aging, lack of professional guidance, and inadequate distribution of information on available and appropriate PA options and programs. Facilitators included the motivation to maintain physical and mental health and access to affordable, convenient, and stimulating PA options.\n\n\nCONCLUSION\nOlder adult populations may benefit from greater support and information from their providers and health care systems on how to safely and successfully improve or maintain PA levels through later adulthood. Efforts among health care systems to boost PA among older adults may need to consider patient-centered adjustments to current PA programs, as well as alternative methods for promoting overall active lifestyle choices.",
"title": ""
},
{
"docid": "b1d61ca503702f950ef1275b904850e7",
"text": "Prior research has demonstrated a clear relationship between experiences of racial microaggressions and various indicators of psychological unwellness. One concern with these findings is that the role of negative affectivity, considered a marker of neuroticism, has not been considered. Negative affectivity has previously been correlated to experiences of racial discrimination and psychological unwellness and has been suggested as a cause of the observed relationship between microaggressions and psychopathology. We examined the relationships between self-reported frequency of experiences of microaggressions and several mental health outcomes (i.e., anxiety [Beck Anxiety Inventory], stress [General Ethnic and Discrimination Scale], and trauma symptoms [Trauma Symptoms of Discrimination Scale]) in 177 African American and European American college students, controlling for negative affectivity (the Positive and Negative Affect Schedule) and gender. Results indicated that African Americans experience more racial discrimination than European Americans. Negative affectivity in African Americans appears to be significantly related to some but not all perceptions of the experience of discrimination. A strong relationship between racial mistreatment and symptoms of psychopathology was evident, even after controlling for negative affectivity. In summary, African Americans experience clinically measurable anxiety, stress, and trauma symptoms as a result of racial mistreatment, which cannot be wholly explained by individual differences in negative affectivity. Future work should examine additional factors in these relationships, and targeted interventions should be developed to help those suffering as a result of racial mistreatment and to reduce microaggressions.",
"title": ""
},
{
"docid": "484a7acba548ef132d83fc9931a45071",
"text": "This paper is focused on tracking control for a rigid body payload, that is connected to an arbitrary number of quadrotor unmanned aerial vehicles via rigid links. An intrinsic form of the equations of motion is derived on the nonlinear configuration manifold, and a geometric controller is constructed such that the payload asymptotically follows a given desired trajectory for its position and attitude. The unique feature is that the coupled dynamics between the rigid body payload, links, and quadrotors are explicitly incorporated into control system design and stability analysis. These are developed in a coordinate-free fashion to avoid singularities and complexities that are associated with local parameterizations. The desirable features of the proposed control system are illustrated by a numerical example.",
"title": ""
},
{
"docid": "f56bac3cb4ea99626afa51907e909fa3",
"text": "An overview of technologies concerned with distributing the execution of simulation programs across multiple processors is presented. Here, particular emphasis is placed on discrete event simulations. The High Level Architecture (HLA) developed by the Department of Defense in the United States is first described to provide a concrete example of a contemporary approach to distributed simulation. The remainder of this paper is focused on time management, a central issue concerning the synchronization of computations on different processors. Time management algorithms broadly fall into two categories, termed conservative and optimistic synchronization. A survey of both conservative and optimistic algorithms is presented focusing on fundamental principles and mechanisms. Finally, time management in the HLA is discussed as a means to illustrate how this standard supports both approaches to synchronization.",
"title": ""
},
{
"docid": "324c6f4592ed201aebdb4a1a87740984",
"text": "In this paper, we propose the Electric Vehicle Routing Problem with Time Windows and Mixed Fleet (E-VRPTWMF) to optimize the routing of a mixed fleet of electric commercial vehicles (ECVs) and conventional internal combustion commercial vehicles (ICCVs). Contrary to existing routing models for ECVs, which assume energy consumption to be a linear function of traveled distance, we utilize a realistic energy consumption model that incorporates speed, gradient and cargo load distribution. This is highly relevant in the context of ECVs because energy consumption determines the maximal driving range of ECVs and the recharging times at stations. To address the problem, we develop an Adaptive Large Neighborhood Search algorithm that is enhanced by a local search for intensification. In numerical studies on newly designed E-VRPTWMF test instances, we investigate the effect of considering the actual load distribution on the structure and quality of the generated solutions. Moreover, we study the influence of different objective functions on solution attributes and on the contribution of ECVs to the overall routing costs. Finally, we demonstrate the performance of the developed algorithm on benchmark instances of the related problems VRPTW and E-VRPTW.",
"title": ""
},
{
"docid": "02621546c67e6457f350d0192b616041",
"text": "Binary embedding of high-dimensional data requires long codes to preserve the discriminative power of the input space. Traditional binary coding methods often suffer from very high computation and storage costs in such a scenario. To address this problem, we propose Circulant Binary Embedding (CBE) which generates binary codes by projecting the data with a circulant matrix. The circulant structure enables the use of Fast Fourier Transformation to speed up the computation. Compared to methods that use unstructured matrices, the proposed method improves the time complexity from O(d) to O(d log d), and the space complexity from O(d) to O(d) where d is the input dimensionality. We also propose a novel time-frequency alternating optimization to learn data-dependent circulant projections, which alternatively minimizes the objective in original and Fourier domains. We show by extensive experiments that the proposed approach gives much better performance than the state-of-the-art approaches for fixed time, and provides much faster computation with no performance degradation for fixed number of bits.",
"title": ""
},
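The core trick described above (a circulant projection computed with FFTs, followed by binarization) can be sketched in a few lines of NumPy. The random sign-flip vector and the thresholding at zero are common choices but are assumptions here rather than a faithful reproduction of the paper's randomized variant.

```python
# Sketch of circulant binary embedding: h(x) = sign(circ(r) @ (d * x)),
# computed with FFTs in O(d log d). Variable names are illustrative.
import numpy as np

def circulant_binary_embedding(X, r, d_signs):
    """X: (n, d) data, r: (d,) first column of the circulant matrix,
    d_signs: (d,) random +/-1 sign flips applied before projection."""
    Xs = X * d_signs                              # elementwise random sign flip
    Fr = np.fft.fft(r)                            # FFT of the defining vector
    FX = np.fft.fft(Xs, axis=1)                   # row-wise FFT of the data
    proj = np.real(np.fft.ifft(FX * Fr, axis=1))  # circular convolution = circulant product
    return (proj >= 0).astype(np.uint8)           # binarize to {0, 1} codes

rng = np.random.default_rng(0)
n, d = 4, 64
X = rng.standard_normal((n, d))
r = rng.standard_normal(d)
signs = rng.choice([-1.0, 1.0], size=d)
codes = circulant_binary_embedding(X, r, signs)
print(codes.shape)  # (4, 64)
```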
{
"docid": "49e574e30b35811205e55c582eccc284",
"text": "Intracerebral hemorrhage (ICH) is a devastating disease with high rates of mortality and morbidity. The major risk factors for ICH include chronic arterial hypertension and oral anticoagulation. After the initial hemorrhage, hematoma expansion and perihematoma edema result in secondary brain damage and worsened outcome. A rapid onset of focal neurological deficit with clinical signs of increased intracranial pressure is strongly suggestive of a diagnosis of ICH, although cranial imaging is required to differentiate it from ischemic stroke. ICH is a medical emergency and initial management should focus on urgent stabilization of cardiorespiratory variables and treatment of intracranial complications. More than 90% of patients present with acute hypertension, and there is some evidence that acute arterial blood pressure reduction is safe and associated with slowed hematoma growth and reduced risk of early neurological deterioration. However, early optimism that outcome might be improved by the early administration of recombinant factor VIIa (rFVIIa) has not been substantiated by a large phase III study. ICH is the most feared complication of warfarin anticoagulation, and the need to arrest intracranial bleeding outweighs all other considerations. Treatment options for warfarin reversal include vitamin K, fresh frozen plasma, prothrombin complex concentrates, and rFVIIa. There is no evidence to guide the specific management of antiplatelet therapy-related ICH. With the exceptions of placement of a ventricular drain in patients with hydrocephalus and evacuation of a large posterior fossa hematoma, the timing and nature of other neurosurgical interventions is also controversial. There is substantial evidence that management of patients with ICH in a specialist neurointensive care unit, where treatment is directed toward monitoring and managing cardiorespiratory variables and intracranial pressure, is associated with improved outcomes. Attention must be given to fluid and glycemic management, minimizing the risk of ventilator-acquired pneumonia, fever control, provision of enteral nutrition, and thromboembolic prophylaxis. There is an increasing awareness that aggressive management in the acute phase can translate into improved outcomes after ICH.",
"title": ""
},
{
"docid": "72f42589ab86c878517feaab5914cf65",
"text": "This paper proposes an analytical-cum-conceptual framework for understanding the nature of institutions as well as their changes. First, it proposes a new definition of institution based on the notion of common knowledge regarding self-sustaining features of social interactions with a hope to integrate various disciplinary approaches to institutions and their changes. Second, it specifies some generic mechanisms of institutional coherence and change -overlapping social embeddedness, Schumpeterian innovation in bundling games and dynamic institutional complementarities -useful for understanding the dynamic interactions of economic, political, social, organizational and cognitive factors.",
"title": ""
},
{
"docid": "0d3119ef15fb65e75a6fcb355d1efc5a",
"text": "A battery management system (BMS) is a system that manages a rechargeable battery (cell or battery pack), by protecting the battery to operate beyond its safe limits and monitoring its state of charge (SoC) & state of health (SoH). BMS has been the essential integral part of hybrid electrical vehicles (HEVs) & electrical vehicles (EVs). BMS provides safety to the system and user with run time monitoring of battery for any critical hazarder conditions. In the present work, design & simulation of BMS for EVs is presented. The entire model of BMS & all other functional blocks of BMS are implemented in Simulink toolbox of MATLAB R2012a. The BMS presented in this research paper includes Neural Network Controller (NNC), Fuzzy Logic Controller (FLC) & Statistical Model. The battery parameters required to design and simulate the BMS are extracted from the experimental results and incorporated in the model. The Neuro-Fuzzy approach is used to model the electrochemical behavior of the Lead-acid battery (selected for case study) then used to estimate the SoC. The Statistical model is used to address battery's SoH. Battery cycle test results have been used for initial model design, Neural Network training and later; it is transferred to the design & simulation of BMS using Simulink. The simulation results are validated by experimental results and MATLAB/Simulink simulation. This model provides more than 97% accuracy in SoC and reasonably accurate SoH.",
"title": ""
},
{
"docid": "82edffdadaee9ac0a5b11eb686e109a1",
"text": "This paper highlights different security threats and vulnerabilities that is being challenged in smart-grid utilizing Distributed Network Protocol (DNP3) as a real time communication protocol. Experimentally, we will demonstrate two scenarios of attacks, unsolicited message attack and data set injection. The experiments were run on a computer virtual environment and then simulated in DETER testbed platform. The use of intrusion detection system will be necessary to identify attackers targeting different part of the smart grid infrastructure. Therefore, mitigation techniques will be used to ensure a healthy check of the network and we will propose the use of host-based intrusion detection agent at each Intelligent Electronic Device (IED) for the purpose of detecting the intrusion and mitigating it. Performing attacks, attack detection, prevention and counter measures will be our primary goal to achieve in this research paper.",
"title": ""
},
{
"docid": "34b3c5ee3ea466c23f5c7662f5ce5b33",
"text": "A hstruct -The concept of a super value node is developed to estend the theor? of influence diagrams to allow dynamic programming to be performed within this graphical modeling framework. The operations necessa? to exploit the presence of these nodes and efficiently analyze the models are developed. The key result is that by reprewnting value function separability in the structure of the graph of the influence diagram. formulation is simplified and operations on the model can take advantage of the wparability. Froni the decision analysis perspective. this allows simple exploitation of separabilih in the value function of a decision problem which can significantly reduce memory and computation requirements. Importantly. this allows algorithms to be designed to solve influence diagrams that automatically recognize the opportunih for applying dynamic programming. From the decision processes perspective, influence diagrams with super value nodes allow efficient formulation and solution of nonstandard decision process structures. They a h allow the exploitation of conditional independence between state variables. Examples are provided that demonstrate these advantages.",
"title": ""
},
{
"docid": "e7bd7e17d90813d60ca147affb25644d",
"text": "The absence of a comprehensive database of locations where bacteria live is an important obstacle for biologists to understand and study the interactions between bacteria and their habitats. This paper reports the results to a challenge, set forth by the Bacteria Biotopes Task of the BioNLP Shared Task 2013. Two systems are explained: Sub-task 1 system for identifying habitat mentions in unstructured biomedical text and normalizing them through the OntoBiotope ontology and Sub-task 2 system for extracting localization and partof relations between bacteria and habitats. Both approaches rely on syntactic rules designed by considering the shallow linguistic analysis of the text. Sub-task 2 system also makes use of discourse-based rules. The two systems achieve promising results on the shared task test data set.",
"title": ""
},
{
"docid": "7e2b47f3b8fb0dfcef2ea010fab4ba48",
"text": "The purpose of this study is to provide evidence-based and expert consensus recommendations for lung ultrasound with focus on emergency and critical care settings. A multidisciplinary panel of 28 experts from eight countries was involved. Literature was reviewed from January 1966 to June 2011. Consensus members searched multiple databases including Pubmed, Medline, OVID, Embase, and others. The process used to develop these evidence-based recommendations involved two phases: determining the level of quality of evidence and developing the recommendation. The quality of evidence is assessed by the grading of recommendation, assessment, development, and evaluation (GRADE) method. However, the GRADE system does not enforce a specific method on how the panel should reach decisions during the consensus process. Our methodology committee decided to utilize the RAND appropriateness method for panel judgment and decisions/consensus. Seventy-three proposed statements were examined and discussed in three conferences held in Bologna, Pisa, and Rome. Each conference included two rounds of face-to-face modified Delphi technique. Anonymous panel voting followed each round. The panel did not reach an agreement and therefore did not adopt any recommendations for six statements. Weak/conditional recommendations were made for 2 statements, and strong recommendations were made for the remaining 65 statements. The statements were then recategorized and grouped to their current format. Internal and external peer-review processes took place before submission of the recommendations. Updates will occur at least every 4 years or whenever significant major changes in evidence appear. This document reflects the overall results of the first consensus conference on “point-of-care” lung ultrasound. Statements were discussed and elaborated by experts who published the vast majority of papers on clinical use of lung ultrasound in the last 20 years. Recommendations were produced to guide implementation, development, and standardization of lung ultrasound in all relevant settings.",
"title": ""
},
{
"docid": "a12422abe3e142b83f5f242dc754cca1",
"text": "Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. One appealing alternative is rendering synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that have tried to either map representations between the two domains, or learn to extract features that are domain-invariant. In this work, we approach the problem in a new light by learning in an unsupervised manner a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training.",
"title": ""
},
{
"docid": "3dab0441ca1e4fb39296be8006611690",
"text": "A content-based personalized recommendation system learns user specific profiles from user feedback so that it can deliver information tailored to each individual user's interest. A system serving millions of users can learn a better user profile for a new user, or a user with little feedback, by borrowing information from other users through the use of a Bayesian hierarchical model. Learning the model parameters to optimize the joint data likelihood from millions of users is very computationally expensive. The commonly used EM algorithm converges very slowly due to the sparseness of the data in IR applications. This paper proposes a new fast learning technique to learn a large number of individual user profiles. The efficacy and efficiency of the proposed algorithm are justified by theory and demonstrated on actual user data from Netflix and MovieLens.",
"title": ""
},
{
"docid": "5623ce7ffce8492d637d52975df3ac99",
"text": "The online advertising industry is currently based on two dominant business models: the pay-per-impression model and the pay-per-click model. With the growth of sponsored search during the last few years, there has been a move toward the pay-per-click model as it decreases the risk to small advertisers. An alternative model, discussed but not widely used in the advertising industry, is pay-per-conversion, or more generally, pay-per-action. In this paper, we discuss mechanisms for the pay-per-action model and various challenges involved in designing such mechanisms.",
"title": ""
},
{
"docid": "24e943940f1bd1328dba1de2e15d3137",
"text": "The use of external databases to generate training data, also known as Distant Supervision, has become an effective way to train supervised relation extractors but this approach inherently suffers from noise. In this paper we propose a method for noise reduction in distantly supervised training data, using a discriminative classifier and semantic similarity between the contexts of the training examples. We describe an active learning strategy which exploits hierarchical clustering of the candidate training samples. To further improve the effectiveness of this approach, we study the use of several methods for dimensionality reduction of the training samples. We find that semantic clustering of training data combined with cluster-based active learning allows filtering the training data, hence facilitating the creation of a clean training set for relation extraction, at a reduced manual labeling cost.",
"title": ""
},
{
"docid": "0ebc0724a8c966e93e05fb7fce80c1ab",
"text": "Firms in the financial services industry have been faced with the dramatic and relatively recent emergence of new technology innovations, and process disruptions. The industry as a whole, and many new fintech start-ups are looking for new pathways to successful business models, the creation of enhanced customer experience, and new approaches that result in services transformation. Industry and academic observers believe this to be more of a revolution than a set of less impactful changes, with financial services as a whole due for major improvements in efficiency, in customer centricity and informedness. The long-standing dominance of leading firms that are not able to figure out how to effectively hook up with the “Fintech Revolution” is at stake. This article presents a new fintech innovation mapping approach that enables the assessment of the extent to which there are changes and transformations in four key areas of the financial services industry. We discuss: (1) operations management in financial services, and the changes that are occurring there; (2) technology innovations that have begun to leverage the execution and stakeholder value associated with payments settlement, cryptocurrencies, blockchain technologies, and cross-border payment services; (3) multiple fintech innovations that have impacted lending and deposit services, peer-to-peer (P2P) lending and the use of social media; (4) issues with respect to investments, financial markets, trading, risk management, robo-advisory and related services that are influenced by blockchain and fintech innovations.",
"title": ""
}
] |
scidocsrr
|
e1dd082607bfcef921ce86b9ea05a6b5
|
Supervised and Semi-Supervised Text Categorization using LSTM for Region Embeddings
|
[
{
"docid": "a3866467e9a5a1ee2e35b9f2e477a3e3",
"text": "This paper presents a new semi-supervised framework with convolutional neural networks (CNNs) for text categorization. Unlike the previous approaches that rely on word embeddings, our method learns embeddings of small text regions from unlabeled data for integration into a supervised CNN. The proposed scheme for embedding learning is based on the idea of two-view semi-supervised learning, which is intended to be useful for the task of interest even though the training is done on unlabeled data. Our models achieve better results than previous approaches on sentiment classification and topic classification tasks.",
"title": ""
}
] |
[
{
"docid": "0fd4b7ed6e3c67fb9d4bb70e83d8796c",
"text": "The biological properties of dietary polyphenols are greatly dependent on their bioavailability that, in turn, is largely influenced by their degree of polymerization. The gut microbiota play a key role in modulating the production, bioavailability and, thus, the biological activities of phenolic metabolites, particularly after the intake of food containing high-molecular-weight polyphenols. In addition, evidence is emerging on the activity of dietary polyphenols on the modulation of the colonic microbial population composition or activity. However, although the great range of health-promoting activities of dietary polyphenols has been widely investigated, their effect on the modulation of the gut ecology and the two-way relationship \"polyphenols ↔ microbiota\" are still poorly understood. Only a few studies have examined the impact of dietary polyphenols on the human gut microbiota, and most were focused on single polyphenol molecules and selected bacterial populations. This review focuses on the reciprocal interactions between the gut microbiota and polyphenols, the mechanisms of action and the consequences of these interactions on human health.",
"title": ""
},
{
"docid": "6b933bbad26efaf65724d0c923330e75",
"text": "This paper presents a 138-170 GHz active frequency doubler implemented in a 0.13 μm SiGe BiCMOS technology with a peak output power of 5.6 dBm and peak power-added efficiency of 7.6%. The doubler achieves a peak conversion gain of 4.9 dB and consumes only 36 mW of DC power at peak drive through the use of a push-push frequency doubling stage optimized for low drive power, along with a low-power output buffer. To the best of our knowledge, this doubler achieves the highest output power, efficiency, and fundamental frequency suppression of all D-band and G-band SiGe HBT frequency doublers to date.",
"title": ""
},
{
"docid": "609729da28fec217c5c7cdbb48b8bde2",
"text": "We introduce a theorem proving algorithm that uses practically no domain heuristics for guiding its connection-style proof search. Instead, it runs many MonteCarlo simulations guided by reinforcement learning from previous proof attempts. We produce several versions of the prover, parameterized by different learning and guiding algorithms. The strongest version of the system is trained on a large corpus of mathematical problems and evaluated on previously unseen problems. The trained system solves within the same number of inferences over 40% more problems than a baseline prover, which is an unusually high improvement in this hard AI domain. To our knowledge this is the first time reinforcement learning has been convincingly applied to solving general mathematical problems on a large scale.",
"title": ""
},
{
"docid": "5dac8ef81c7a6c508c603b3fd6a87581",
"text": "In this paper, we present a novel benchmark for the evaluation of RGB-D SLAM systems. We recorded a large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground truth camera poses from a motion capture system. The sequences contain both the color and depth images in full sensor resolution (640 × 480) at video frame rate (30 Hz). The ground-truth trajectory was obtained from a motion-capture system with eight high-speed tracking cameras (100 Hz). The dataset consists of 39 sequences that were recorded in an office environment and an industrial hall. The dataset covers a large variety of scenes and camera motions. We provide sequences for debugging with slow motions as well as longer trajectories with and without loop closures. Most sequences were recorded from a handheld Kinect with unconstrained 6-DOF motions but we also provide sequences from a Kinect mounted on a Pioneer 3 robot that was manually navigated through a cluttered indoor environment. To stimulate the comparison of different approaches, we provide automatic evaluation tools both for the evaluation of drift of visual odometry systems and the global pose error of SLAM systems. The benchmark website [1] contains all data, detailed descriptions of the scenes, specifications of the data formats, sample code, and evaluation tools.",
"title": ""
},
{
"docid": "3585ee8052b23d2ea996dc8ad14cbb04",
"text": "The 5th generation (5G) of mobile radio access technologies is expected to become available for commercial launch around 2020. In this paper, we present our envisioned 5G system design optimized for small cell deployment taking a clean slate approach, i.e. removing most compatibility constraints with the previous generations of mobile radio access technologies. This paper mainly covers the physical layer aspects of the 5G concept design.",
"title": ""
},
{
"docid": "0823b3c01d54f479ca8fe470f0e41c66",
"text": "Social media is emerging as an important information-based communication tool for disaster management. Yet there are many relief organizations that are not able to develop strategies and allocate resources to effectively use social media for disaster management. The reason behind this inability may be a lack of understanding regarding the different functionalities of social media. In this paper, we examine the literature using content analysis to understand the current usage of social media in disaster management. We draw on the honeycomb framework and the results of our content analysis to suggest a new framework that can help in utilizing social media more effectively during the different phases of disaster management. We also discuss the implications of our study. KEywORDS Disaster Management, Disaster Phases, Honeycomb Framework, Social Media Functionality, Social Media",
"title": ""
},
{
"docid": "a0407424fce71b9e4119d1d9fefc5542",
"text": "The design and development of complex engineering products require the efforts and collaboration of hundreds of participants from diverse backgrounds resulting in complex relationships among both people and tasks. Many of the traditional project management tools (PERT, Gantt and CPM methods) do not address problems stemming from this complexity. While these tools allow the modeling of sequential and parallel processes, they fail to address interdependency (feedback and iteration), which is common in complex product development (PD) projects. To address this issue, a matrix-based tool called the Design Structure Matrix (DSM) has evolved. This method differs from traditional project-management tools because it focuses on representing information flows rather than work flows. The DSM method is an information exchange model that allows the representation of complex task (or team) relationships in order to determine a sensible sequence (or grouping) for the tasks (or teams) being modeled. This article will cover how the basic method works and how you can use the DSM to improve the planning, execution, and management of complex PD projects using different algorithms (i.e., partitioning, tearing, banding, clustering, simulation, and eigenvalue analysis). Introduction: matrices and projects Consider a system (or project) that is composed of two elements /sub-systems (or activities/phases): element \"A\" and element \"B\". A graph may be developed to represent this system pictorially. The graph is constructed by allowing a vertex/node on the graph to represent a system element and an edge joining two nodes to represent the relationship between two system elements. The directionality of influence from one element to another is captured by an arrow instead of a simple link. The resultant graph is called a directed graph or simply a digraph. There are three basic building blocks for describing the relationship amongst system elements: parallel (or concurrent), sequential (or dependent) and coupled (or interdependent) (fig. 1) Fig.1 Three Configurations that Characterize a System Relationship Parallel Sequential Coupled Graph Representation A B A",
"title": ""
},
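A small sketch of how a DSM can be analyzed in practice: treat the matrix as an information-flow digraph, find coupled (interdependent) task blocks as strongly connected components, and order the blocks by topologically sorting the condensation, which is essentially what DSM partitioning aims for. The example tasks and dependencies are invented for illustration.

```python
import networkx as nx

# dsm[i][j] = 1 means task i needs information from task j (names are illustrative).
tasks = ["A", "B", "C", "D"]
dsm = [
    [0, 0, 0, 0],   # A depends on nothing
    [1, 0, 1, 0],   # B depends on A and C
    [0, 1, 0, 0],   # C depends on B  -> B and C form a coupled block
    [0, 1, 0, 0],   # D depends on B
]

G = nx.DiGraph()
G.add_nodes_from(tasks)
for i, row in enumerate(dsm):
    for j, flag in enumerate(row):
        if flag:
            G.add_edge(tasks[j], tasks[i])   # information flows from task j to task i

coupled = [c for c in nx.strongly_connected_components(G) if len(c) > 1]
print("coupled blocks:", coupled)            # e.g. [{'B', 'C'}]

cond = nx.condensation(G)                    # collapse each coupled block to one node
order = [sorted(cond.nodes[n]["members"]) for n in nx.topological_sort(cond)]
print("feasible block order:", order)        # e.g. [['A'], ['B', 'C'], ['D']]
```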
{
"docid": "1beba2c797cb5a4b72b54fd71265a25f",
"text": "Modularity is widely used to effectively measure the strength of the community structure found by community detection algorithms. However, modularity maximization suffers from two opposite yet coexisting problems: in some cases, it tends to favor small communities over large ones while in others, large communities over small ones. The latter tendency is known in the literature as the resolution limit problem. To address them, we propose to modify modularity by subtracting from it the fraction of edges connecting nodes of different communities and by including community density into modularity. We refer to the modified metric as Modularity Density and we demonstrate that it indeed resolves both problems mentioned above. We describe the motivation for introducing this metric by using intuitively clear and simple examples. We also prove that this new metric solves the resolution limit problem. Finally, we discuss the results of applying this metric, modularity, and several other popular community quality metrics to two real dynamic networks. The results imply that Modularity Density is consistent with all the community quality measurements but not modularity, which suggests that Modularity Density is an improved measurement of the community quality compared to modularity.",
"title": ""
},
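The sketch below implements only the first, simpler modification described in the abstract above: standard Newman modularity minus the fraction of edges that cross communities. The community-density weighting of the full Modularity Density metric is omitted, so this is not the complete proposed measure; the function name and example partition are illustrative.

```python
import networkx as nx

def modularity_minus_split_penalty(G, communities):
    """Newman modularity minus the fraction of inter-community edges.
    Only the first modification described above; the density weighting of
    the full Modularity Density metric is not included."""
    m = G.number_of_edges()
    label = {n: i for i, c in enumerate(communities) for n in c}
    intra = sum(1 for u, v in G.edges() if label[u] == label[v])
    inter_fraction = (m - intra) / m
    q = nx.algorithms.community.modularity(G, communities)
    return q - inter_fraction

G = nx.karate_club_graph()
parts = [set(range(0, 17)), set(range(17, 34))]   # arbitrary two-way split for illustration
print(modularity_minus_split_penalty(G, parts))
```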
{
"docid": "781ebbf85a510cfd46f0c824aa4aba7e",
"text": "Human activity recognition (HAR) is an important research area in the fields of human perception and computer vision due to its wide range of applications. These applications include: intelligent video surveillance, ambient assisted living, human computer interaction, human-robot interaction, entertainment, and intelligent driving. Recently, with the emergence and successful deployment of deep learning techniques for image classification, researchers have migrated from traditional handcrafting to deep learning techniques for HAR. However, handcrafted representation-based approaches are still widely used due to some bottlenecks such as computational complexity of deep learning techniques for activity recognition. However, approaches based on handcrafted representation are not able to handle complex scenarios due to their limitations and incapability; therefore, resorting to deep learning-based techniques is a natural option. This review paper presents a comprehensive survey of both handcrafted and learning-based action representations, offering comparison, analysis, and discussions on these approaches. In addition to this, the well-known public datasets available for experimentations and important applications of HAR are also presented to provide further insight into the field. This is the first review paper of its kind which presents all these aspects of HAR in a single review article with comprehensive coverage of each part. Finally, the paper is concluded with important discussions and research directions in the domain of HAR.",
"title": ""
},
{
"docid": "34992b86a8ac88c5f5bbca770954ae61",
"text": "Entity search over text corpora is not geared for relationship queries where answers are tuples of related entities and where a query often requires joining cues from multiple documents. With large knowledge graphs, structured querying on their relational facts is an alternative, but often suffers from poor recall because of mismatches between user queries and the knowledge graph or because of weakly populated relations.\n This paper presents the TriniT search engine for querying and ranking on extended knowledge graphs that combine relational facts with textual web contents. Our query language is designed on the paradigm of SPO triple patterns, but is more expressive, supporting textual phrases for each of the SPO arguments. We present a model for automatic query relaxation to compensate for mismatches between the data and a user's query. Query answers -- tuples of entities -- are ranked by a statistical language model. We present experiments with different benchmarks, including complex relationship queries, over a combination of the Yago knowledge graph and the entity-annotated ClueWeb'09 corpus.",
"title": ""
},
{
"docid": "afa7ccbc17103f199abc38e98b6049bf",
"text": "Cloud computing is becoming a popular paradigm. Many recent new services are based on cloud environments, and a lot of people are using cloud networks. Since many diverse hosts and network configurations coexist in a cloud network, it is essential to protect each of them in the cloud network from threats. To do this, basically, we can employ existing network security devices, but applying them to a cloud network requires more considerations for its complexity, dynamism, and diversity. In this paper, we propose a new framework, CloudWatcher, which provides monitoring services for large and dynamic cloud networks. This framework automatically detours network packets to be inspected by pre-installed network security devices. In addition, all these operations can be implemented by writing a simple policy script, thus, a cloud network administrator is able to protect his cloud network easily. We have implemented the proposed framework, and evaluated it on different test network environments.",
"title": ""
},
{
"docid": "921d9dc34f32522200ddcd606d22b6b4",
"text": "The covariancematrix adaptation evolution strategy (CMA-ES) is one of themost powerful evolutionary algorithms for real-valued single-objective optimization. In this paper, we develop a variant of the CMA-ES for multi-objective optimization (MOO). We first introduce a single-objective, elitist CMA-ES using plus-selection and step size control based on a success rule. This algorithm is compared to the standard CMA-ES. The elitist CMA-ES turns out to be slightly faster on unimodal functions, but is more prone to getting stuck in sub-optimal local minima. In the new multi-objective CMAES (MO-CMA-ES) a population of individuals that adapt their search strategy as in the elitist CMA-ES is maintained. These are subject to multi-objective selection. The selection is based on non-dominated sorting using either the crowding-distance or the contributing hypervolume as second sorting criterion. Both the elitist single-objective CMA-ES and the MO-CMA-ES inherit important invariance properties, in particular invariance against rotation of the search space, from the original CMA-ES. The benefits of the new MO-CMA-ES in comparison to the well-known NSGA-II and to NSDE, a multi-objective differential evolution algorithm, are experimentally shown.",
"title": ""
},
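As a simplified stand-in for the success-rule step-size control mentioned above, the sketch below shows a plain (1+1)-ES with the classic 1/5th success rule: the step size grows on success and shrinks on failure so that the equilibrium success rate is 1/5. Covariance matrix adaptation and the multi-objective selection of the MO-CMA-ES are deliberately left out.

```python
# (1+1)-ES with the classic 1/5th success rule for step-size control: a much
# simplified stand-in for the success-rule adaptation used in the elitist CMA-ES.
import numpy as np

def one_plus_one_es(f, x0, sigma=1.0, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        y = x + sigma * rng.standard_normal(x.shape)   # mutate the parent
        fy = f(y)
        if fy <= fx:                      # success: keep offspring, enlarge the step
            x, fx = y, fy
            sigma *= np.exp(1.0 / 3.0)
        else:                             # failure: shrink the step
            sigma *= np.exp(-1.0 / 12.0)  # equilibrium success rate is 1/5
    return x, fx

sphere = lambda z: float(np.sum(np.square(z)))
x_best, f_best = one_plus_one_es(sphere, np.ones(10))
print(f_best)  # should be close to 0
```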
{
"docid": "22293b6953e2b28e1b3dc209649a7286",
"text": "The Liquid State Machine (LSM) has emerged as a computational model that is more adequate than the Turing machine for describing computations in biological networks of neurons. Characteristic features of this new model are (i) that it is a model for adaptive computational systems, (ii) that it provides a method for employing randomly connected circuits, or even “found” physical objects for meaningful computations, (iii) that it provides a theoretical context where heterogeneous, rather than stereotypical, local gates or processors increase the computational power of a circuit, (iv) that it provides a method for multiplexing different computations (on a common input) within the same circuit. This chapter reviews the motivation for this model, its theoretical background, and current work on implementations of this model in innovative artificial computing devices.",
"title": ""
},
{
"docid": "fe03dc323c15d5ac390e67f9aa0415b8",
"text": "Objects make distinctive sounds when they are hit or scratched. These sounds reveal aspects of an object's material properties, as well as the actions that produced them. In this paper, we propose the task of predicting what sound an object makes when struck as a way of studying physical interactions within a visual scene. We present an algorithm that synthesizes sound from silent videos of people hitting and scratching objects with a drumstick. This algorithm uses a recurrent neural network to predict sound features from videos and then produces a waveform from these features with an example-based synthesis procedure. We show that the sounds predicted by our model are realistic enough to fool participants in a \"real or fake\" psychophysical experiment, and that they convey significant information about material properties and physical interactions.",
"title": ""
},
{
"docid": "c82901a585d9c924f4686b4d0373e774",
"text": "Object detection is a major challenge in computer vision, involving both object classification and object localization within a scene. While deep neural networks have been shown in recent years to yield very powerful techniques for tackling the challenge of object detection, one of the biggest challenges with enabling such object detection networks for widespread deployment on embedded devices is high computational and memory requirements. Recently, there has been an increasing focus in exploring small deep neural network architectures for object detection that are more suitable for embedded devices, such as Tiny YOLO and SqueezeDet. Inspired by the efficiency of the Fire microarchitecture introduced in SqueezeNet and the object detection performance of the singleshot detection macroarchitecture introduced in SSD, this paper introduces Tiny SSD, a single-shot detection deep convolutional neural network for real-time embedded object detection that is composed of a highly optimized, non-uniform Fire subnetwork stack and a non-uniform sub-network stack of highly optimized SSD-based auxiliary convolutional feature layers designed specifically to minimize model size while maintaining object detection performance. The resulting Tiny SSD possess a model size of 2.3MB (~26X smaller than Tiny YOLO) while still achieving an mAP of 61.3% on VOC 2007 (~4.2% higher than Tiny YOLO). These experimental results show that very small deep neural network architectures can be designed for real-time object detection that are well-suited for embedded scenarios.",
"title": ""
},
{
"docid": "20cb6f1ecf0464751a3af5947f708c4d",
"text": "Article History Received: 4 April 2018 Revised: 30 April 2018 Accepted: 2 May 2018 Published: 4 May 2018",
"title": ""
},
{
"docid": "b8032e13156e0168e2c5850cdf452e5b",
"text": "We observe that end-to-end memory networks (MN) trained for task-oriented dialogue, such as for recommending restaurants to a user, suffer from an out-ofvocabulary (OOV) problem – the entities returned by the Knowledge Base (KB) may not be seen by the network at training time, making it impossible for it to use them in dialogue. We propose a Hierarchical Pointer Memory Network (HyP-MN), in which the next word may be generated from the decode vocabulary or copied from a hierarchical memory maintaining KB results and previous utterances. Evaluating over the dialog bAbI tasks, we find that HyP-MN drastically outperforms MN obtaining 12% overall accuracy gains. Further analysis reveals that MN fails completely in recommending any relevant restaurant, whereas HyP-MN recommends the best next restaurant 80% of the time.",
"title": ""
},
{
"docid": "831845dfb48d2bd9d7d86031f3862fa5",
"text": "This paper presents the analysis and implementation of an LCLC resonant converter working as maximum power point tracker (MPPT) in a PV system. This converter must guarantee a constant DC output voltage and must vary its effective input resistance in order to extract the maximum power of the PV generator. Preliminary analysis concludes that not all resonant load topologies can achieve the design conditions for a MPPT. Only the LCLC and LLC converter are suitable for this purpose.",
"title": ""
},
{
"docid": "7105302557aa312e3dedbc7d7cc6e245",
"text": "a Canisius College, Richard J. Wehle School of Business, Department of Management and Marketing, 2001 Main Street, Buffalo, NY 14208-1098, United States b Clemson University, College of Business and Behavioral Science, Department of Marketing, 245 Sirrine Hall, Clemson, SC 29634-1325, United States c University of Alabama at Birmingham, School of Business, Department of Marketing, Industrial Distribution and Economics, 1150 10th Avenue South, Birmingham, AL 35294, United States d Vlerick School of Management Reep 1, BE-9000 Ghent Belgium",
"title": ""
},
{
"docid": "1be58e70089b58ca3883425d1a46b031",
"text": "In this work, we propose a novel way to consider the clustering and the reduction of the dimension simultaneously. Indeed, our approach takes advantage of the mutual reinforcement between data reduction and clustering tasks. The use of a low-dimensional representation can be of help in providing simpler and more interpretable solutions. We show that by doing so, our model is able to better approximate the relaxed continuous dimension reduction solution by the true discrete clustering solution. Experiment results show that our method gives better results in terms of clustering than the state-of-the-art algorithms devoted to similar tasks for data sets with different proprieties.",
"title": ""
}
] |
scidocsrr
|
005d286665081ee16ba0e8d8554eddf8
|
Using Deep Learning for Classification of Lung Nodules on Computed Tomography Images
|
[
{
"docid": "4bec71105c8dca3d0b48e99cdd4e809a",
"text": "Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks.",
"title": ""
},
{
"docid": "bcd09ee8fc375ca408ba30077d46ff27",
"text": "This paper overviews one of the most important, interesting, and challenging problems in oncology, the problem of lung cancer diagnosis. Developing an effective computer-aided diagnosis (CAD) system for lung cancer is of great clinical importance and can increase the patient's chance of survival. For this reason, CAD systems for lung cancer have been investigated in a huge number of research studies. A typical CAD system for lung cancer diagnosis is composed of four main processing steps: segmentation of the lung fields, detection of nodules inside the lung fields, segmentation of the detected nodules, and diagnosis of the nodules as benign or malignant. This paper overviews the current state-of-the-art techniques that have been developed to implement each of these CAD processing steps. For each technique, various aspects of technical issues, implemented methodologies, training and testing databases, and validation methods, as well as achieved performances, are described. In addition, the paper addresses several challenges that researchers face in each implementation step and outlines the strengths and drawbacks of the existing approaches for lung cancer CAD systems.",
"title": ""
}
] |
[
{
"docid": "041f01bfb8981683bd8bfae4991e098f",
"text": "Audio description (AD) has become a cultural revolution for the visually impaired; however, the range of AD beneficiaries can be much broader. We claim that AD is useful for guiding children's attention. The paper presents an eye-tracking study testing the usefulness of AD in selective attention to described elements of a video scene. Forty-four children watched 2 clips from an educational animation series while their eye movements were recorded. Average fixation duration, fixation count, and saccade amplitude served as primary dependent variables. The results confirmed that AD guides children's attention towards described objects resulting e. g., in more fixations on specific regions of interest. We also evaluated eye movement patterns in terms of switching between focal and ambient processing. We postulate that audio description could complement regular teaching tools for guiding and focusing children's attention, especially when new concepts are introduced.",
"title": ""
},
{
"docid": "7b6243599290648d17bd679e82822ade",
"text": "A comprehensive review of online, official, and scientific literature was carried out in 2012-13 to develop a framework of disaster social media. This framework can be used to facilitate the creation of disaster social media tools, the formulation of disaster social media implementation processes, and the scientific study of disaster social media effects. Disaster social media users in the framework include communities, government, individuals, organisations, and media outlets. Fifteen distinct disaster social media uses were identified, ranging from preparing and receiving disaster preparedness information and warnings and signalling and detecting disasters prior to an event to (re)connecting community members following a disaster. The framework illustrates that a variety of entities may utilise and produce disaster social media content. Consequently, disaster social media use can be conceptualised as occurring at a number of levels, even within the same disaster. Suggestions are provided on how the proposed framework can inform future disaster social media development and research.",
"title": ""
},
{
"docid": "ca7fc2fc0951a004101f330c506b800c",
"text": "There is considerable interest in the use of statistical process control (SPC) in healthcare. Although SPC is part of an overall philosophy of continual improvement, the implementation of SPC usually requires the production of control charts. However, as SPC is relatively new to healthcare practitioners and is not routinely featured in medical statistics texts/courses, there is a need to explain the issues involved in the selection and construction of control charts in practice. Following a brief overview of SPC in healthcare and preliminary issues, we use a tutorial-based approach to illustrate the selection and construction of four commonly used control charts (xmr-chart, p-chart, u-chart, c-chart) using examples from healthcare. For each control chart, the raw data, the relevant formulae and their use and interpretation of the final SPC chart are provided together with a notes section highlighting important issues for the SPC practitioner. Some more advanced topics are also mentioned with suggestions for further reading.",
"title": ""
},
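For two of the chart types mentioned above, the control-limit formulas are short enough to sketch directly: a p-chart uses p̄ ± 3·sqrt(p̄(1−p̄)/nᵢ), and an XmR (individuals/moving-range) chart uses the standard factors 2.66 and 3.267. The example counts and waiting times below are invented for illustration.

```python
# Control-limit sketches for two of the charts discussed above.
# The constants 2.66 and 3.267 are the standard XmR chart factors.
import numpy as np

def p_chart_limits(defectives, sample_sizes):
    """p-chart: centre line p_bar and 3-sigma limits per sample."""
    defectives = np.asarray(defectives, dtype=float)
    sample_sizes = np.asarray(sample_sizes, dtype=float)
    p_bar = defectives.sum() / sample_sizes.sum()
    sigma = np.sqrt(p_bar * (1 - p_bar) / sample_sizes)
    return p_bar, np.clip(p_bar - 3 * sigma, 0, 1), np.clip(p_bar + 3 * sigma, 0, 1)

def xmr_chart_limits(x):
    """XmR (individuals and moving-range) chart limits."""
    x = np.asarray(x, dtype=float)
    mr = np.abs(np.diff(x))
    x_bar, mr_bar = x.mean(), mr.mean()
    return {"X": (x_bar - 2.66 * mr_bar, x_bar, x_bar + 2.66 * mr_bar),
            "MR": (0.0, mr_bar, 3.267 * mr_bar)}

# Example: monthly infection counts out of monthly patient totals, plus a
# series of individual waiting times (all numbers invented for illustration).
print(p_chart_limits([4, 6, 3, 7], [120, 140, 110, 150]))
print(xmr_chart_limits([31, 29, 35, 30, 28, 33, 36, 27]))
```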
{
"docid": "5878d3cdbf74928fa002ab21cc62612f",
"text": "We focus on the multi-label categorization task for short texts and explore the use of a hierarchical structure (HS) of categories. In contrast to the existing work using non-hierarchical flat model, the method leverages the hierarchical relations between the categories to tackle the data sparsity problem. The lower the HS level, the worse the categorization performance. Because lower categories are fine-grained and the amount of training data per category is much smaller than that in an upper level. We propose an approach which can effectively utilize the data in the upper levels to contribute categorization in the lower levels by applying a Convolutional Neural Network (CNN) with a finetuning technique. The results using two benchmark datasets show that the proposed method, Hierarchical Fine-Tuning based CNN (HFTCNN) is competitive with the state-of-the-art CNN based methods.",
"title": ""
},
{
"docid": "44571894699422b160ae62c1d0b35380",
"text": "The fire accident usually causes economical and ecological damage as well as cause danger to people's lives. Therefore, its early detection is must for controlling this damage. Also smoke is considered as main constituent of fire, thus an efficient smoke detection algorithm on sequences of frame obtained from static camera is proposed. It is based on computer vision based technology. This algorithm uses color feature of smoke & is comprised of following steps: reading the image, preprocessing, classify color pixels using k-means segmentation. This paper discusses mainly the segmentation problem. It adopts L*a*b* color space and k-means clustering algorithm to isolate the smoke from video sequences. Finally the K-means algorithm used is compared with the fuzzy c-means algorithm used previously.",
"title": ""
},
{
"docid": "b3012ab055e3f4352b3473700c30c085",
"text": "Zero-shot recognition (ZSR) deals with the problem of predicting class labels for target domain instances based on source domain side information (e.g. attributes) of unseen classes. We formulate ZSR as a binary prediction problem. Our resulting classifier is class-independent. It takes an arbitrary pair of source and target domain instances as input and predicts whether or not they come from the same class, i.e. whether there is a match. We model the posterior probability of a match since it is a sufficient statistic and propose a latent probabilistic model in this context. We develop a joint discriminative learning framework based on dictionary learning to jointly learn the parameters of our model for both domains, which ultimately leads to our class-independent classifier. Many of the existing embedding methods can be viewed as special cases of our probabilistic model. On ZSR our method shows 4.90% improvement over the state-of-the-art in accuracy averaged across four benchmark datasets. We also adapt ZSR method for zero-shot retrieval and show 22.45% improvement accordingly in mean average precision (mAP).",
"title": ""
},
{
"docid": "c8d235d1fd40e972e9bc7078d6472776",
"text": "Performance of machine learning algorithms depends critically on identifying a good set of hyperparameters. While current methods offer efficiencies by adaptively choosing new configurations to train, an alternative strategy is to adaptively allocate resources across the selected configurations. We formulate hyperparameter optimization as a pure-exploration non-stochastic infinitely many armed bandit problem where allocation of additional resources to an arm corresponds to training a configuration on larger subsets of the data. We introduce HYPERBAND for this framework and analyze its theoretical properties, providing several desirable guarantees. We compare HYPERBAND with state-ofthe-art Bayesian optimization methods and a random search baseline on a comprehensive benchmark including 117 datasets. Our results on this benchmark demonstrate that while Bayesian optimization methods do not outperform random search trained for twice as long, HYPERBAND in favorable settings offers valuable speedups.",
"title": ""
},
{
"docid": "02c00d998952d935ee694922953c78d1",
"text": "OBJECTIVE\nEffect of peppermint on exercise performance was previously investigated but equivocal findings exist. This study aimed to investigate the effects of peppermint ingestion on the physiological parameters and exercise performance after 5 min and 1 h.\n\n\nMATERIALS AND METHODS\nThirty healthy male university students were randomly divided into experimental (n=15) and control (n=15) groups. Maximum isometric grip force, vertical and long jumps, spirometric parameters, visual and audio reaction times, blood pressure, heart rate, and breath rate were recorded three times: before, five minutes, and one hour after single dose oral administration of peppermint essential oil (50 µl). Data were analyzed using repeated measures ANOVA.\n\n\nRESULTS\nOur results revealed significant improvement in all of the variables after oral administration of peppermint essential oil. Experimental group compared with control group showed an incremental and a significant increase in the grip force (36.1%), standing vertical jump (7.0%), and standing long jump (6.4%). Data obtained from the experimental group after five minutes exhibited a significant increase in the forced vital capacity in first second (FVC1)(35.1%), peak inspiratory flow rate (PIF) (66.4%), and peak expiratory flow rate (PEF) (65.1%), whereas after one hour, only PIF shown a significant increase as compare with the baseline and control group. At both times, visual and audio reaction times were significantly decreased. Physiological parameters were also significantly improved after five minutes. A considerable enhancement in the grip force, spiromery, and other parameters were the important findings of this study. Conclusion : An improvement in the spirometric measurements (FVC1, PEF, and PIF) might be due to the peppermint effects on the bronchial smooth muscle tonicity with or without affecting the lung surfactant. Yet, no scientific evidence exists regarding isometric force enhancement in this novel study.",
"title": ""
},
{
"docid": "e3408b56cf9e92c773b521ff3fc40834",
"text": "In this letter, we present a new approach for object classification in continuously streamed Lidar point clouds collected from urban areas. The input of our framework is raw 3-D point cloud sequences captured by a Velodyne HDL-64 Lidar, and we aim to extract all vehicles and pedestrians in the neighborhood of the moving sensor. We propose a complete pipeline developed especially for distinguishing outdoor 3-D urban objects. First, we segment the point cloud into regions of ground, short objects (i.e., low foreground), and tall objects (high foreground). Then, using our novel two-layer grid structure, we perform efficient connected component analysis on the foreground regions, for producing distinct groups of points, which represent different urban objects. Next, we create depth images from the object candidates, and apply an appearance-based preliminary classification by a convolutional neural network. Finally, we refine the classification with contextual features considering the possible expected scene topologies. We tested our algorithm in real Lidar measurements, containing 1485 objects captured from different urban scenarios.",
"title": ""
},
{
"docid": "42903610920a47773627a33db25590f3",
"text": "We consider the analysis of Electroencephalography (EEG) and Local Field Potential (LFP) datasets, which are “big” in terms of the size of recorded data but rarely have sufficient labels required to train complex models (e.g., conventional deep learning methods). Furthermore, in many scientific applications, the goal is to be able to understand the underlying features related to the classification, which prohibits the blind application of deep networks. This motivates the development of a new model based on parameterized convolutional filters guided by previous neuroscience research; the filters learn relevant frequency bands while targeting synchrony, which are frequency-specific power and phase correlations between electrodes. This results in a highly expressive convolutional neural network with only a few hundred parameters, applicable to smaller datasets. The proposed approach is demonstrated to yield competitive (often state-of-the-art) predictive performance during our empirical tests while yielding interpretable features. Furthermore, a Gaussian process adapter is developed to combine analysis over distinct electrode layouts, allowing the joint processing of multiple datasets to address overfitting and improve generalizability. Finally, it is demonstrated that the proposed framework effectively tracks neural dynamics on children in a clinical trial on Autism Spectrum Disorder.",
"title": ""
},
{
"docid": "5ed1f4c5f554a29de926f6d4980cda89",
"text": "Capsule Networks (CapsNet) are recently proposed multi-stage computational models specialized for entity representation and discovery in image data. CapsNet employs iterative routing that shapes how the information cascades through different levels of interpretations. In this work, we investigate i) how the routing affects the CapsNet model fitting, ii) how the representation by capsules helps discover global structures in data distribution and iii) how learned data representation adapts and generalizes to new tasks. Our investigation shows: i) routing operation determines the certainty with which one layer of capsules pass information to the layer above, and the appropriate level of certainty is related to the model fitness, ii) in a designed experiment using data with a known 2D structure, capsule representations allow more meaningful 2D manifold embedding than neurons in a standard CNN do and iii) compared to neurons of standard CNN, capsules of successive layers are less coupled and more adaptive to new data distribution.",
"title": ""
},
{
"docid": "787979d6c1786f110ff7a47f09b82907",
"text": "Imbalance settlement markets are managed by the system operators and provide a mechanism for settling the inevitable discrepancies between contractual agreements and physical delivery. In European power markets, settlements schemes are mainly based on heuristic penalties. These arrangements have disadvantages: First, they do not provide transparency about the cost of the reserve capacity that the system operator may have obtained ahead of time, nor about the cost of the balancing energy that is actually deployed. Second, they can be gamed if market participants use the imbalance settlement as an opportunity for market arbitrage, for example if market participants use balancing energy to avoid higher costs through regular trade on illiquid energy markets. Third, current practice hinders the market-based integration of renewable energy and the provision of financial incentives for demand response through rigid penalty rules. In this paper we try to remedy these disadvantages by proposing an imbalance settlement procedure with an incentive compatible cost allocation scheme for reserve capacity and deployed energy. Incentive compatible means that market participants voluntarily and truthfully state their valuation of ancillary services. We show that this approach guarantees revenue sufficiency for the system operator and provides financial incentives for balance responsible parties to keep imbalances close to zero.",
"title": ""
},
{
"docid": "8e3f8fca93ca3106b83cf85d20c061ca",
"text": "KeeLoq is a 528-round lightweight block cipher which has a 64-bit secret key and a 32-bit block length. The cube attack, proposed by Dinur and Shamir, is a new type of attacking method. In this paper, we investigate the security of KeeLoq against iterative side-channel cube attack which is an enhanced attack scheme. Based on structure of typical block ciphers, we give the model of iterative side-channel cube attack. Using the traditional single-bit leakage model, we assume that the attacker can exactly possess the information of one bit leakage after round 23. The new attack model costs a data complexity of 211.00 chosen plaintexts to recover the 23-bit key of KeeLoq. Our attack will reduce the key searching space to 241 by considering an error-free bit from internal states.",
"title": ""
},
{
"docid": "a2fb1ee73713544852292721dce21611",
"text": "Large scale implementation of active RFID tag technology has been restricted by the need for battery replacement. Prolonging battery lifespan may potentially promote active RFID tags which offer obvious advantages over passive RFID systems. This paper explores some opportunities to simulate and develop a prototype RF energy harvester for 2.4 GHz band specifically designed for low power active RFID tag application. This system employs a rectenna architecture which is a receiving antenna attached to a rectifying circuit that efficiently converts RF energy to DC current. Initial ADS simulation results show that 2 V output voltage can be achieved using a 7 stage Cockroft-Walton rectifying circuitry with -4.881 dBm (0.325 mW) output power under -4 dBm (0.398 mW) input RF signal. These results lend support to the idea that RF energy harvesting is indeed promising.",
"title": ""
},
{
"docid": "9665c72fd804d630791fdd0bc381d116",
"text": "Social Sharing of Emotion (SSE) occurs when one person shares an emotional experience with another and is considered potentially beneficial. Though social sharing has been shown prevalent in interpersonal communication, research on its occurrence and communication structure in online social networks is lacking. Based on a content analysis of blog posts (n = 540) in a blog social network site (Live Journal), we assess the occurrence of social sharing in blog posts, characterize different types of online SSE, and present a theoretical model of online SSE. A large proportion of initiation expressions were found to conform to full SSE, with negative emotion posts outnumbering bivalent and positive posts. Full emotional SSE posts were found to prevail, compared to partial feelings or situation posts. Furthermore, affective feedback predominated to cognitive and provided emotional support, empathy and admiration. The study found evidence that the process of social sharing occurs in Live Journal, replicating some features of face to face SSE. Instead of a superficial view of online social sharing, our results support a prosocial and beneficial character to online SSE. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ea7381f641c13efef1b9d838cd0a3b62",
"text": "We provide a methodology, resilient feature engineering, for creating adversarially resilient classifiers. According to existing work, adversarial attacks identify weakly correlated or non-predictive features learned by the classifier during training and design the adversarial noise to utilize these features. Therefore, highly predictive features should be used first during classification in order to determine the set of possible output labels. Our methodology focuses the problem of designing resilient classifiers into a problem of designing resilient feature extractors for these highly predictive features. We provide two theorems, which support our methodology. The Serial Composition Resilience and Parallel Composition Resilience theorems show that the output of adversarially resilient feature extractors can be combined to create an equally resilient classifier. Based on our theoretical results, we outline the design of an adversarially resilient classifier.",
"title": ""
},
{
"docid": "5ce47e3509865f7ff1c0a8521c13e857",
"text": "Breast cancer is the most common cancer in women and thus the early stage detection in breast cancer can provide potential advantage in the treatment of this disease. Early treatment not only helps to cure cancer but also helps in its prevention of its recurrence. Data mining algorithms can provide great assistance in prediction of earl y stage breast cancer that always has been a challenging research problem. The main objective of this research is to find how precisely can these data mining algorithms predict the probability of recurrence of the disease among the patients on the basis of important stated parameters. The research highlights the performance of different clustering and classification algorithms on the dataset. Experiments show that classification algorithms are better predictors than clustering algorithms. The result indicates that the decision tree (C5.0) and SVM is the best predictor with 81% accuracy on the holdout sample and fuzzy c-means came with the lowest accuracy of37% among the algorithms used in this paper.",
"title": ""
},
{
"docid": "9131f56c00023a3402b602940be621bb",
"text": "Location estimation of a wireless capsule endoscope at 400 MHz MICS band is implemented here using both RSSI and TOA-based techniques and their performance investigated. To improve the RSSI-based location estimation, a maximum likelihood (ML) estimation method is employed. For the TOA-based localization, FDTD coupled with continuous wavelet transform (CWT) is used to estimate the time of arrival and localization is performed using multilateration. The performances of the proposed localization algorithms are evaluated using a computational heterogeneous biological tissue phantom in the 402MHz-405MHz MICS band. Our investigations reveal that the accuracy obtained by TOA based method is superior to RSSI based estimates. It has been observed that the ML method substantially improves the accuracy of the RSSI-based location estimation.",
"title": ""
},
{
"docid": "7f6fb7ae1a87833e647ebdb666ffdd8c",
"text": "The aim of this study was to evaluate the presence of Vibrio isolates recovered from four different fish pond facilities in Benin City, Nigeria, determine their antibiogram profiles, and evaluate the public health implications of these findings. Fish pond water samples were collected from four sampling sites between March and September 2014. A total of 56 samples were collected and screened for the isolation of Vibrio species using standard culture-based methods. Polymerase chain reaction (PCR) was used to confirm the identities of the Vibrio species using the genus-specific and species-specific primers. Vibrio species were detected at all the study sites at a concentration on the order of 10(3) and 10(6) CFU/100 ml. A total of 550 presumptive Vibrio isolates were subjected to PCR confirmation. Of these isolates, 334 isolates tested positive, giving an overall Vibrio prevalence rate of 60.7%. The speciation of the 334 Vibrio isolates from fish ponds yielded 32.63% Vibrio fluvialis, 20.65% Vibrio parahaemolyticus, 18.26% Vibrio vulnificus, and 28.44% other Vibrio species. In all, 167 confirmed Vibrio isolates were selected from a pool of 334 confirmed Vibrio isolates for antibiogram profiling. The susceptibility profiles of 20 antimicrobial agents on the isolates revealed a high level of resistance for AMP(R), ERY(R), NAL(R), SUL(R), TMP(R), SXT(R), TET(R), OTC(R), and CHL(R). The percentage of multiple drug resistance Vibrio isolates was 67.6%. The multiple antibiotic resistance index mean value of 0.365 for the Vibrio isolates found in this study indicated that the Vibrio isolates were exposed to high-risk sources of contamination when antibiotics were frequently used. The resistant Vibrio strains could be transmitted through the food chain to humans and therefore constitutes a risk to public health.",
"title": ""
},
{
"docid": "bf333ff6237d875c34a5c62b0216d5d9",
"text": "The design of tall buildings essentially involves a conceptual design, approximate analysis, preliminary design and optimization, to safely carry gravity and lateral loads. The design criteria are, strength, serviceability, stability and human comfort. The strength is satisfied by limit stresses, while serviceability is satisfied by drift limits in the range of H/500 to H/1000. Stability is satisfied by sufficient factor of safety against buckling and P-Delta effects. The factor of safety is around 1.67 to 1.92. The human comfort aspects are satisfied by accelerations in the range of 10 to 25 milli-g, where g=acceleration due to gravity, about 981cms/sec^2. The aim of the structural engineer is to arrive at suitable structural schemes, to satisfy these criteria, and assess their structural weights in weight/unit area in square feet or square meters. This initiates structural drawings and specifications to enable construction engineers to proceed with fabrication and erection operations. The weight of steel in lbs/sqft or in kg/sqm is often a parameter the architects and construction managers are looking for from the structural engineer. This includes the weights of floor system, girders, braces and columns. The premium for wind, is optimized to yield drifts in the range of H/500, where H is the height of the tall building. Herein, some aspects of the design of gravity system, and the lateral system, are explored. Preliminary design and optimization steps are illustrated with examples of actual tall buildings designed by CBM Engineers, Houston, Texas, with whom the author has been associated with during the past 3 decades. Dr.Joseph P.Colaco, its President, has been responsible for the tallest buildings in Los Angeles, Houston, St. Louis, Dallas, New Orleans, and Washington, D.C, and with the author in its design staff as a Senior Structural Engineer. Research in the development of approximate methods of analysis, and preliminary design and optimization, has been conducted at WPI, with several of the author’s graduate students. These are also illustrated. Software systems to do approximate analysis of shear-wall frame, framed-tube, out rigger braced tall buildings are illustrated. Advanced Design courses in reinforced and pre-stressed concrete, as well as structural steel design at WPI, use these systems. Research herein, was supported by grants from NSF, Bethlehem Steel, and Army.",
"title": ""
}
] |
scidocsrr
|
7187918876edcc1d36916b6d76334f49
|
Prognostics Methods for Battery Health Monitoring Using a Bayesian Framework
|
[
{
"docid": "5e286453dfe55de305b045eaebd5f8fd",
"text": "Target tracking is an important element of surveillance, guidance or obstacle avoidance, whose role is to determine the number, position and movement of targets. The fundamental building block of a tracking system is a filter for recursive state estimation. The Kalman filter has been flogged to death as the work-horse of tracking systems since its formulation in the 60's. In this talk we look beyond the Kalman filter at sequential Monte Carlo methods, collectively referred to as particle filters. Particle filters have become a popular method for stochastic dynamic estimation problems. This popularity can be explained by a wave of optimism among practitioners that traditionally difficult nonlinear/non-Gaussian dynamic estimation problems can now be solved accurately and reliably using this methodology. The computational cost of particle filters have often been considered their main disadvantage, but with ever faster computers and more efficient particle filter algorithms, this argument is becoming less relevant. The talk is organized in two parts. First we review the historical development and current status of particle filtering and its relevance to target tracking. We then consider in detail several tracking applications where conventional (Kalman based) methods appear inappropriate (unreliable or inaccurate) and where we instead need the potential benefits of particle filters. 1 The paper was written together with David Salmond, QinetiQ, UK.",
"title": ""
},
{
"docid": "c12e906e6841753657ffe7630145708b",
"text": "We present here a complete dynamic model of a lithium ion battery that is suitable for virtual-prototyping of portable battery-powered systems. The model accounts for nonlinear equilibrium potentials, rateand temperature-dependencies, thermal effects and response to transient power demand. The model is based on publicly available data such as the manufacturers’ data sheets. The Sony US18650 is used as an example. The model output agrees both with manufacturer’s data and with experimental results. The model can be easily modified to fit data from different batteries and can be extended for wide dynamic ranges of different temperatures and current rates.",
"title": ""
}
] |
[
{
"docid": "4b988535edefeb3ff7df89bcb900dd1c",
"text": "Context: As a result of automated software testing, large amounts of software test code (script) are usually developed by software teams. Automated test scripts provide many benefits, such as repeatable, predictable, and efficient test executions. However, just like any software development activity, development of test scripts is tedious and error prone. We refer, in this study, to all activities that should be conducted during the entire lifecycle of test-code as Software Test-Code Engineering (STCE). Objective: As the STCE research area has matured and the number of related studies has increased, it is important to systematically categorize the current state-of-the-art and to provide an overview of the trends in this field. Such summarized and categorized results provide many benefits to the broader community. For example, they are valuable resources for new researchers (e.g., PhD students) aiming to conduct additional secondary studies. Method: In this work, we systematically classify the body of knowledge related to STCE through a systematic mapping (SM) study. As part of this study, we pose a set of research questions, define selection and exclusion criteria, and systematically develop and refine a systematic map. Results: Our study pool includes a set of 60 studies published in the area of STCE between 1999 and 2012. Our mapping data is available through an online publicly-accessible repository. We derive the trends for various aspects of STCE. Among our results are the following: (1) There is an acceptable mix of papers with respect to different contribution facets in the field of STCE and the top two leading facets are tool (68%) and method (65%). The studies that presented new processes, however, had a low rate (3%), which denotes the need for more process-related studies in this area. (2) Results of investigation about research facet of studies and comparing our result to other SM studies shows that, similar to other fields in software engineering, STCE is moving towards more rigorous validation approaches. (3) A good mixture of STCE activities has been presented in the primary studies. Among them, the two leading activities are quality assessment and co-maintenance of test-code with production code. The highest growth rate for co-maintenance activities in recent years shows the importance and challenges involved in this activity. (4) There are two main categories of quality assessment activity: detection of test smells and oracle assertion adequacy. (5) JUnit is the leading test framework which has been used in about 50% of the studies. (6) There is a good mixture of SUT types used in the studies: academic experimental systems (or simple code examples), real open-source and commercial systems. (7) Among 41 tools that are proposed for STCE, less than half of the tools (45%) were available for download. It is good to have this percentile of tools to be available, although not perfect, since the availability of tools can lead to higher impact on research community and industry. Conclusion: We discuss the emerging trends in STCE, and discuss the implications for researchers and practitioners in this area. The results of our systematic mapping can help researchers to obtain an overview of existing STCE approaches and spot areas in the field that require more attention from the",
"title": ""
},
{
"docid": "60114bebc1b64a3bfd5dc010a1a4891c",
"text": "Attachment anxiety is expected to be positively associated with dependence and self-criticism. However, attachment avoidance is expected to be negatively associated with dependence but positively associated with self-criticism. Both dependence and self-criticism are expected to be related to depressive symptoms. Data were analyzed from 424 undergraduate participants at a large Midwestern university, using structural equation modeling. Results indicated that the relation between attachment anxiety and depressive symptoms was fully mediated by dependence and self-criticism, whereas the relation between attachment avoidance and depressive symptoms was partially mediated by dependence and self-criticism. Moreover, through a multiple-group comparison analysis, the results indicated that men with high levels of attachment avoidance are more likely than women to be self-critical.",
"title": ""
},
{
"docid": "10f6ae0e9c254279b0cf0f5e98caa9cd",
"text": "The automatic assessment of photo quality from an aesthetic perspective is a very challenging problem. Most existing research has predominantly focused on the learning of a universal aesthetic model based on hand-crafted visual descriptors . However, this research paradigm can achieve only limited success because (1) such hand-crafted descriptors cannot well preserve abstract aesthetic properties , and (2) such a universal model cannot always capture the full diversity of visual content. To address these challenges, we propose in this paper a novel query-dependent aesthetic model with deep learning for photo quality assessment. In our method, deep aesthetic abstractions are discovered from massive images , whereas the aesthetic assessment model is learned in a query- dependent manner. Our work addresses the first problem by learning mid-level aesthetic feature abstractions via powerful deep convolutional neural networks to automatically capture the underlying aesthetic characteristics of the massive training images . Regarding the second problem, because photographers tend to employ different rules of photography for capturing different images , the aesthetic model should also be query- dependent . Specifically, given an image to be assessed, we first identify which aesthetic model should be applied for this particular image. Then, we build a unique aesthetic model of this type to assess its aesthetic quality. We conducted extensive experiments on two large-scale datasets and demonstrated that the proposed query-dependent model equipped with learned deep aesthetic abstractions significantly and consistently outperforms state-of-the-art hand-crafted feature -based and universal model-based methods.",
"title": ""
},
{
"docid": "ea5bc45b903df5c1293bc437a437ac83",
"text": "1045 We have all visited several stores to check prices and/or to find the right item or the right size. Similarly, it can take time and effort for a worker to find a suitable job with suitable pay, and for employers to receive and evaluate applications for job openings. Search theory explores the workings of markets once facts such as these are incorporated into the analysis. Adequate analysis of market frictions needs to consider how reactions to frictions change the overall economic environment: not only do frictions change incentives for buyers and sellers, but the responses to the changed incentives also alter the economic environment for all the participants in the market. Because of these feedback effects, seemingly small frictions can have large effects on outcomes. Equilibrium search theory is the development of basic models to permit analysis of economic outcomes when specific frictions are incorporated into simpler market models. The primary friction addressed by search theory is the need to spend time and effort to learn about opportunities—opportunities to buy or to sell, to hire or to be hired. There are many aspects of a job and of a worker that matter when deciding whether a particular match is worthwhile. Such frictions are naturally analyzed in models that consider a process over time—of workers seeking jobs, firms seeking employees, borrowers seeking lenders, and shoppers buying items that are not part of frequent shopping. Search theory models have altered the way we think about markets, how we interpret market data, and how we think about government policies. The complexity of the economy calls for the use of multiple models that address different aspects of the determinants of unemployment (and other) outcomes. This view was captured so well by Alfred Marshall (1890: 1948 edition, p. 366) that I have quoted this passage repeatedly since coming upon it while doing research for the Churchill Lectures (Diamond 1994b).",
"title": ""
},
{
"docid": "1db450f3e28907d6940c87d828fc1566",
"text": "The task of colorizing black and white images has previously been explored for natural images. In this paper we look at the task of colorization on a different domain: webtoons. To our knowledge this type of dataset hasn't been used before. Webtoons are usually produced in color thus they make a good dataset for analyzing different colorization models. Comics like webtoons also present some additional challenges over natural images, such as occlusion by speech bubbles and text. First we look at some of the previously introduced models' performance on this task and suggest modifications to address their problems. We propose a new model composed of two networks; one network generates sparse color information and a second network uses this generated color information as input to apply color to the whole image. These two networks are trained end-to-end. Our proposed model solves some of the problems observed with other architectures, resulting in better colorizations.",
"title": ""
},
{
"docid": "bbf242fd4722abbba0bc993a636f50c2",
"text": "Since the publication of its first edition in 1995, Artificial Intelligence: A Modern Approach has become a classic in our field. Even researchers outside AI, working in other areas of computer science, are familiar with the text and have gained a better appreciation of our field thanks to the efforts of its authors, Stuart Russell of UC Berkeley and Peter Norvig of Google Inc. It has been adopted by over 1000 universities in over 100 countries, and has provided an excellent introduction to AI to several hundreds of thousands of students worldwide. The book not only stands out in the way it provides a clear and comprehensive introduction to almost all aspects of AI for a student entering our field, it also provides a tremendous resource for experienced AI researchers interested in a good introduction to subfields of AI outside of their own area of specialization. In fact, many researchers enjoy reading insightful descriptions of their own area, combined, of course, with the tense moment of checking the author index to see whether their own work made it into the book. Fortunately, due in part to the comprehensive nature of the text, almost all AI researchers who have been around for a few years can be proud to see their own work cited. Writing such a high-quality and erudite overview of our field, while distilling key aspects of literally thousands of research papers, is a daunting task that requires a unique talent; the Russell and Norvig author team has clearly handled this challenge exceptionally well. Given the impact of the first edition of the book, a challenge for the authors was to keep such a unique text up-to-date in the face of rapid developments in AI over the past decade and a half. Fortunately, the authors have succeeded admirably in this challenge by bringing out a second edition in 2003 and now a third edition in 2010. Each of these new editions involves major rewrites and additions to the book to keep it fully current. The revisions also provide an insightful overview of the evolution of AI in recent years. The text covers essentially all major areas of AI, while providing ample and balanced coverage of each of the subareas. For certain subfields, part of the text was provided by respective subject experts. In particular, Jitendra Malik and David Forsyth contributed the chapter on computer vision, Sebastian Thrun wrote the chapter on robotics, and Vibhu Mittal helped with the chapter on natural language. Nick Hay, Mehran Sahami, and Ernest Davis contributed to the engaging set of exercises for students. Overall, this book brings together deep knowledge of various facets of AI, from the authors as well as from many experts in various subfields. The topics covered in the book are woven together via the theme of a grand challenge in AI — that of creating an intelligent agent, one that takes “the best possible [rational] action in [any] situation.” Every aspect of AI is considered in the context of such an agent. For instance, the book discusses agents that solve problems through search, planning, and reasoning, agents that are situated in the physical world, agents that learn from observations, agents that interact with the world through vision and perception, and agents that manipulate the physical world through",
"title": ""
},
{
"docid": "69eac200c7ef5e656e9fb28c13efa9b6",
"text": "A differential RF-DC CMOS converter for RF energy scavenging based on a reconfigurable voltage rectifier topology is presented. The converter efficiency and sensitivity are optimized thanks to the proposed reconfigurable architecture. Prototypes, realized in 130 nm, provide a regulated output voltage of ~2 V when working at 868 MHz, with a -21 dBm sensitivity. The circuit efficiency peaks at 60%, remaining above the 40% for a 18 dB input power range.",
"title": ""
},
{
"docid": "8da468bbb923b9d790e633c6a4fd9873",
"text": "Building Information Modeling (BIM) and Lean Thinking have been used separately as key approaches to overall construction projects’ improvement. Their combination, given several scenarios, presents opportunities for improvement as well as challenges in implementation. However, the exploration of eventual interactions and relationships between BIM as a process and Lean Construction principles is recent in research. The objective of this paper is to identify BIM and Lean relationship aspects with a focus on the construction phase and from the perspective of the general contractor (GC). This paper is based on a case study where BIM is already heavily used by the GC and where the integration of Lean practices is recent. We explore areas of improvement and Lean contributions to BIM from two perspectives. First, from Sacks et al.’s (2010) Interaction Matrix perspective, we identify some existing interactions. Second, based on the Capability Maturity Model (CMM) of the National Building Information Modeling Standard (NBIMS), we measure the level of the project’s BIM maturity and highlight areas of improvement for Lean. The main contribution of the paper is concerned with the exploration of the BIM maturity levels that are enhanced by Lean implementation.",
"title": ""
},
{
"docid": "59e29fa12539757b5084cab8f1e1b292",
"text": "This article addresses the problem of understanding mathematics described in natural language. Research in this area dates back to early 1960s. Several systems have so far been proposed to involve machines to solve mathematical problems of various domains like algebra, geometry, physics, mechanics, etc. This correspondence provides a state of the art technical review of these systems and approaches proposed by different research groups. A unified architecture that has been used in most of these approaches is identified and differences among the systems are highlighted. Significant achievements of each method are pointed out. Major strengths and weaknesses of the approaches are also discussed. Finally, present efforts and future trends in this research area are presented.",
"title": ""
},
{
"docid": "470db66b9bcff16a9a559810ce352dfa",
"text": "Abstract The state of security on the Internet is poor and progress toward increased protection is slow. This has given rise to a class of action referred to as “Ethical Hacking”. Companies are releasing software with little or no testing and no formal verification and expecting consumers to debug their product for them. For dot.com companies time-to-market is vital, security is not perceived as a marketing advantage, and implementing a secure design process an expensive sunk expense such that there is no economic incentive to produce bug-free software. There are even legislative initiatives to release software manufacturers from legal responsibility to their defective software.",
"title": ""
},
{
"docid": "68c1aa2e3d476f1f24064ed6f0f07fb7",
"text": "Granuloma annulare is a benign, asymptomatic, self-limited papular eruption found in patients of all ages. The primary skin lesion usually is grouped papules in an enlarging annular shape, with color ranging from flesh-colored to erythematous. The two most common types of granuloma annulare are localized, which typically is found on the lateral or dorsal surfaces of the hands and feet; and disseminated, which is widespread. Localized disease generally is self-limited and resolves within one to two years, whereas disseminated disease lasts longer. Because localized granuloma annulare is self-limited, no treatment other than reassurance may be necessary. There are no well-designed randomized controlled trials of the treatment of granuloma annulare. Treatment recommendations are based on the pathophysiology of the disease, expert opinion, and case reports only. Liquid nitrogen, injected steroids, or topical steroids under occlusion have been recommended for treatment of localized disease. Disseminated granuloma annulare may be treated with one of several systemic therapies such as dapsone, retinoids, niacinamide, antimalarials, psoralen plus ultraviolet A therapy, fumaric acid esters, tacrolimus, and pimecrolimus. Consultation with a dermatologist is recommended because of the possible toxicities of these agents.",
"title": ""
},
{
"docid": "702f368c8ea8313e661a3b731cec3eba",
"text": "This paper develops a new framework for explaining the dynamic aspects of business models in value webs. As companies move from research to roll-out and maturity three forces cause changes in business models. The technological forces are most important in the first phase, regulation in the second phase, and markets in the third. The forces cause change through influence on the technology, services, finances, and organizational network of the firm. As a result, partners in value webs will differ across these phases. A case study of NTT DoCoMo’s i-mode illustrates the framework.",
"title": ""
},
{
"docid": "da3b0751e3bcbc77959fefd6b056b0c6",
"text": "BACKGROUND\nPatients with complex care needs who require care across different health care settings are vulnerable to experiencing serious quality problems. A care transitions intervention designed to encourage patients and their caregivers to assert a more active role during care transitions may reduce rehospitalization rates.\n\n\nMETHODS\nRandomized controlled trial. Between September 1, 2002, and August 31, 2003, patients were identified at the time of hospitalization and were randomized to receive the intervention or usual care. The setting was a large integrated delivery system located in Colorado. Subjects (N = 750) included community-dwelling adults 65 years or older admitted to the study hospital with 1 of 11 selected conditions. Intervention patients received (1) tools to promote cross-site communication, (2) encouragement to take a more active role in their care and to assert their preferences, and (3) continuity across settings and guidance from a \"transition coach.\" Rates of rehospitalization were measured at 30, 90, and 180 days.\n\n\nRESULTS\nIntervention patients had lower rehospitalization rates at 30 days (8.3 vs 11.9, P = .048) and at 90 days (16.7 vs 22.5, P = .04) than control subjects. Intervention patients had lower rehospitalization rates for the same condition that precipitated the index hospitalization at 90 days (5.3 vs 9.8, P = .04) and at 180 days (8.6 vs 13.9, P = .046) than controls. The mean hospital costs were lower for intervention patients ($2058) vs controls ($2546) at 180 days (log-transformed P = .049).\n\n\nCONCLUSION\nCoaching chronically ill older patients and their caregivers to ensure that their needs are met during care transitions may reduce the rates of subsequent rehospitalization.",
"title": ""
},
{
"docid": "de638a90e5a6ef3bf030d998b0e921a3",
"text": "The quantization techniques have shown competitive performance in approximate nearest neighbor search. The state-of-the-art algorithm, composite quantization, takes advantage of the compositionabity, i.e., the vector approximation accuracy, as opposed to product quantization and Cartesian k-means. However, we have observed that the runtime cost of computing the distance table in composite quantization, which is used as a lookup table for fast distance computation, becomes nonnegligible in real applications, e.g., reordering the candidates retrieved from the inverted index when handling very large scale databases. To address this problem, we develop a novel approach, called sparse composite quantization, which constructs sparse dictionaries. The benefit is that the distance evaluation between the query and the dictionary element (a sparse vector) is accelerated using the efficient sparse vector operation, and thus the cost of distance table computation is reduced a lot. Experiment results on large scale ANN retrieval tasks (1M SIFTs and 1B SIFTs) and applications to object retrieval show that the proposed approach yields competitive performance: superior search accuracy to product quantization and Cartesian k-means with almost the same computing cost, and much faster ANN search than composite quantization with the same level of accuracy.",
"title": ""
},
{
"docid": "70e6ce1ae00e6a6ed9af1f62f9764150",
"text": "Citations play a pivotal role in indicating various aspects of scientific literature. Quantitative citation analysis approaches have been used over the decades to measure the impact factor of journals, to rank researchers or institutions, to discover evolving research topics etc. Researchers doubted the pure quantitative citation analysis approaches and argued that all citations are not equally important; citation reasons must be considered while counting. In the recent past, researchers have focused on identifying important citation reasons by classifying them into important and non-important classes rather than individually classifying each reason. Most of contemporary citation classification techniques either rely on full content of articles, or they are dominated by content based features. However, most of the time content is not freely available as various journal publishers do not provide open access to articles. This paper presents a binary citation classification scheme, which is dominated by metadata based parameters. The study demonstrates the significance of metadata and content based parameters in varying scenarios. The experiments are performed on two annotated data sets, which are evaluated by employing SVM, KLR, Random Forest machine learning classifiers. The results are compared with the contemporary study that has performed similar classification employing rich list of content-based features. The results of comparisons revealed that the proposed model has attained improved value of precision (i.e., 0.68) just by relying on freely available metadata. We claim that the proposed approach can serve as the best alternative in the scenarios wherein content in unavailable.",
"title": ""
},
{
"docid": "b0accba2373a30c74dfade4bc616d0d2",
"text": "Image captioning often requires a large set of training image-sentence pairs. In practice, however, acquiring sufficient training pairs is always expensive, making the recent captioning models limited in their ability to describe objects outside of training corpora (i.e., novel objects). In this paper, we present Long Short-Term Memory with Copying Mechanism (LSTM-C) — a new architecture that incorporates copying into the Convolutional Neural Networks (CNN) plus Recurrent Neural Networks (RNN) image captioning framework, for describing novel objects in captions. Specifically, freely available object recognition datasets are leveraged to develop classifiers for novel objects. Our LSTM-C then nicely integrates the standard word-by-word sentence generation by a decoder RNN with copying mechanism which may instead select words from novel objects at proper places in the output sentence. Extensive experiments are conducted on both MSCOCO image captioning and ImageNet datasets, demonstrating the ability of our proposed LSTM-C architecture to describe novel objects. Furthermore, superior results are reported when compared to state-of-the-art deep models.",
"title": ""
},
{
"docid": "9bb4934d1d7b5ea27b251e4153b4fbc7",
"text": "In the usual setting of Machine Learning, classifiers are typically evaluated by estimating their error rate (or equi valently, the classification accuracy) on the test data. However, this mak es sense only if all errors have equal (uniform) costs. When the costs of errors differ between each other, the classifiers should be eva luated by comparing the total costs of the errors. Classifiers are typically designed to minimize the number of errors (incorrect classifications) made. When misclassification c sts vary between classes, this approach is not suitable. In this case the total misclassification cost should be minimized. In Machine Learning, only little work for dealing with nonuniform misclassification costs has been done. This paper pr esents a few different approaches for cost-sensitive modifications f the backpropagation learning algorithm for multilayered feedforw a d neural networks . The described approaches are thoroughly tested a nd evaluated on several standard benchmark domains.",
"title": ""
},
{
"docid": "c745d38b84f26933de9b48b6954a259d",
"text": "Pervasive wireless multimedia applications often require Personal Digital Assistants (PDAs) for processing and playback. The capability of PDAs, however, are generally much lower than desktop PCs. When these devices are used to play back video delivered over a network from a desktop server, their buffers can easily overflow, seriously degrading the video quality. In this paper, we report our implementation of some special stream processing techniques to deal with the capability mismatch between a PC and PDAs for low-delay live video streaming. These techniques are, the Selective Packet Drop (SPD) algorithm, the Game API (GAPI) optimization and the speed adaptation algorithm. All of them can be easily implemented. We show that our system provides much better video quality than systems without our techniques.",
"title": ""
},
{
"docid": "20e105b3b8d4469b2ddc0dbbc2a64082",
"text": "For over a century, heme metabolism has been recognized to play a central role during intraerythrocytic infection by Plasmodium parasites, the causative agent of malaria. Parasites liberate vast quantities of potentially cytotoxic heme as a by-product of hemoglobin catabolism within the digestive vacuole, where heme is predominantly sequestered as inert crystalline hemozoin. Plasmodium spp. also utilize heme as a metabolic cofactor. Despite access to abundant host-derived heme, parasites paradoxically maintain a biosynthetic pathway. This pathway has been assumed to produce the heme incorporated into mitochondrial cytochromes that support electron transport. In this review, we assess our current understanding of the love-hate relationship between Plasmodium parasites and heme, we discuss recent studies that clarify several long-standing riddles about heme production and utilization by parasites, and we consider remaining challenges and opportunities for understanding and targeting heme metabolism within parasites.",
"title": ""
},
{
"docid": "ac9bfa64fa41d4f22fc3c45adaadb099",
"text": "Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend towards increasing the use of in-vehicle information systems is critical because they induce visual, biomechanical and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have appeared to be attractive for both drivers and researchers. Biomechanical, visual and cognitive distractions are the most commonly detected types in video-based algorithms. Many distraction detection systems only use a single visual cue and therefore, they may be easily disturbed when occlusion or illumination changes appear. Moreover, the combination of these visual cues is a key and challenging aspect in the development of robust distraction detection systems. These visual cues can be extracted mainly by using face monitoring systems but they should be completed with more visual cues (e.g., hands or body information) or even, distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should be included in an embedded device or system inside a car. This is not a trivial task and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility and short time-to-market. The key points for the development and implementation of sensors to carry out the detection of distraction will also be reviewed. This paper shows a review of the role of computer vision technology applied to the development of monitoring systems to detect distraction. Some key points considered as both future work and challenges ahead yet to be solved will also be addressed.",
"title": ""
}
] |
scidocsrr
|
484ba951f3163ce642674e4b2b561ca6
|
State of the Art on 3D Reconstruction with RGB-D Cameras
|
[
{
"docid": "fb898ef1b13d68ca3b5973b77237de74",
"text": "We present a nonrigid alignment algorithm for aligning high-resolution range data in the presence of low-frequency deformations, such as those caused by scanner calibration error. Traditional iterative closest points (ICP) algorithms, which rely on rigid-body alignment, fail in these cases because the error appears as a nonrigid warp in the data. Our algorithm combines the robustness and efficiency of ICP with the expressiveness of thin-plate splines to align high-resolution scanned data accurately, such as scans from the Digital Michelangelo Project [M. Levoy et al. (2000)]. This application is distinguished from previous uses of the thin-plate spline by the fact that the resolution and size of warping are several orders of magnitude smaller than the extent of the mesh, thus requiring especially precise feature correspondence.",
"title": ""
},
{
"docid": "e5f30c0d2c25b6b90c136d1c84ba8a75",
"text": "Modern systems for real-time hand tracking rely on a combination of discriminative and generative approaches to robustly recover hand poses. Generative approaches require the specification of a geometric model. In this paper, we propose a the use of sphere-meshes as a novel geometric representation for real-time generative hand tracking. How tightly this model fits a specific user heavily affects tracking precision. We derive an optimization to non-rigidly deform a template model to fit the user data in a number of poses. This optimization jointly captures the user's static and dynamic hand geometry, thus facilitating high-precision registration. At the same time, the limited number of primitives in the tracking template allows us to retain excellent computational performance. We confirm this by embedding our models in an open source real-time registration algorithm to obtain a tracker steadily running at 60Hz. We demonstrate the effectiveness of our solution by qualitatively and quantitatively evaluating tracking precision on a variety of complex motions. We show that the improved tracking accuracy at high frame-rate enables stable tracking of extended and complex motion sequences without the need for per-frame re-initialization. To enable further research in the area of high-precision hand tracking, we publicly release source code and evaluation datasets.",
"title": ""
},
{
"docid": "d0ed732b324f8bfd12ad13ba45e73edc",
"text": "An intrinsic image is a decomposition of a photo into an illumination layer and a reflectance layer, which enables powerful editing such as the alteration of an object's material independently of its illumination. However, decomposing a single photo is highly under-constrained and existing methods require user assistance or handle only simple scenes. In this paper, we compute intrinsic decompositions using several images of the same scene under different viewpoints and lighting conditions. We use multi-view stereo to automatically reconstruct 3D points and normals from which we derive relationships between reflectance values at different locations, across multiple views and consequently different lighting conditions. We use robust estimation to reliably identify reflectance ratios between pairs of points. From these, we infer constraints for our optimization and enforce a coherent solution across multiple views and illuminations. Our results demonstrate that this constrained optimization yields high-quality and coherent intrinsic decompositions of complex scenes. We illustrate how these decompositions can be used for image-based illumination transfer and transitions between views with consistent lighting.",
"title": ""
},
{
"docid": "8e6debae3b3d3394e87e671a14f8819e",
"text": "Access to large, diverse RGB-D datasets is critical for training RGB-D scene understanding algorithms. However, existing datasets still cover only a limited number of views or a restricted scale of spaces. In this paper, we introduce Matterport3D, a large-scale RGB-D dataset containing 10,800 panoramic views from 194,400 RGB-D images of 90 building-scale scenes. Annotations are provided with surface reconstructions, camera poses, and 2D and 3D semantic segmentations. The precise global alignment and comprehensive, diverse panoramic set of views over entire buildings enable a variety of supervised and self-supervised computer vision tasks, including keypoint matching, view overlap prediction, normal prediction from color, semantic segmentation, and region classification.",
"title": ""
}
] |
[
{
"docid": "587ca964abb5708c896e2e4475116a6d",
"text": "The design and implementation of software for medical devices is challenging due to the closed-loop interaction with the patient, which is a stochastic physical environment. The safety-critical nature and the lack of existing industry standards for verification make this an ideal domain for exploring applications of formal modeling and closed-loop analysis. The biggest challenge is that the environment model(s) have to be both complex enough to express the physiological requirements and general enough to cover all possible inputs to the device. In this effort, we use a dual chamber implantable pacemaker as a case study to demonstrate verification of software specifications of medical devices as timed-automata models in UPPAAL. The pacemaker model is based on the specifications and algorithm descriptions from Boston Scientific. The heart is modeled using timed automata based on the physiology of heart. The model is gradually abstracted with timed simulation to preserve properties. A manual Counter-Example-Guided Abstraction and Refinement (CEGAR) framework has been adapted to refine the heart model when spurious counter-examples are found. To demonstrate the closed-loop nature of the problem and heart model refinement, we investigated two clinical cases of Pacemaker Mediated Tachycardia and verified their corresponding correction algorithms in the pacemaker. Along with our tools for code generation from UPPAAL models, this effort enables model-driven design and certification of software for medical devices.",
"title": ""
},
{
"docid": "9a9be12c677356314b8466b430b83546",
"text": "Reality-based modeling of vibrations has been used to enhance the haptic display of virtual environments for impact events such as tapping, although the bandwidths of many haptic displays make it difficult to accurately replicate the measured vibrations. We propose modifying reality-based vibration parameters through a series of perceptual experiments with a haptic display. We created a vibration feedback model, a decaying sinusoidal waveform, by measuring the acceleration of the stylus of a three degree-of-freedom haptic display as a human user tapped it on several real materials. For some materials, the measured parameters (amplitude, frequency, and decay rate) were greater than the bandwidth of the haptic display; therefore, the haptic device was not capable of actively displaying all the vibration models. A series of perceptual experiments, where human users rated the realism of various parameter combinations, were performed to further enhance the realism of the vibration display for impact events given these limitations. The results provided different parameters than those derived strictly from acceleration data. Additional experiments verified the effectiveness of these modified model parameters by showing that users could differentiate between materials in a virtual environment.",
"title": ""
},
{
"docid": "885bf946dbbfc462cd066794fe486da3",
"text": "Efficient implementation of block cipher is important on the way to achieving high efficiency with good understand ability. Numerous number of block cipher including Advance Encryption Standard have been implemented using different platform. However the understanding of the AES algorithm step by step is very complicated. This paper presents the implementation of AES algorithm and explains Avalanche effect with the help of Avalanche test result. For this purpose we use Xilinx ISE 9.1i platform in Algorithm development and ModelSim SE 6.3f platform for results confirmation and computation.",
"title": ""
},
{
"docid": "a549abeda438ce7ce001854aadb63d81",
"text": "Cyberbullying is a phenomenon which negatively affects the individuals, the victims suffer from various mental issues, ranging from depression, loneliness, anxiety to low self-esteem. In parallel with the pervasive use of social media, cyberbullying is becoming more and more prevalent. Traditional mechanisms to fight against cyberbullying include the use of standards and guidelines, human moderators, and blacklists based on the profane words. However, these mechanisms fall short in social media and cannot scale well. Therefore, it is necessary to develop a principled learning framework to automatically detect cyberbullying behaviors. However, it is a challenging task due to short, noisy and unstructured content information and intentional obfuscation of the abusive words or phrases by social media users. Motivated by sociological and psychological findings on bullying behaviors and the correlation with emotions, we propose to leverage sentiment information to detect cyberbullying behaviors in social media by proposing a sentiment informed cyberbullying detection framework. Experimental results on two realworld, publicly available social media datasets show the superiority of the proposed framework. Further studies validate the effectiveness of leveraging sentiment information for cyberbullying detection.",
"title": ""
},
{
"docid": "297a61a2c04c8553da9168d0f72a1d64",
"text": "CONTEXT\nSelf-myofascial release (SMR) is a technique used to treat myofascial restrictions and restore soft-tissue extensibility.\n\n\nPURPOSE\nTo determine whether the pressure and contact area on the lateral thigh differ between a Multilevel rigid roller (MRR) and a Bio-Foam roller (BFR) for participants performing SMR.\n\n\nPARTICIPANTS\nTen healthy young men and women.\n\n\nMETHODS\nParticipants performed an SMR technique on the lateral thigh using both myofascial rollers. Thin-film pressure sensels recorded pressure and contact area during each SMR trial.\n\n\nRESULTS\nMean sensel pressure exerted on the soft tissue of the lateral thigh by the MRR (51.8 +/- 10.7 kPa) was significantly (P < .001) greater than that of the conventional BFR (33.4 +/- 6.4 kPa). Mean contact area of the MRR (47.0 +/- 16.1 cm2) was significantly (P < .005) less than that of the BFR (68.4 +/- 25.3 cm2).\n\n\nCONCLUSION\nThe significantly higher pressure and isolated contact area with the MRR suggest a potential benefit in SMR.",
"title": ""
},
{
"docid": "7161122eaa9c9766e9914ba0f2ee66ef",
"text": "Cross-linguistically consistent annotation is necessary for sound comparative evaluation and cross-lingual learning experiments. It is also useful for multilingual system development and comparative linguistic studies. Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework. In this paper, we describe v1 of the universal guidelines, the underlying design principles, and the currently available treebanks for 33 languages.",
"title": ""
},
{
"docid": "bc08376826b93626f43168bc35b09346",
"text": "Powered lower limb exoskeletons become more and more common and are used for different applications such as rehabilitation and human strength augmentation. However, many of the existing designs are limited to a very specific task. We present the concept and design of a modular, reconfigurable lower limb exoskeleton that can be adapted with respect to it's kinematics and actuation to a wide range of users and applications. The introduced modular design allows to compose lower limb exoskeletons with up to four degrees of freedom per leg: abduction/adduction of the hip and flexion/extension of the hip, knee, and ankle. Each passive joint can be extended by a module in terms of functionality, e.g. by actuation or spring stabilization. Three different modules are introduced: one actuator for active support and two spring modules that stabilize the joint around an equilibrium position. Using the presented components, three exemplary systems are composed and discussed: An active hip support exoskeleton, a fully articulated lower limb exoskeleton for gait support and a system for treadmill based gait rehabilitation. The three designs show that the concept of a modular exoskeleton is feasible and that it can be an alternative to a specialized system for environments with changing requirements, for example in rehabilitation or research.",
"title": ""
},
{
"docid": "4f846635e4f23b7630d0c853559f71dc",
"text": "Parkinson's disease, known also as striatal dopamine deficiency syndrome, is a degenerative disorder of the central nervous system characterized by akinesia, muscular rigidity, tremor at rest, and postural abnormalities. In early stages of parkinsonism, there appears to be a compensatory increase in the number of dopamine receptors to accommodate the initial loss of dopamine neurons. As the disease progresses, the number of dopamine receptors decreases, apparently due to the concomitant degeneration of dopamine target sites on striatal neurons. The loss of dopaminergic neurons in Parkinson's disease results in enhanced metabolism of dopamine, augmenting the formation of H2O2, thus leading to generation of highly neurotoxic hydroxyl radicals (OH.). The generation of free radicals can also be produced by 6-hydroxydopamine or MPTP which destroys striatal dopaminergic neurons causing parkinsonism in experimental animals as well as human beings. Studies of the substantia nigra after death in Parkinson's disease have suggested the presence of oxidative stress and depletion of reduced glutathione; a high level of total iron with reduced level of ferritin; and deficiency of mitochondrial complex I. New approaches designed to attenuate the effects of oxidative stress and to provide neuroprotection of striatal dopaminergic neurons in Parkinson's disease include blocking dopamine transporter by mazindol, blocking NMDA receptors by dizocilpine maleate, enhancing the survival of neurons by giving brain-derived neurotrophic factors, providing antioxidants such as vitamin E, or inhibiting monoamine oxidase B (MAO-B) by selegiline. Among all of these experimental therapeutic refinements, the use of selegiline has been most successful in that it has been shown that selegiline may have a neurotrophic factor-like action rescuing striatal neurons and prolonging the survival of patients with Parkinson's disease.",
"title": ""
},
{
"docid": "04b66d9285404e7fb14fcec3cd66316a",
"text": "Amazon Aurora is a relational database service for OLTP workloads offered as part of Amazon Web Services (AWS). In this paper, we describe the architecture of Aurora and the design considerations leading to that architecture. We believe the central constraint in high throughput data processing has moved from compute and storage to the network. Aurora brings a novel architecture to the relational database to address this constraint, most notably by pushing redo processing to a multi-tenant scale-out storage service, purpose-built for Aurora. We describe how doing so not only reduces network traffic, but also allows for fast crash recovery, failovers to replicas without loss of data, and fault-tolerant, self-healing storage. We then describe how Aurora achieves consensus on durable state across numerous storage nodes using an efficient asynchronous scheme, avoiding expensive and chatty recovery protocols. Finally, having operated Aurora as a production service for over 18 months, we share the lessons we have learnt from our customers on what modern cloud applications expect from databases.",
"title": ""
},
{
"docid": "2650ec74eb9b8c368f213212218989ea",
"text": "Illumina-based next generation sequencing (NGS) has accelerated biomedical discovery through its ability to generate thousands of gigabases of sequencing output per run at a fraction of the time and cost of conventional technologies. The process typically involves four basic steps: library preparation, cluster generation, sequencing, and data analysis. In 2015, a new chemistry of cluster generation was introduced in the newer Illumina machines (HiSeq 3000/4000/X Ten) called exclusion amplification (ExAmp), which was a fundamental shift from the earlier method of random cluster generation by bridge amplification on a non-patterned flow cell. The ExAmp peer-reviewed) is the author/funder. All rights reserved. No reuse allowed without permission. The copyright holder for this preprint (which was not . http://dx.doi.org/10.1101/125724 doi: bioRxiv preprint first posted online Apr. 9, 2017;",
"title": ""
},
{
"docid": "0f2021268693abc34ac6ca7ddcc12534",
"text": "The purpose of this article is to discuss the scope and functionality of a versatile environment for testing small- and large-scale nonlinear optimization algorithms. Although many of these facilities were originally produced by the authors in conjunction with the software package LANCELOT, we believe that they will be useful in their own right and should be available to researchers for their development of optimization software. The tools can be obtained by anonymous ftp from a number of sources and may, in many cases, be installed automatically. The scope of a major collection of test problems written in the standard input format (SIF) used by the LANCELOT software package is described. Recognizing that most software was not written with the SIF in mind, we provide tools to assist in building an interface between this input format and other optimization packages. These tools provide a link between the SIF and a number of existing packages, including MINOS and OSL. Additionally, as each problem includes a specific classification that is designed to be useful in identifying particular classes of problems, facilities are provided to build and manage a database of this information. There is a Unix and C shell bias to many of the descriptions in the article, since, for the sake of simplicity, we do not illustrate everything in its fullest generality. We trust that the majority of potential users are sufficiently familiar with Unix that these examples will not lead to undue confusion.",
"title": ""
},
{
"docid": "17b68f3275ce077e6c4e9f4c0006c43c",
"text": "A compact folded dipole antenna for millimeter-wave (MMW) energy harvesting is proposed in this paper. The antenna consists of two folded arms excited by a coplanar stripline (CPS). A coplanar waveguide (CPW) to coplanar stripline (CPS) transformer is introduced for wide band operation. The antenna radiates from 33 GHz to 41 GHz with fractional bandwidth about 21.6%. The proposed antenna shows good radiation characteristics and low VSWR, lower than 2, as well as average antenna gain is around 5 dBi over the whole frequency range. The proposed dipole antenna shows about 49% length reduction. The simulated results using both Ansoft HFSS and CST Microwave Studio show a very good agreement between them.",
"title": ""
},
{
"docid": "447b61671cf5e6762e56ab5561983842",
"text": "Biological phosphorous (P) and nitrogen (N) removal from municipal wastewater was studied using an innovative anoxic-aerobic-anaerobic side-stream treatment system. The impact of influent water quality including chemical oxygen demand (COD), ammonium and orthophosphate concentrations on the reactor performance was evaluated. The results showed the system was very effective at removing both COD (>88%) and NH4+-N (>96%) despite varying influent concentrations of COD, NH4+-N, and total PO43--P. In contrast, it was found that the removal of P was sensitive to influent NH4+-N and PO43--P concentrations. The maximum PO43--P removal of 79% was achieved with the lowest influent NH4+-N and PO43--P concentration. Quantitative PCR (qPCR) assays showed a high abundance and diversity of phosphate accumulating organisms (PAO), nitrifiers and denitrifiers. The MiSeq microbial community structure analysis showed that the Proteobacteria (especially β-Proteobacteria, and γ-Proteobacteria) were the dominant in all reactors. Further analysis of the bacteria indicated the presence of diverse PAO genera including Candidatus Accumulibacter phosphatis, Tetrasphaera, and Rhodocyclus, and the denitrifying PAO (DPAO) genus Dechloromonas. Interestingly, no glycogen accumulating organisms (GAOs) were detected in any of the reactors, suggesting the advantage of proposed process in term of PAO selection for enhanced P removal compared with conventional main-stream processes.",
"title": ""
},
{
"docid": "7fef9bfd0e71a08d5574affb91d0c9ed",
"text": "This paper presents a novel 3D indoor Laser-aided Inertial Navigation System (L-INS) for the visually impaired. An Extended Kalman Filter (EKF) fuses information from an Inertial Measurement Unit (IMU) and a 2D laser scanner, to concurrently estimate the six degree-of-freedom (d.o.f.) position and orientation (pose) of the person and a 3D map of the environment. The IMU measurements are integrated to obtain pose estimates, which are subsequently corrected using line-to-plane correspondences between linear segments in the laser-scan data and orthogonal structural planes of the building. Exploiting the orthogonal building planes ensures fast and efficient initialization and estimation of the map features while providing human-interpretable layout of the environment. The L-INS is experimentally validated by a person traversing a multistory building, and the results demonstrate the reliability and accuracy of the proposed method for indoor localization and mapping.",
"title": ""
},
{
"docid": "0b357696dd2b68a7cef39695110e4e1b",
"text": "Polypharmacology has emerged as novel means in drug discovery for improving treatment response in clinical use. However, to really capitalize on the polypharmacological effects of drugs, there is a critical need to better model and understand how the complex interactions between drugs and their cellular targets contribute to drug efficacy and possible side effects. Network graphs provide a convenient modeling framework for dealing with the fact that most drugs act on cellular systems through targeting multiple proteins both through on-target and off-target binding. Network pharmacology models aim at addressing questions such as how and where in the disease network should one target to inhibit disease phenotypes, such as cancer growth, ideally leading to therapies that are less vulnerable to drug resistance and side effects by means of attacking the disease network at the systems level through synergistic and synthetic lethal interactions. Since the exponentially increasing number of potential drug target combinations makes pure experimental approach quickly unfeasible, this review depicts a number of computational models and algorithms that can effectively reduce the search space for determining the most promising combinations for experimental evaluation. Such computational-experimental strategies are geared toward realizing the full potential of multi-target treatments in different disease phenotypes. Our specific focus is on system-level network approaches to polypharmacology designs in anticancer drug discovery, where we give representative examples of how network-centric modeling may offer systematic strategies toward better understanding and even predicting the phenotypic responses to multi-target therapies.",
"title": ""
},
{
"docid": "1488c4ad77f042cbc67aa1681fca8d7e",
"text": "Mining chemical-induced disease relations embedded in the vast biomedical literature could facilitate a wide range of computational biomedical applications, such as pharmacovigilance. The BioCreative V organized a Chemical Disease Relation (CDR) Track regarding chemical-induced disease relation extraction from biomedical literature in 2015. We participated in all subtasks of this challenge. In this article, we present our participation system Chemical Disease Relation Extraction SysTem (CD-REST), an end-to-end system for extracting chemical-induced disease relations in biomedical literature. CD-REST consists of two main components: (1) a chemical and disease named entity recognition and normalization module, which employs the Conditional Random Fields algorithm for entity recognition and a Vector Space Model-based approach for normalization; and (2) a relation extraction module that classifies both sentence-level and document-level candidate drug-disease pairs by support vector machines. Our system achieved the best performance on the chemical-induced disease relation extraction subtask in the BioCreative V CDR Track, demonstrating the effectiveness of our proposed machine learning-based approaches for automatic extraction of chemical-induced disease relations in biomedical literature. The CD-REST system provides web services using HTTP POST request. The web services can be accessed fromhttp://clinicalnlptool.com/cdr The online CD-REST demonstration system is available athttp://clinicalnlptool.com/cdr/cdr.html. Database URL:http://clinicalnlptool.com/cdr;http://clinicalnlptool.com/cdr/cdr.html.",
"title": ""
},
{
"docid": "95c634481e8c4468483ef447676098b6",
"text": "The success of cancer immunotherapy has generated tremendous interest in identifying new immunotherapeutic targets. To date, the majority of therapies have focussed on stimulating the adaptive immune system to attack cancer, including agents targeting CTLA-4 and the PD-1/PD-L1 axis. However, macrophages and other myeloid immune cells offer much promise as effectors of cancer immunotherapy. The CD47/signal regulatory protein alpha (SIRPα) axis is a critical regulator of myeloid cell activation and serves a broader role as a myeloid-specific immune checkpoint. CD47 is highly expressed on many different types of cancer, and it transduces inhibitory signals through SIRPα on macrophages and other myeloid cells. In a diverse range of preclinical models, therapies that block the CD47/SIRPα axis stimulate phagocytosis of cancer cells in vitro and anti-tumour immune responses in vivo. A number of therapeutics that target the CD47/SIRPα axis are under preclinical and clinical investigation. These include anti-CD47 antibodies, engineered receptor decoys, anti-SIRPα antibodies and bispecific agents. These therapeutics differ in their pharmacodynamic, pharmacokinetic and toxicological properties. Clinical trials are underway for both solid and haematologic malignancies using anti-CD47 antibodies and recombinant SIRPα proteins. Since the CD47/SIRPα axis also limits the efficacy of tumour-opsonising antibodies, additional trials will examine their potential synergy with agents such as rituximab, cetuximab and trastuzumab. Phagocytosis in response to CD47/SIRPα-blocking agents results in antigen uptake and presentation, thereby linking the innate and adaptive immune systems. CD47/SIRPα blocking therapies may therefore synergise with immune checkpoint inhibitors that target the adaptive immune system. As a critical regulator of macrophage phagocytosis and activation, the potential applications of CD47/SIRPα blocking therapies extend beyond human cancer. They may be useful for the treatment of infectious disease, conditioning for stem cell transplant, and many other clinical indications.",
"title": ""
},
{
"docid": "f70b6d0a0b315a1ca87ccf5184c43da4",
"text": "Transmitting secret information through internet requires more security because of interception and improper manipulation by eavesdropper. One of the most desirable explications of this is “Steganography”. This paper proposes a technique of steganography using Advanced Encryption Standard (AES) with secured hash function in the blue channel of image. The embedding system is done by dynamic bit adjusting system in blue channel of RGB images. It embeds message bits to deeper into the image intensity which is very difficult for any type improper manipulation of hackers. Before embedding text is encrypted using AES with a hash function. For extraction the cipher text bit is found from image intensity using the bit adjusting extraction algorithm and then it is decrypted by AES with same hash function to get the real secret text. The proposed approach is better in Pick Signal to Noise Ratio (PSNR) value and less in histogram error between stego images and cover images than some existing systems. KeywordsAES-128, SHA-512, Cover Image, Stego image, Bit Adjusting, Blue Channel",
"title": ""
},
{
"docid": "6d1c4530ba67b931729d9773debabb65",
"text": "This paper explores the idea that the universe is a virtual reality created by information processing, and relates this strange idea to the findings of modern physics about the physical world. The virtual reality concept is familiar to us from online worlds, but the world as a virtual reality is usually a subject for science fiction rather than science. Yet logically the world could be an information simulation running on a three-dimensional space-time screen. Indeed, that the essence of the universe is information has advantages, e.g. if matter, charge, energy and movement are aspects of information, the many conservation laws could become a single law of information conservation. If the universe were a virtual reality, its creation at the big bang would no longer be paradoxical, as every virtual system must be booted up. It is suggested that whether the world is an objective or a virtual reality is a matter for science to resolve. Modern computer science can help suggest a model that derives core physical properties like space, time, light, matter and movement from information processing. Such an approach could reconcile relativity and quantum theories, with the former being how information processing creates space-time, and the latter how it creates energy and matter.",
"title": ""
},
{
"docid": "b3ae663766c408feae5087ceef9916df",
"text": "High Efficiency Video Coding, the latest video standard, uses larger and variable-sized coding units and longer interpolation filters than H.264/AVC to better exploit redundancy in video signals. These algorithmic techniques enable a 50% decrease in bitrate at the cost of computational complexity, external memory bandwidth, and, for ASIC implementations, on-chip SRAM of the video codec. This paper describes architectural optimizations for an HEVC video decoder chip. The chip uses a two-stage subpipelining scheme to reduce on-chip SRAM by 56 kbytes-a 32% reduction. A high-throughput read-only cache combined with DRAM-latency-aware memory mapping reduces DRAM bandwidth by 67%. The chip is built for HEVC Working Draft 4 Low Complexity configuration and occupies 1.77 mm2 in 40-nm CMOS. It performs 4K Ultra HD 30-fps video decoding at 200 MHz while consuming 1.19 nJ/pixel of normalized system power.",
"title": ""
}
] |
scidocsrr
|
133e6c414ef8cfb4ad5096082e2cf8d2
|
5G Backhaul Challenges and Emerging Research Directions: A Survey
|
[
{
"docid": "121f1baeaba51ebfdfc69dde5cd06ce3",
"text": "Mobile operators are facing an exponential traffic growth due to the proliferation of portable devices that require a high-capacity connectivity. This, in turn, leads to a tremendous increase of the energy consumption of wireless access networks. A promising solution to this problem is the concept of heterogeneous networks, which is based on the dense deployment of low-cost and low-power base stations, in addition to the traditional macro cells. However, in such a scenario the energy consumed by the backhaul, which aggregates the traffic from each base station towards the metro/core segment, becomes significant and may limit the advantages of heterogeneous network deployments. This paper aims at assessing the impact of backhaul on the energy consumption of wireless access networks, taking into consideration different data traffic requirements (i.e., from todays to 2020 traffic levels). Three backhaul architectures combining different technologies (i.e., copper, fiber, and microwave) are considered. Results show that backhaul can amount to up to 50% of the power consumption of a wireless access network. On the other hand, hybrid backhaul architectures that combines fiber and microwave performs relatively well in scenarios where the wireless network is characterized by a high small-base-stations penetration rate.",
"title": ""
},
{
"docid": "471c52fca57c672267ef69e3e3db9cd9",
"text": "This paper presents the approach of extending cellular networks with millimeter-wave backhaul and access links. Introducing a logical split between control and user plane will permit full coverage while seamlessly achieving very high data rates in the vicinity of mm-wave small cells.",
"title": ""
}
] |
[
{
"docid": "0d18f41db76330c5d9cdceb268ca3434",
"text": "A Low-power convolutional neural network (CNN)-based face recognition system is proposed for the user authentication in smart devices. The system consists of two chips: an always-on CMOS image sensor (CIS)-based face detector (FD) and a low-power CNN processor. For always-on FD, analog–digital Hybrid Haar-like FD is proposed to improve the energy efficiency of FD by 39%. For low-power CNN processing, the CNN processor with 1024 MAC units and 8192-bit-wide local distributed memory operates at near threshold voltage, 0.46 V with 5-MHz clock frequency. In addition, the separable filter approximation is adopted for the workload reduction of CNN, and transpose-read SRAM using 7T SRAM cell is proposed to reduce the activity factor of the data read operation. Implemented in 65-nm CMOS technology, the <inline-formula> <tex-math notation=\"LaTeX\">$3.30 \\times 3.36$ </tex-math></inline-formula> mm<sup>2</sup> CIS chip and the <inline-formula> <tex-math notation=\"LaTeX\">$4 \\times 4$ </tex-math></inline-formula> mm<sup>2</sup> CNN processor consume 0.62 mW to evaluate one face at 1 fps and achieved 97% accuracy in LFW dataset.",
"title": ""
},
{
"docid": "e58036f93195603cb7dc7265b9adeb25",
"text": "Pseudomonas aeruginosa thrives in many aqueous environments and is an opportunistic pathogen that can cause both acute and chronic infections. Environmental conditions and host defenses cause differing stresses on the bacteria, and to survive in vastly different environments, P. aeruginosa must be able to adapt to its surroundings. One strategy for bacterial adaptation is to self-encapsulate with matrix material, primarily composed of secreted extracellular polysaccharides. P. aeruginosa has the genetic capacity to produce at least three secreted polysaccharides; alginate, Psl, and Pel. These polysaccharides differ in chemical structure and in their biosynthetic mechanisms. Since alginate is often associated with chronic pulmonary infections, its biosynthetic pathway is the best characterized. However, alginate is only produced by a subset of P. aeruginosa strains. Most environmental and other clinical isolates secrete either Pel or Psl. Little information is available on the biosynthesis of these polysaccharides. Here, we review the literature on the alginate biosynthetic pathway, with emphasis on recent findings describing the structure of alginate biosynthetic proteins. This information combined with the characterization of the domain architecture of proteins encoded on the Psl and Pel operons allowed us to make predictive models for the biosynthesis of these two polysaccharides. The results indicate that alginate and Pel share certain features, including some biosynthetic proteins with structurally or functionally similar properties. In contrast, Psl biosynthesis resembles the EPS/CPS capsular biosynthesis pathway of Escherichia coli, where the Psl pentameric subunits are assembled in association with an isoprenoid lipid carrier. These models and the environmental cues that cause the cells to produce predominantly one polysaccharide over the others are subjects of current investigation.",
"title": ""
},
{
"docid": "76c19c70f11244be16248a1b4de2355a",
"text": "We have recently witnessed the emerging of cloud computing on one hand and robotics platforms on the other hand. Naturally, these two visions have been merging to give birth to the Cloud Robotics paradigm in order to offer even more remote services. But such a vision is still in its infancy. Architectures and platforms are still to be defined to efficiently program robots so they can provide different services, in a standardized way masking their heterogeneity. This paper introduces Open Mobile Cloud Robotics Interface (OMCRI), a Robot-as-a-Service vision based platform, which offers a unified easy access to remote heterogeneous mobile robots. OMCRI encompasses an extension of the Open Cloud Computing Interface (OCCI) standard and a gateway hosting mobile robot resources. We then provide an implementation of OMCRI based on the open source model-driven Eclipse-based OCCIware tool chain and illustrates its use for three off-the-shelf mobile robots: Lego Mindstorm NXT, Turtlebot, and Parrot AR. Drone.",
"title": ""
},
{
"docid": "c5dfef21843d2cc1893ec1dc88787050",
"text": "Automatic synthesis of faces from visual attributes is an important problem in computer vision and has wide applications in law enforcement and entertainment. With the advent of deep generative convolutional neural networks (CNNs), attempts have been made to synthesize face images from attributes and text descriptions. In this paper, we take a different approach, where we formulate the original problem as a stage-wise learning problem. We first synthesize the facial sketch corresponding to the visual attributes and then we reconstruct the face image based on the synthesized sketch. The proposed Attribute2Sketch2Face framework, which is based on a combination of deep Conditional Variational Autoencoder (CVAE) and Generative Adversarial Networks (GANs), consists of three stages: (1) Synthesis of facial sketch from attributes using a CVAE architecture, (2) Enhancement of coarse sketches to produce sharper sketches using a GANbased framework, and (3) Synthesis of face from sketch using another GAN-based network. Extensive experiments and comparison with recent methods are performed to verify the effectiveness of the proposed attributebased three stage face synthesis method.",
"title": ""
},
{
"docid": "7f7e0c3982ca660f5b4f7f22584576a5",
"text": "Cooperation and competition (jointly called “coopetition”) are two modes of interactions among a set of concurrent topics on social media. How do topics cooperate or compete with each other to gain public attention? Which topics tend to cooperate or compete with one another? Who plays the key role in coopetition-related interactions? We answer these intricate questions by proposing a visual analytics system that facilitates the in-depth analysis of topic coopetition on social media. We model the complex interactions among topics as a combination of carry-over, coopetition recruitment, and coopetition distraction effects. This model provides a close functional approximation of the coopetition process by depicting how different groups of influential users (i.e., “topic leaders”) affect coopetition. We also design EvoRiver, a time-based visualization, that allows users to explore coopetition-related interactions and to detect dynamically evolving patterns, as well as their major causes. We test our model and demonstrate the usefulness of our system based on two Twitter data sets (social topics data and business topics data).",
"title": ""
},
{
"docid": "8ccb8ba140fedc1eba8e97f3b7721373",
"text": "This paper describes mathematical and software developments for a suite of programs for solving ordinary differential equations in Matlab.",
"title": ""
},
{
"docid": "fb80c27ab2615373a316605082adadbb",
"text": "The use of sparse representations in signal and image processing is gradually increasing in the past several years. Obtaining an overcomplete dictionary from a set of signals allows us to represent them as a sparse linear combination of dictionary atoms. Pursuit algorithms are then used for signal decomposition. A recent work introduced the K-SVD algorithm, which is a novel method for training overcomplete dictionaries that lead to sparse signal representation. In this work we propose a new method for compressing facial images, based on the K-SVD algorithm. We train K-SVD dictionaries for predefined image patches, and compress each new image according to these dictionaries. The encoding is based on sparse coding of each image patch using the relevant trained dictionary, and the decoding is a simple reconstruction of the patches by linear combination of atoms. An essential pre-process stage for this method is an image alignment procedure, where several facial features are detected and geometrically warped into a canonical spatial location. We present this new method, analyze its results and compare it to several competing compression techniques. 2008 Published by Elsevier Inc.",
"title": ""
},
{
"docid": "609fa8716f97a1d30683997d778e4279",
"text": "The role of behavior for the acquisition of sensory representations has been underestimated in the past. We study this question for the task of learning vergence eye movements allowing proper fixation of objects. We model the development of this skill with an artificial neural network based on reinforcement learning. A biologically plausible reward mechanism that is responsible for driving behavior and learning of the representation of disparity is proposed. The network learns to perform vergence eye movements between natural images of objects by receiving a reward whenever an object is fixated with both eyes. Disparity tuned neurons emerge robustly in the hidden layer during development. The characteristics of the cells' tuning curves depend strongly on the task: if mostly small vergence movements are to be performed, tuning curves become narrower at small disparities, as has been measured experimentally in barn owls. Extensive training to discriminate between small disparities leads to an effective enhancement of sensitivity of the tuning curves.",
"title": ""
},
{
"docid": "6c889bf25b3d4c1bd87b26c03c8b652c",
"text": "With popular microblogging services like Twitter, users are able to online share their real-time feelings in a more convenient way. The user generated data in Twitter is thus regarded as a resource providing individuals' spontaneous emotional information, and has attracted much attention of researchers. Prior work has measured the emotional expressions in users' tweets and then performed various analysis and learning. However, how to utilize those learned knowledge from the observed tweets and the context information to predict users' opinions toward specific topics they had not directly given yet, is a novel problem presenting both challenges and opportunities. In this paper, we mainly focus on solving this problem with a Social context and Topical context incorporated Matrix Factorization (ScTcMF) framework. The experimental results on a real-world Twitter data set show that this framework outperforms the state-of-the-art collaborative filtering methods, and demonstrate that both social context and topical context are effective in improving the user-topic opinion prediction performance.",
"title": ""
},
{
"docid": "c62742c65b105a83fa756af9b1a45a37",
"text": "This article treats numerical methods for tracking an implicitly defined path. The numerical precision required to successfully track such a path is difficult to predict a priori, and indeed, it may change dramatically through the course of the path. In current practice, one must either choose a conservatively large numerical precision at the outset or re-run paths multiple times in successively higher precision until success is achieved. To avoid unnecessary computational cost, it would be preferable to adaptively adjust the precision as the tracking proceeds in response to the local conditioning of the path. We present an algorithm that can be set to either reactively adjust precision in response to step failure or proactively set the precision using error estimates. We then test the relative merits of reactive and proactive adaptation on several examples arising as homotopies for solving systems of polynomial equations.",
"title": ""
},
{
"docid": "50b0ecff19de467ab8558134fb666a87",
"text": "Real-time video objects detection, tracking, and recognition are challenging issues due to the real-time processing requirements of the machine learning algorithms. In recent years, video processing is performed by deep learning (DL) based techniques that achieve higher accuracy but require higher computations cost. This paper presents a recent survey of the state-of-the-art DL platforms and architectures used for deep vision systems. It highlights the contributions and challenges from over numerous research studies. In particular, this paper first describes the architecture of various DL models such as AutoEncoders, deep Boltzmann machines, convolution neural networks, recurrent neural networks and deep residual learning. Next, deep real-time video objects detection, tracking and recognition studies are highlighted to illustrate the key trends in terms of cost of computation, number of layers and the accuracy of results. Finally, the paper discusses the challenges of applying DL for real-time video processing and draw some directions for the future of DL algorithms.",
"title": ""
},
{
"docid": "f6189455184135dfeff9cb2a85b9fef0",
"text": "Precise, successful in desire target, strong healthy and self loading image registration is critical task in the field of computer vision. The most require key steps of image alignment/ registration are: Feature matching, Feature detection, , derivation of transformation function based on corresponding features in images and reconstruction of images based on derived transformation function. This is also the aim of computer vision in many applications to achieve an optimal and accurate image, which depends on optimal features matching and detection. The investigation of this paper summarize the coincidence among five different methods for robust features/interest points (or landmarks) detector and indentify images which are (FAST), Speed Up Robust Features (SURF), (Eigen),( Harris) & Maximally Stable Extremal Regions ( MSER). This paper also focuses on the unique extraction from the images which can be used to perform good matching on different views of the images/objects/scenes.",
"title": ""
},
{
"docid": "e36e26f084c0f589e5d36bb2103106ff",
"text": "Supervised learning with large scale labelled datasets and deep layered models has caused a paradigm shift in diverse areas in learning and recognition. However, this approach still suffers from generalization issues under the presence of a domain shift between the training and the test data distribution. Since unsupervised domain adaptation algorithms directly address this domain shift problem between a labelled source dataset and an unlabelled target dataset, recent papers [11, 33] have shown promising results by fine-tuning the networks with domain adaptation loss functions which try to align the mismatch between the training and testing data distributions. Nevertheless, these recent deep learning based domain adaptation approaches still suffer from issues such as high sensitivity to the gradient reversal hyperparameters [11] and overfitting during the fine-tuning stage. In this paper, we propose a unified deep learning framework where the representation, cross domain transformation, and target label inference are all jointly optimized in an end-to-end fashion for unsupervised domain adaptation. Our experiments show that the proposed method significantly outperforms state-of-the-art algorithms in both object recognition and digit classification experiments by a large margin.",
"title": ""
},
{
"docid": "4fb93b393abac7cf7da9799a01fa9bab",
"text": "The goal of text summarization is to reduce the size of the text while preserving its important information and overall meaning. With the availability of internet, data is growing leaps and bounds and it is practically impossible summarizing all this data manually. Automatic summarization can be classified as extractive and abstractive summarization. For abstractive summarization we need to understand the meaning of the text and then create a shorter version which best expresses the meaning, While in extractive summarization we select sentences from given data itself which contains maximum information and fuse those sentences to create an extractive summary. In this paper we tested all possible combinations of seven features and then reported the best one for particular document. We analyzed the results for all 10 documents taken from DUC 2002 dataset using ROUGE evaluation matrices.",
"title": ""
},
{
"docid": "7aed9eeb7a8e922f5ffc0e920dbaeb1e",
"text": "In 3 prior meta-analyses, the relationship between the Big Five factors of personality and job criteria was investigated. However, these meta-analyses showed different findings. Furthermore, these reviews included studies carried out only in the United States and Canada. This study reports meta-analytic research on the same topic but with studies conducted in the European Community, which were not included in the prior reviews. The results indicate that Conscientiousness and Emotional Stability are valid predictors across job criteria and occupational groups. The remaining factors are valid only for some criteria and for some occupational groups. Extraversion was a predictor for 2 occupations, and Openness and Agreeableness were valid predictors of training proficiency. These findings are consistent with M.R. Barrick and M.K. Mount (1991) and L.M. Hough, N.K. Eaton, M.D. Dunnette, J.D. Kamp, and R.A. McCloy (1990). Implications of the results for future research and the practice of personnel selection are suggested.",
"title": ""
},
{
"docid": "d4075ad1c75e73c8e38bc139ecacac27",
"text": "Manifold bootstrapping is a new method for data-driven modeling of real-world, spatially-varying reflectance, based on the idea that reflectance over a given material sample forms a low-dimensional manifold. It provides a high-resolution result in both the spatial and angular domains by decomposing reflectance measurement into two lower-dimensional phases. The first acquires representatives of high angular dimension but sampled sparsely over the surface, while the second acquires keys of low angular dimension but sampled densely over the surface.\n We develop a hand-held, high-speed BRDF capturing device for phase one measurements. A condenser-based optical setup collects a dense hemisphere of rays emanating from a single point on the target sample as it is manually scanned over it, yielding 10 BRDF point measurements per second. Lighting directions from 6 LEDs are applied at each measurement; these are amplified to a full 4D BRDF using the general (NDF-tabulated) microfacet model. The second phase captures N=20-200 images of the entire sample from a fixed view and lit by a varying area source. We show that the resulting N-dimensional keys capture much of the distance information in the original BRDF space, so that they effectively discriminate among representatives, though they lack sufficient angular detail to reconstruct the SVBRDF by themselves. At each surface position, a local linear combination of a small number of neighboring representatives is computed to match each key, yielding a high-resolution SVBRDF. A quick capture session (10-20 minutes) on simple devices yields results showing sharp and anisotropic specularity and rich spatial detail.",
"title": ""
},
{
"docid": "ff0c99e547d41fbc71ba1d4ac4a17411",
"text": "Measuring similarities between unlabeled time series trajectories is an important problem in domains as diverse as medicine, astronomy, finance, and computer vision. It is often unclear what is the appropriate metric to use because of the complex nature of noise in the trajectories (e.g. different sampling rates or outliers). Domain experts typically hand-craft or manually select a specific metric, such as dynamic time warping (DTW), to apply on their data. In this paper, we propose Autowarp, an end-to-end algorithm that optimizes and learns a good metric given unlabeled trajectories. We define a flexible and differentiable family of warping metrics, which encompasses common metrics such as DTW, Euclidean, and edit distance. Autowarp then leverages the representation power of sequence autoencoders to optimize for a member of this warping distance family. The output is a metric which is easy to interpret and can be robustly learned from relatively few trajectories. In systematic experiments across different domains, we show that Autowarp often outperforms hand-crafted trajectory similarity metrics.",
"title": ""
},
{
"docid": "7f8211ed8d7c8145f370c46b5bba3ddb",
"text": "The adjectives of quantity (Q-adjectives) many, few, much and little stand out from other quantity expressions on account of their syntactic flexibility, occurring in positions that could be called quantificational (many students attended), predicative (John’s friends were many), attributive (the many students), differential (much more than a liter) and adverbial (slept too much). This broad distribution poses a challenge for the two leading theories of this class, which treat them as either quantifying determiners or predicates over individuals. This paper develops an analysis of Q-adjectives as gradable predicates of sets of degrees or (equivalently) gradable quantifiers over degrees. It is shown that this proposal allows a unified analysis of these items across the positions in which they occur, while also overcoming several issues facing competing accounts, among others the divergences between Q-adjectives and ‘ordinary’ adjectives, the operator-like behavior of few and little, and the use of much as a dummy element. Overall the findings point to the central role of degrees in the semantics of quantity.",
"title": ""
},
{
"docid": "83cea367e54cfe92718742cacbd61adf",
"text": "We present an analysis into the inner workings of Convolutional Neural Networks (CNNs) for processing text. CNNs used for computer vision can be interpreted by projecting filters into image space, but for discrete sequence inputs CNNs remain a mystery. We aim to understand the method by which the networks process and classify text. We examine common hypotheses to this problem: that filters, accompanied by global max-pooling, serve as ngram detectors. We show that filters may capture several different semantic classes of ngrams by using different activation patterns, and that global max-pooling induces behavior which separates important ngrams from the rest. Finally, we show practical use cases derived from our findings in the form of model interpretability (explaining a trained model by deriving a concrete identity for each filter, bridging the gap between visualization tools in vision tasks and NLP) and prediction interpretability (explaining predictions).",
"title": ""
},
{
"docid": "82bcf95fc94ba1369c6ec1c64f55b2ec",
"text": "In this paper, we are interested in modeling complex activities that occur in a typical household. We propose to use programs, i.e., sequences of atomic actions and interactions, as a high level representation of complex tasks. Programs are interesting because they provide a non-ambiguous representation of a task, and allow agents to execute them. However, nowadays, there is no database providing this type of information. Towards this goal, we first crowd-source programs for a variety of activities that happen in people's homes, via a game-like interface used for teaching kids how to code. Using the collected dataset, we show how we can learn to extract programs directly from natural language descriptions or from videos. We then implement the most common atomic (inter)actions in the Unity3D game engine, and use our programs to \"drive\" an artificial agent to execute tasks in a simulated household environment. Our VirtualHome simulator allows us to create a large activity video dataset with rich ground-truth, enabling training and testing of video understanding models. We further showcase examples of our agent performing tasks in our VirtualHome based on language descriptions.",
"title": ""
}
] |
scidocsrr
|
49d1792f781083373492a1b18b4396ff
|
From Contract Drafting to Software Specification: Linguistic Sources of Ambiguity
|
[
{
"docid": "c590b5f84b08720b36622a0256565613",
"text": "Attempto Controlled English (ACE) allows domain specialists to interactively formulate requirements specifications in domain concepts. ACE can be accurately and efficiently processed by a computer, but is expressive enough to allow natural usage. The Attempto system translates specification texts in ACE into discourse representation structures and optionally into Prolog. Translated specification texts are incrementally added to a knowledge base. This knowledge base can be queried in ACE for verification, and it can be executed for simulation, prototyping and validation of the specification.",
"title": ""
}
] |
[
{
"docid": "4b9695da76b4ab77139549a4b444dae7",
"text": "Wireless Sensor Network (WSN) is one of the key technologies of 21st century, while it is a very active and challenging research area. It seems that in the next coming year, thanks to 6LoWPAN, these wireless micro-sensors will be embedded in everywhere, because 6LoWPAN enables P2P connection between wireless nodes over IPv6. Nowadays different implementations of 6LoWPAN stacks are available so it is interesting to evaluate their performance in term of memory footprint and compliant with the RFC4919 and RFC4944. In this paper, we present a survey on the state-of-art of the current implementation of 6LoWPAN stacks such as uIP/Contiki, SICSlowpan, 6lowpancli, B6LoWPAN, BLIP, NanoStack and Jennic's stack. The key features of all these 6LoWPAN stacks will be established. Finally, we discuss the evolution of the current implementations of 6LoWPAN stacks.",
"title": ""
},
{
"docid": "fcaf6f7a675cae064d6cd23e291560d1",
"text": "Many novice programmers view programming tools as all-knowing, infallible authorities about what is right and wrong about code. This misconception is particularly detrimental to beginners, who may view the cold, terse, and often judgmental errors from compilers as a sign of personal failure. It is possible, however, that attributing this failure to the computer, rather than the learner, may improve learners' motivation to program. To test this hypothesis, we present Gidget, a game where the eponymous robot protagonist is cast as a fallible character that blames itself for not being able to correctly write code to complete its missions. Players learn programming by working with Gidget to debug its problematic code. In a two-condition controlled experiment, we manipulated Gidget's level of personification in: communication style, sound effects, and image. We tested our game with 116 self-described novice programmers recruited on Amazon's Mechanical Turk and found that, when given the option to quit at any time, those in the experimental condition (with a personable Gidget) completed significantly more levels in a similar amount of time. Participants in the control and experimental groups played the game for an average time of 39.4 minutes (SD=34.3) and 50.1 minutes (SD=42.6) respectively. These finding suggest that how programming tool feedback is portrayed to learners can have a significant impact on motivation to program and learning success.",
"title": ""
},
{
"docid": "0c5f30cd0e072309b13cc6c43bb12647",
"text": "In this paper, we compare the performance of different approaches to predicting delays in air traffic networks. We consider three classes of models: A recently-developed aggregate model of the delay network dynamics, which we will refer to as the Markov Jump Linear System (MJLS), classical machine learning techniques like Classification and Regression Trees (CART), and three candidate Artificial Neural Network (ANN) architectures. We show that prediction performance can vary significantly depending on the choice of model/algorithm, and the type of prediction (for example, classification vs. regression). We also discuss the importance of selecting the right predictor variables, or features, in order to improve the performance of these algorithms. The models are evaluated using operational data from the National Airspace System (NAS) of the United States. The ANN is shown to be a good algorithm for the classification problem, where it attains an average accuracy of nearly 94% in predicting whether or not delays on the 100 most-delayed links will exceed 60 min, looking two hours into the future. The MJLS model, however, is better at predicting the actual delay levels on different links, and has a mean prediction error of 4.7 min for the regression problem, for a 2 hr horizon. MJLS is also better at predicting outbound delays at the 30 major airports, with a mean error of 6.8 min, for a 2 hr prediction horizon. The effect of temporal factors, and the spatial distribution of current delays, in predicting future delays are also compared. The MJLS model, which is specifically designed to capture aggregate air traffic dynamics, leverages on these factors and outperforms the ANN in predicting the future spatial distribution of delays. In this manner, a tradeoff between model simplicity and prediction accuracy is revealed. Keywordsdelay prediction; network delays; machine learning; artificial neural networks; data mining",
"title": ""
},
{
"docid": "c23008c36f0bca7a1faf405c5f3083ff",
"text": "The number of scholarly documents available on the web is estimated using capture/recapture methods by studying the coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. Our estimates show that at least 114 million English-language scholarly documents are accessible on the web, of which Google Scholar has nearly 100 million. Of these, we estimate that at least 27 million (24%) are freely available since they do not require a subscription or payment of any kind. In addition, at a finer scale, we also estimate the number of scholarly documents on the web for fifteen fields: Agricultural Science, Arts and Humanities, Biology, Chemistry, Computer Science, Economics and Business, Engineering, Environmental Sciences, Geosciences, Material Science, Mathematics, Medicine, Physics, Social Sciences, and Multidisciplinary, as defined by Microsoft Academic Search. In addition, we show that among these fields the percentage of documents defined as freely available varies significantly, i.e., from 12 to 50%.",
"title": ""
},
{
"docid": "0a7a2cfe41f1a04982034ef9cb42c3d4",
"text": "The biocontrol agent Torymus sinensis has been released into Japan, the USA, and Europe to suppress the Asian chestnut gall wasp, Dryocosmus kuriphilus. In this study, we provide a quantitative assessment of T. sinensis effectiveness for suppressing gall wasp infestations in Northwest Italy by annually evaluating the percentage of chestnuts infested by D. kuriphilus (infestation rate) and the number of T. sinensis adults that emerged per 100 galls (emergence index) over a 9-year period. We recorded the number of T. sinensis adults emerging from a total of 64,000 galls collected from 23 sampling sites. We found that T. sinensis strongly reduced the D. kuriphilus population, as demonstrated by reduced galls and an increased T. sinensis emergence index. Specifically, in Northwest Italy, the infestation rate was nearly zero 9 years after release of the parasitoid with no evidence of resurgence in infestation levels. In 2012, the number of T. sinensis females emerging per 100 galls was approximately 20 times higher than in 2009. Overall, T. sinensis proved to be an outstanding biocontrol agent, and its success highlights how the classical biological control approach may represent a cost-effective tool for managing an exotic invasive pest.",
"title": ""
},
{
"docid": "2b12ec1371ac37cc313a5f9a6ac53e31",
"text": "How many labeled examples are needed to estimate a classifier's performance on a new dataset? We study the case where data is plentiful, but labels are expensive. We show that by making a few reasonable assumptions on the structure of the data, it is possible to estimate performance curves, with confidence bounds, using a small number of ground truth labels. Our approach, which we call Semi supervised Performance Evaluation (SPE), is based on a generative model for the classifier's confidence scores. In addition to estimating the performance of classifiers on new datasets, SPE can be used to recalibrate a classifier by re-estimating the class-conditional confidence distributions.",
"title": ""
},
{
"docid": "7957ba93e63f753336281fcb31e35cab",
"text": "This paper proposed a method that combines Polar Fourier Transform, color moments, and vein features to retrieve leaf images based on a leaf image. The method is very useful to help people in recognizing foliage plants. Foliage plants are plants that have various colors and unique patterns in the leaf. Therefore, the colors and its patterns are information that should be counted on in the processing of plant identification. To compare the performance of retrieving system to other result, the experiments used Flavia dataset, which is very popular in recognizing plants. The result shows that the method gave better performance than PNN, SVM, and Fourier Transform. The method was also tested using foliage plants with various colors. The accuracy was 90.80% for 50 kinds of plants.",
"title": ""
},
{
"docid": "b59e332c086a8ce6d6ddc0526b8848c7",
"text": "We propose Generative Adversarial Tree Search (GATS), a sample-efficient Deep Reinforcement Learning (DRL) algorithm. While Monte Carlo Tree Search (MCTS) is known to be effective for search and planning in RL, it is often sampleinefficient and therefore expensive to apply in practice. In this work, we develop a Generative Adversarial Network (GAN) architecture to model an environment’s dynamics and a predictor model for the reward function. We exploit collected data from interaction with the environment to learn these models, which we then use for model-based planning. During planning, we deploy a finite depth MCTS, using the learned model for tree search and a learned Q-value for the leaves, to find the best action. We theoretically show that GATS improves the bias-variance tradeoff in value-based DRL. Moreover, we show that the generative model learns the model dynamics using orders of magnitude fewer samples than the Q-learner. In non-stationary settings where the environment model changes, we find the generative model adapts significantly faster than the Q-learner to the new environment.",
"title": ""
},
{
"docid": "52b1adf3b7b6bf08651c140d726143c3",
"text": "The antifungal potential of aqueous leaf and fruit extracts of Capsicum frutescens against four major fungal strains associated with groundnut storage was evaluated. These seed-borne fungi, namely Aspergillus flavus, A. niger, Penicillium sp. and Rhizopus sp. were isolated by standard agar plate method and identified by macroscopic and microscopic features. The minimum inhibitory concentrations (MIC) and minimum fungicidal concentration (MFC) of C. frutescens extracts were determined. MIC values of the fruit extract were lower compared to the leaf extract. At MIC, leaf extract showed strong activity against A. flavus (88.06%), while fruit extract against A. niger (88.33%) in the well diffusion method. Groundnut seeds treated with C.frutescens fruit extract (10mg/ml) showed a higher rate of fungal inhibition. The present results suggest that groundnuts treated with C. frutescens fruit extracts are capable of preventing fungal infection to a certain extent.",
"title": ""
},
{
"docid": "6b527c906789f6e32cd5c28f684d9cc8",
"text": "This paper addresses an essential application of microkernels; its role in virtualization for embedded systems. Virtualization in embedded systems and microkernel-based virtualization are topics of intensive research today. As embedded systems specifically mobile phones are evolving to do everything that a PC does, employing virtualization in this case is another step to make this vision a reality. Hence, recently, much time and research effort have been employed to validate ways to host virtualization on embedded system processors i.e., the ARM processors. This paper reviews the research work that have had significant impact on the implementation approaches of virtualization in embedded systems and how these approaches additionally provide security features that are beneficial to equipment manufacturers, carrier service providers and end users.",
"title": ""
},
{
"docid": "3bee9a2d5f9e328bb07c3c76c80612fa",
"text": "In this paper, we construct a complexity-based morphospace wherein one can study systems-level properties of conscious and intelligent systems based on information-theoretic measures. The axes of this space labels three distinct complexity types, necessary to classify conscious machines, namely, autonomous, cognitive and social complexity. In particular, we use this morphospace to compare biologically conscious agents ranging from bacteria, bees, C. elegans, primates and humans with artificially intelligence systems such as deep networks, multi-agent systems, social robots, AI applications such as Siri and computational systems as Watson. Given recent proposals to synthesize consciousness, a generic complexitybased conceptualization provides a useful framework for identifying defining features of distinct classes of conscious and synthetic systems. Based on current clinical scales of consciousness that measure cognitive awareness and wakefulness, this article takes a perspective on how contemporary artificially intelligent machines and synthetically engineered life forms would measure on these scales. It turns out that awareness and wakefulness can be associated to computational and autonomous complexity respectively. Subsequently, building on insights from cognitive robotics, we examine the function that consciousness serves, and argue the role of consciousness as an evolutionary game-theoretic strategy. This makes the case for a third type of complexity necessary for describing consciousness, namely, social complexity. Having identified these complexity types, allows for a representation of both, biological and synthetic systems in a common morphospace. A consequence of this classification is a taxonomy of possible conscious machines. In particular, we identify four types of consciousness, based on embodiment: (i) biological consciousness, (ii) synthetic consciousness, (iii) group consciousness (resulting from group interactions), and (iv) simulated consciousness (embodied by virtual agents within a simulated reality). This taxonomy helps in the investigation of comparative signatures of consciousness across domains, in order to highlight design principles necessary to engineer conscious machines. This is particularly relevant in the light of recent developments at the ar X iv :1 70 5. 11 19 0v 3 [ qbi o. N C ] 2 4 N ov 2 01 8 The Morphospace of Consciousness 2 crossroads of cognitive neuroscience, biomedical engineering, artificial intelligence and biomimetics.",
"title": ""
},
{
"docid": "3f9bb5e1b9b6d4d44cb9741a32f7325f",
"text": "Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end-to-end EEG analysis, but a better understanding of how to design and train ConvNets for end-to-end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task-related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG-based brain mapping. Hum Brain Mapp 38:5391-5420, 2017. © 2017 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "efcf84406a2218deeb4ca33cb8574172",
"text": "Cross-site scripting attacks represent one of the major security threats in today’s Web applications. Current approaches to mitigate cross-site scripting vulnerabilities rely on either server-based or client-based defense mechanisms. Although effective for many attacks, server-side protection mechanisms may leave the client vulnerable if the server is not well patched. On the other hand, client-based mechanisms may incur a significant overhead on the client system. In this work, we present a hybrid client-server solution that combines the benefits of both architectures. Our Proxy-based solution leverages the strengths of both anomaly detection and control flow analysis to provide accurate detection. We demonstrate the feasibility and accuracy of our approach through extended testing using real-world cross-site scripting exploits.",
"title": ""
},
{
"docid": "184319fbdee41de23718bb0831c53472",
"text": "Localization is a prominent application and research area in Wireless Sensor Networks. Various research studies have been carried out on localization techniques and algorithms in order to improve localization accuracy. Received signal strength indicator is a parameter, which has been widely used in localization algorithms in many research studies. There are several environmental and other factors that affect the localization accuracy and reliability. This study introduces a new technique to increase the localization accuracy by employing a dynamic distance reference anchor method. In order to investigate the performance improvement obtained with the proposed technique, simulation models have been developed, and results have been analyzed. The simulation results show that considerable improvement in localization accuracy can be achieved with the proposed model.",
"title": ""
},
{
"docid": "3deb967a4e683b4a38b9143b105a5f2a",
"text": "BACKGROUND\nThe Brief Obsessive Compulsive Scale (BOCS), derived from the Yale-Brown Obsessive-Compulsive Scale (Y-BOCS) and the children's version (CY-BOCS), is a short self-report tool used to aid in the assessment of obsessive-compulsive symptoms and diagnosis of obsessive-compulsive disorder (OCD). It is widely used throughout child, adolescent and adult psychiatry settings in Sweden but has not been validated up to date.\n\n\nAIM\nThe aim of the current study was to examine the psychometric properties of the BOCS amongst a psychiatric outpatient population.\n\n\nMETHOD\nThe BOCS consists of a 15-item Symptom Checklist including three items (hoarding, dysmorphophobia and self-harm) related to the DSM-5 category \"Obsessive-compulsive related disorders\", accompanied by a single six-item Severity Scale for obsessions and compulsions combined. It encompasses the revisions made in the Y-BOCS-II severity scale by including obsessive-compulsive free intervals, extent of avoidance and excluding the resistance item. 402 adult psychiatric outpatients with OCD, attention-deficit/hyperactivity disorder, autism spectrum disorder and other psychiatric disorders completed the BOCS.\n\n\nRESULTS\nPrincipal component factor analysis produced five subscales titled \"Symmetry\", \"Forbidden thoughts\", \"Contamination\", \"Magical thoughts\" and \"Dysmorphic thoughts\". The OCD group scored higher than the other diagnostic groups in all subscales (P < 0.001). Sensitivities, specificities and internal consistency for both the Symptom Checklist and the Severity Scale emerged high (Symptom Checklist: sensitivity = 85%, specificities = 62-70% Cronbach's α = 0.81; Severity Scale: sensitivity = 72%, specificities = 75-84%, Cronbach's α = 0.94).\n\n\nCONCLUSIONS\nThe BOCS has the ability to discriminate OCD from other non-OCD related psychiatric disorders. The current study provides strong support for the utility of the BOCS in the assessment of obsessive-compulsive symptoms in clinical psychiatry.",
"title": ""
},
{
"docid": "6d5480bf1ee5d401e39f5e65d0aaba25",
"text": "Engagement is a key reason for introducing gamification to learning and thus serves as an important measurement of its effectiveness. Based on a literature review and meta-synthesis, this paper proposes a comprehensive framework of engagement in gamification for learning. The framework sketches out the connections among gamification strategies, dimensions of engagement, and the ultimate learning outcome. It also elicits other task - and user - related factors that may potentially impact the effect of gamification on learner engagement. To verify and further strengthen the framework, we conducted a user study to demonstrate that: 1) different gamification strategies can trigger different facets of engagement; 2) the three dimensions of engagement have varying effects on skill acquisition and transfer; and 3) task nature and learner characteristics that were overlooked in previous studies can influence the engagement process. Our framework provides an in-depth understanding of the mechanism of gamification for learning, and can serve as a theoretical foundation for future research and design.",
"title": ""
},
{
"docid": "bbbbe3f926de28d04328f1de9bf39d1a",
"text": "The detection of fraudulent financial statements (FFS) is an important and challenging issue that has served as the impetus for many academic studies over the past three decades. Although nonfinancial ratios are generally acknowledged as the key factor contributing to the FFS of a corporation, they are usually excluded from early detection models. The objective of this study is to increase the accuracy of FFS detection by integrating the rough set theory (RST) and support vector machines (SVM) approaches, while adopting both financial and nonfinancial ratios as predictive variables. The results showed that the proposed hybrid approach (RSTþSVM) has the best classification rate as well as the lowest occurrence of Types I and II errors, and that nonfinancial ratios are indeed valuable information in FFS detection.",
"title": ""
},
{
"docid": "6a455fd9c86feb287a3c5a103bb681de",
"text": "This paper presents two approaches to semantic search by incorporating Linked Data annotations of documents into a Generalized Vector Space Model. One model exploits taxonomic relationships among entities in documents and queries, while the other model computes term weights based on semantic relationships within a document. We publish an evaluation dataset with annotated documents and queries as well as user-rated relevance assessments. The evaluation on this dataset shows significant improvements of both models over traditional keyword based search.",
"title": ""
},
{
"docid": "ba16a6634b415dd2c478c83e1f65cb3c",
"text": "Reasoning and inference are central to human and artificial intelligence. Modeling inference in human language is notoriously challenging but is fundamental to natural language understanding and many applications. With the availability of large annotated data, neural network models have recently advanced the field significantly. In this paper, we present a new state-of-the-art result, achieving the accuracy of 88.3% on the standard benchmark, the Stanford Natural Language Inference dataset. This result is achieved first through our enhanced sequential encoding model, which outperforms the previous best model that employs more complicated network architectures, suggesting that the potential of sequential LSTM-based models have not been fully explored yet in previous work. We further show that by explicitly considering recursive architectures, we achieve additional improvement. Particularly, incorporating syntactic parse information contributes to our best result; it improves the performance even when the parse information is added to an already very strong system.",
"title": ""
},
{
"docid": "985df151ccbc9bf47b05cffde47a6342",
"text": "This paper establishes the criteria to ensure stable operation of two-stage, bidirectional, isolated AC-DC converters. The bi-directional converter is analyzed in the context of a building block module (BBM) that enables a fully modular architecture for universal power flow conversion applications (AC-DC, DC-AC and DC-DC). The BBM consists of independently controlled AC-DC and isolated DC-DC converters that are cascaded for bidirectional power flow applications. The cascaded converters have different control objectives in different directions of power flow. This paper discusses methods to obtain the appropriate input and output impedances that determine stability in the context of bi-directional AC-DC power conversion. Design procedures to ensure stable operation with minimal interaction between the cascaded stages are presented. The analysis and design methods are validated through extensive simulation and hardware results.",
"title": ""
}
] |
scidocsrr
|
77f2c45198e80b283c793847b1d043dd
|
Cross-language Learning with Adversarial Neural Networks: Application to Community Question Answering
|
[
{
"docid": "b6d8ba656a85955be9b4f34b07f54987",
"text": "In real-world data, e.g., from Web forums, text is often contaminated with redundant or irrelevant content, which leads to introducing noise in machine learning algorithms. In this paper, we apply Long Short-Term Memory networks with an attention mechanism, which can select important parts of text for the task of similar question retrieval from community Question Answering (cQA) forums. In particular, we use the attention weights for both selecting entire sentences and their subparts, i.e., word/chunk, from shallow syntactic trees. More interestingly, we apply tree kernels to the filtered text representations, thus exploiting the implicit features of the subtree space for learning question reranking. Our results show that the attention-based pruning allows for achieving the top position in the cQA challenge of SemEval 2016, with a relatively large gap from the other participants while greatly decreasing running time.",
"title": ""
}
] |
[
{
"docid": "ba5a7cd34ee251b2499031c91c3aa707",
"text": "Many patients with end-stage cardiomyopathy are now being implanted with Total Artificial Hearts (TAHs). We have observed individual cases of post-operative mechanical ventilator autocycling with a flow trigger, and subsequent loss of autocycling after switching to a pressure trigger. These observations prompted us to do a retrospective review of all TAH devices placed at our institution between August 2007 and May 2009. We found that in the immediate post-operative period following TAH placement, autocycling was present in 50% (5/10) of cases. There was immediate cessation of autocycling in all patients after being changed from a flow trigger of 2 L/minute to a pressure trigger of 2 cm H2O. The autocycling group was found to have significantly higher CVP values than the non-autocycling group (P = 0.012). Our data suggest that mechanical ventilator autocycling may be resolved or prevented by the use of a pressure trigger rather than a flow trigger setting in patients with TAHs who require mechanical ventilation.",
"title": ""
},
{
"docid": "1c960375b6cdebfbd65ea0124dcdce0f",
"text": "Parameterized unit tests extend the current industry practice of using closed unit tests defined as parameterless methods. Parameterized unit tests separate two concerns: 1) They specify the external behavior of the involved methods for all test arguments. 2) Test cases can be re-obtained as traditional closed unit tests by instantiating the parameterized unit tests. Symbolic execution and constraint solving can be used to automatically choose a minimal set of inputs that exercise a parameterized unit test with respect to possible code paths of the implementation. In addition, parameterized unit tests can be used as symbolic summaries which allows symbolic execution to scale for arbitrary abstraction levels. We have developed a prototype tool which computes test cases from parameterized unit tests. We report on its first use testing parts of the .NET base class library.",
"title": ""
},
{
"docid": "c96fa07ef9860880d391a750826f5faf",
"text": "This paper presents the investigations of short-circuit current, electromagnetic force, and transient dynamic response of windings deformation including mechanical stress, strain, and displacements for an oil-immersed-type 220-kV power transformer. The worst-case fault with three-phase short-circuit happening simultaneously is assumed. A considerable leakage magnetic field excited by short-circuit current can produce the dynamical electromagnetic force to act on copper disks in each winding. The two-dimensional finite element method (FEM) is employed to obtain the electromagnetic force and its dynamical characteristics in axial and radial directions. In addition, to calculate the windings deformation accurately, we measured the nonlinear elasticity characteristic of spacer and built three-dimensional FE kinetic model to analyze the axial dynamic deformation. The results of dynamic mechanical stress and strain induced by combining of short-circuit force and prestress are useful for transformer design and fault diagnosis.",
"title": ""
},
{
"docid": "2eabe3d3edbc9b57b1a13c41688b9d68",
"text": "This paper presents a design method of on-chip patch antenna integration in a standard CMOS technology without post processing. A 60 GHz on-chip patch antenna is designed utilizing the top metal layer and an intermediate metal layer as the patch and ground plane, respectively. Interference between the patch and digital baseband circuits located beneath the ground plane is analyzed. The 60 GHz on-chip antenna occupies an area of 1220 µm by 1580 µm with carefully placed fillers and slots to meet the design rules of the CMOS process. The antenna is centered at 60.51 GHz with 810 MHz bandwidth. The peak gain and radiation efficiency are −3.32 dBi and 15.87%, respectively. Analysis for mutual signal coupling between the antenna and the clock H-tree beneath the ground plane is reported, showing a −61 dB coupling from the antenna to the H-tree and a −95 dB coupling of 2 GHz clock signal from the H-tree to the antenna.",
"title": ""
},
{
"docid": "4ff5953f4c81a6c77f46c66763d791dc",
"text": "We propose a system that finds text in natural scenes using a variety of cues. Our novel data-driven method incorporates coarse-to-fine detection of character pixels using convolutional features (Text-Conv), followed by extracting connected components (CCs) from characters using edge and color features, and finally performing a graph-based segmentation of CCs into words (Word-Graph). For Text-Conv, the initial detection is based on convolutional feature maps similar to those used in Convolutional Neural Networks (CNNs), but learned using Convolutional k-means. Convolution masks defined by local and neighboring patch features are used to improve detection accuracy. The Word-Graph algorithm uses contextual information to both improve word segmentation and prune false character/word detections. Different definitions for foreground (text) regions are used to train the detection stages, some based on bounding box intersection, and others on bounding box and pixel intersection. Our system obtains pixel, character, and word detection f-measures of 93.14%, 90.26%, and 86.77% respectively for the ICDAR 2015 Robust Reading Focused Scene Text dataset, out-performing state-of-the-art systems. This approach may work for other detection targets with homogenous color in natural scenes.",
"title": ""
},
{
"docid": "38d957350e02bc714696e6068e38cdc7",
"text": "Laparoscopic Roux-en-Y gastric bypass (LRYGB) has become the gold standard for surgical weight loss. The success of LRYGB may be measured by excess body mass index loss (%EBMIL) over 25 kg/m2, which is partially determined by multiple patient factors. In this study, artificial neural network (ANN) modeling was used to derive a reasonable estimate of expected postoperative weight loss using only known preoperative patient variables. Additionally, ANN modeling allowed for the discriminant prediction of achievement of benchmark 50 % EBMIL at 1 year postoperatively. Six hundred and forty-seven LRYGB included patients were retrospectively reviewed for preoperative factors independently associated with EBMIL at 180 and 365 days postoperatively (EBMIL180 and EBMIL365, respectively). Previously validated factors were selectively analyzed, including age; race; gender; preoperative BMI (BMI0); hemoglobin; and diagnoses of hypertension (HTN), diabetes mellitus (DM), and depression or anxiety disorder. Variables significant upon multivariate analysis (P < .05) were modeled by “traditional” multiple linear regression and an ANN, to predict %EBMIL180 and %EBMIL365. The mean EBMIL180 and EBMIL365 were 56.4 ± 16.5 % and 73.5 ± 21.5 %, corresponding to total body weight losses of 25.7 ± 5.9 % and 33.6 ± 8.0 %, respectively. Upon multivariate analysis, independent factors associated with EBMIL180 included black race (B = −6.3 %, P < .001), BMI0 (B = −1.1 %/unit BMI, P < .001), and DM (B = −3.2 %, P < .004). For EBMIL365, independently associated factors were female gender (B = 6.4 %, P < .001), black race (B = −6.7 %, P < .001), BMI0 (B = −1.2 %/unit BMI, P < .001), HTN (B = −3.7 %, P = .03), and DM (B = −6.0 %, P < .001). Pearson r 2 values for the multiple linear regression and ANN models were 0.38 (EBMIL180) and 0.35 (EBMIL365), and 0.42 (EBMIL180) and 0.38 (EBMIL365), respectively. ANN prediction of benchmark 50 % EBMIL at 365 days generated an area under the curve of 0.78 ± 0.03 in the training set (n = 518) and 0.83 ± 0.04 (n = 129) in the validation set. Available at https://redcap.vanderbilt.edu/surveys/?s=3HCR43AKXR , this or other ANN models may be used to provide an optimized estimate of postoperative EBMIL following LRYGB.",
"title": ""
},
{
"docid": "d735cfbf58094aac2fe0a324491fdfe7",
"text": "We present AutoExtend, a system to learn embeddings for synsets and lexemes. It is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The synset/lexeme embeddings obtained live in the same vector space as the word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet as a lexical resource, but AutoExtend can be easily applied to other resources like Freebase. AutoExtend achieves state-of-the-art performance on word similarity and word sense disambiguation tasks.",
"title": ""
},
{
"docid": "7d391483dfe60f4ad60735264a0b7ab2",
"text": "The growing interest and the market for indoor Location Based Service (LBS) have been drivers for a huge demand for building data and reconstructing and updating of indoor maps in recent years. The traditional static surveying and mapping methods can't meet the requirements for accuracy, efficiency and productivity in a complicated indoor environment. Utilizing a Simultaneous Localization and Mapping (SLAM)-based mapping system with ranging and/or camera sensors providing point cloud data for the maps is an auspicious alternative to solve such challenges. There are various kinds of implementations with different sensors, for instance LiDAR, depth cameras, event cameras, etc. Due to the different budgets, the hardware investments and the accuracy requirements of indoor maps are diverse. However, limited studies on evaluation of these mapping systems are available to offer a guideline of appropriate hardware selection. In this paper we try to characterize them and provide some extensive references for SLAM or mapping system selection for different applications. Two different indoor scenes (a L shaped corridor and an open style library) were selected to review and compare three different mapping systems, namely: (1) a commercial Matterport system equipped with depth cameras; (2) SLAMMER: a high accuracy small footprint LiDAR with a fusion of hector-slam and graph-slam approaches; and (3) NAVIS: a low-cost large footprint LiDAR with Improved Maximum Likelihood Estimation (IMLE) algorithm developed by the Finnish Geospatial Research Institute (FGI). Firstly, an L shaped corridor (2nd floor of FGI) with approximately 80 m length was selected as the testing field for Matterport testing. Due to the lack of quantitative evaluation of Matterport indoor mapping performance, we attempted to characterize the pros and cons of the system by carrying out six field tests with different settings. The results showed that the mapping trajectory would influence the final mapping results and therefore, there was optimal Matterport configuration for better indoor mapping results. Secondly, a medium-size indoor environment (the FGI open library) was selected for evaluation of the mapping accuracy of these three indoor mapping technologies: SLAMMER, NAVIS and Matterport. Indoor referenced maps were collected with a small footprint Terrestrial Laser Scanner (TLS) and using spherical registration targets. The 2D indoor maps generated by these three mapping technologies were assessed by comparing them with the reference 2D map for accuracy evaluation; two feature selection methods were also utilized for the evaluation: interactive selection and minimum bounding rectangles (MBRs) selection. The mapping RMS errors of SLAMMER, NAVIS and Matterport were 2.0 cm, 3.9 cm and 4.4 cm, respectively, for the interactively selected features, and the corresponding values using MBR features were 1.7 cm, 3.2 cm and 4.7 cm. The corresponding detection rates for the feature points were 100%, 98.9%, 92.3% for the interactive selected features and 100%, 97.3% and 94.7% for the automated processing. The results indicated that the accuracy of all the evaluated systems could generate indoor map at centimeter-level, but also variation of the density and quality of collected point clouds determined the applicability of a system into a specific LBS.",
"title": ""
},
{
"docid": "008b5ae7c256a52853fcdbd413931829",
"text": "We present applications of rough set methods for feature selection in pattern recognition. We emphasize the role of the basic constructs of rough set approach in feature selection, namely reducts and their approximations, including dynamic reducts. In the overview of methods for feature selection we discuss feature selection criteria, including the rough set based methods. Our algorithm for feature selection is based on an application of a rough set method to the result of principal components analysis (PCA) used for feature projection and reduction. Finally, the paper presents numerical results of face and mammogram recognition experiments using neural network, with feature selection based on proposed PCA and rough set methods. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "5e990bb6cd0949c1e293f6856ae1edb3",
"text": "Atherosclerosis, the leading cause of death in the developed world and nearly the leading cause in the developing world, is associated with systemic risk factors including hypertension, smoking, hyperlipidemia, and diabetes mellitus, among others. Nonetheless, atherosclerosis remains a geometrically focal disease, preferentially affecting the outer edges of vessel bifurcations. In these predisposed areas, hemodynamic shear stress, the frictional force acting on the endothelial cell surface as a result of blood flow, is weaker than in protected regions. Studies have identified hemodynamic shear stress as an important determinant of endothelial function and phenotype. Arterial-level shear stress (>15 dyne/cm2) induces endothelial quiescence and an atheroprotective gene expression profile, while low shear stress (<4 dyne/cm2), which is prevalent at atherosclerosis-prone sites, stimulates an atherogenic phenotype. The functional regulation of the endothelium by local hemodynamic shear stress provides a model for understanding the focal propensity of atherosclerosis in the setting of systemic factors and may help guide future therapeutic strategies.",
"title": ""
},
{
"docid": "e19b68314e61f96dea0d7d98f80ca19b",
"text": "With growing interest in adversarial machine learning, it is important for practitioners and users of machine learning to understand how their models may be attacked. We present a web-based visualization tool, ADVERSARIALPLAYGROUND, to demonstrate the efficacy of common adversarial methods against a convolutional neural network. ADVERSARIAL-PLAYGROUND provides users an efficient and effective experience in exploring algorithms for generating adversarial examples — samples crafted by an adversary to fool a machine learning system. To enable fast and accurate responses to users, our webapp employs two key features: (1) We split the visualization and evasive sample generation duties between client and server while minimizing the transferred data. (2) We introduce a variant of the Jacobian Saliency Map Approach that is faster and yet maintains a comparable evasion rate 1.",
"title": ""
},
{
"docid": "812e21d0ee5db1499fea91b8be42cd0a",
"text": "Ciphertext-policy attribute-based encryption (CP-ABE) has been proposed to enable fine-grained access control on encrypted data for cloud storage service. In the context of CP-ABE, since the decryption privilege is shared by multiple users who have the same attributes, it is difficult to identify the original key owner when given an exposed key. This leaves the malicious cloud users a chance to leak their access credentials to outsourced data in clouds for profits without the risk of being caught, which severely damages data security. To address this problem, we add the property of traceability to the conventional CP-ABE. To catch people leaking their access credentials to outsourced data in clouds for profits effectively, in this paper, we first propose two kinds of non-interactive commitments for traitor tracing. Then we present a fully secure traceable CP-ABE system for cloud storage service from the proposed commitment. Our proposed commitments for traitor tracing may be of independent interest, as they are both pairing-friendly and homomorphic. We also provide extensive experimental results to confirm the feasibility and efficiency of the proposed solution.",
"title": ""
},
{
"docid": "57c0f9c629e4fdcbb0a4ca2d4f93322f",
"text": "Chronic exertional compartment syndrome and medial tibial stress syndrome are uncommon conditions that affect long-distance runners or players involved in team sports that require extensive running. We report 2 cases of bilateral chronic exertional compartment syndrome, with medial tibial stress syndrome in identical twins diagnosed with the use of a Kodiag monitor (B. Braun Medical, Sheffield, United Kingdom) fulfilling the modified diagnostic criteria for chronic exertional compartment syndrome as described by Pedowitz et al, which includes: (1) pre-exercise compartment pressure level >15 mm Hg; (2) 1 minute post-exercise pressure >30 mm Hg; and (3) 5 minutes post-exercise pressure >20 mm Hg in the presence of clinical features. Both patients were treated with bilateral anterior fasciotomies through minimal incision and deep posterior fasciotomies with tibial periosteal stripping performed through longer anteromedial incisions under direct vision followed by intensive physiotherapy resulting in complete symptomatic recovery. The etiology of chronic exertional compartment syndrome is not fully understood, but it is postulated abnormal increases in intramuscular pressure during exercise impair local perfusion, causing ischemic muscle pain. No familial predisposition has been reported to date. However, some authors have found that no significant difference exists in the relative perfusion, in patients, diagnosed with chronic exertional compartment syndrome. Magnetic resonance images of affected compartments have indicated that the pain is not due to ischemia, but rather from a disproportionate oxygen supply versus demand. We believe this is the first report of chronic exertional compartment syndrome with medial tibial stress syndrome in twins, raising the question of whether there is a genetic predisposition to the causation of these conditions.",
"title": ""
},
{
"docid": "427c5f5825ca06350986a311957c6322",
"text": "Machine learning based system are increasingly being used for sensitive tasks such as security surveillance, guiding autonomous vehicle, taking investment decisions, detecting and blocking network intrusion and malware etc. However, recent research has shown that machine learning models are venerable to attacks by adversaries at all phases of machine learning (e.g., training data collection, training, operation). All model classes of machine learning systems can be misled by providing carefully crafted inputs making them wrongly classify inputs. Maliciously created input samples can affect the learning process of a ML system by either slowing the learning process, or affecting the performance of the learned model or causing the system make error only in attacker’s planned scenario. Because of these developments, understanding security of machine learning algorithms and systems is emerging as an important research area among computer security and machine learning researchers and practitioners. We present a survey of this emerging area.",
"title": ""
},
{
"docid": "f2cdaf0198077253d9c0738cabab367a",
"text": "In this paper, we present an approach for automatically creating a Combinatory Categorial Grammar (CCG) treebank from a dependency treebank for the Subject-Object-Verb language Hindi. Rather than a direct conversion from dependency trees to CCG trees, we propose a two stage approach: a language independent generic algorithm first extracts a CCG lexicon from the dependency treebank. A deterministic CCG parser then creates a treebank of CCG derivations. We also discuss special cases of this generic algorithm to handle linguistic phenomena specific to Hindi. In doing so we extract different constructions with long-range dependencies like coordinate constructions and non-projective dependencies resulting from constructions like relative clauses, noun elaboration and verbal modifiers.",
"title": ""
},
{
"docid": "3a314a72ea2911844a5a3462d052f4e7",
"text": "While increasing income inequality in China has been commented on and studied extensively, relatively little analysis is available on inequality in other dimensions of human development. Using data from different sources, this paper presents some basic facts on the evolution of spatial inequalities in education and healthcare in China over the long run. In the era of economic reforms, as the foundations of education and healthcare provision have changed, so has the distribution of illiteracy and infant mortality. Across provinces and within provinces, between rural and urban areas and within rural and urban areas, social inequalities have increased substantially since the reforms began.",
"title": ""
},
{
"docid": "4a39ad1bac4327a70f077afa1d08c3f0",
"text": "Machine learning plays a role in many aspects of modern IR systems, and deep learning is applied in all of them. The fast pace of modern-day research has given rise to many approaches to many IR problems. The amount of information available can be overwhelming both for junior students and for experienced researchers looking for new research topics and directions. The aim of this full- day tutorial is to give a clear overview of current tried-and-trusted neural methods in IR and how they benefit IR.",
"title": ""
},
{
"docid": "fdc6de60d4564efc3b94b44873ecd179",
"text": "Fault detection and diagnosis is an important problem in process engineering. It is the central component of abnormal event management (AEM) which has attracted a lot of attention recently. AEM deals with the timely detection, diagnosis and correction of abnormal conditions of faults in a process. Early detection and diagnosis of process faults while the plant is still operating in a controllable region can help avoid abnormal event progression and reduce productivity loss. Since the petrochemical industries lose an estimated 20 billion dollars every year, they have rated AEM as their number one problem that needs to be solved. Hence, there is considerable interest in this field now from industrial practitioners as well as academic researchers, as opposed to a decade or so ago. There is an abundance of literature on process fault diagnosis ranging from analytical methods to artificial intelligence and statistical approaches. From a modelling perspective, there are methods that require accurate process models, semi-quantitative models, or qualitative models. At the other end of the spectrum, there are methods that do not assume any form of model information and rely only on historic process data. In addition, given the process knowledge, there are different search techniques that can be applied to perform diagnosis. Such a collection of bewildering array of methodologies and alternatives often poses a difficult challenge to any aspirant who is not a specialist in these techniques. Some of these ideas seem so far apart from one another that a non-expert researcher or practitioner is often left wondering about the suitability of a method for his or her diagnostic situation. While there have been some excellent reviews in this field in the past, they often focused on a particular branch, such as analytical models, of this broad discipline. The basic aim of this three part series of papers is to provide a systematic and comparative study of various diagnostic methods from different perspectives. We broadly classify fault diagnosis methods into three general categories and review them in three parts. They are quantitative model-based methods, qualitative model-based methods, and process history based methods. In the first part of the series, the problem of fault diagnosis is introduced and approaches based on quantitative models are reviewed. In the remaining two parts, methods based on qualitative models and process history data are reviewed. Furthermore, these disparate methods will be compared and evaluated based on a common set of criteria introduced in the first part of the series. We conclude the series with a discussion on the relationship of fault diagnosis to other process operations and on emerging trends such as hybrid blackboard-based frameworks for fault diagnosis. # 2002 Published by Elsevier Science Ltd.",
"title": ""
},
{
"docid": "2841935e11a246a68d71cca27728f387",
"text": "Unintentional falls are a common cause of severe injury in the elderly population. By introducing small, non-invasive sensor motes in conjunction with a wireless network, the Ivy Project aims to provide a path towards more independent living for the elderly. Using a small device worn on the waist and a network of fixed motes in the home environment, we can detect the occurrence of a fall and the location of the victim. Low-cost and low-power MEMS accelerometers are used to detect the fall while RF signal strength is used to locate the person",
"title": ""
},
{
"docid": "a2a76f1ca05797cfdf76a6a50cee3d7e",
"text": "In a P2P system, a client peer may select one or more server peers to download a specific file. In a P2P resource economy, the server peers charge the client for the downloading. A server peer's price would naturally depend on the specific object being downloaded, the duration of the download, and the rate at which the download is to occur. The optimal peer selection problem is to select, from the set of peers that have the desired object, the subset of peers and download rates that minimizes cost. In this paper we examine a number of natural peer selection problems for both P2P downloading and P2P streaming. For downloading, we obtain the optimal solution for minimizing the download delay subject to a budget constraint, as well as the corresponding Nash equilibrium. For the streaming problem, we obtain a solution that minimizes cost subject to continuous playback while allowing for one or more server peers to fail during the streaming process. The methodologies developed in this paper are applicable to a variety of P2P resource economy problems.",
"title": ""
}
] |
scidocsrr
|
b215b55035caa4f5127b0bb766712812
|
Low-loss compact ku-band waveguide low-pass filter
|
[
{
"docid": "ce0f21b03d669b72dd954352e2c35ab1",
"text": "In this letter, a new technique is proposed for the design of a compact high-power low-pass rectangular waveguide filter with a wide spurious-free frequency behavior. Specifically, the new filter is intended for the suppression of the fundamental mode over a wide band in much higher power applications than the classical corrugated filter with the same frequency specifications. Moreover, the filter length is dramatically reduced when compared to alternative techniques previously considered.",
"title": ""
}
] |
[
{
"docid": "9363421f524b4990c5314298a7e56e80",
"text": "hree years ago, researchers at the secretive Google X lab in Mountain View, California, extracted some 10 million still images from YouTube videos and fed them into Google Brain — a network of 1,000 computers programmed to soak up the world much as a human toddler does. After three days looking for recurring patterns, Google Brain decided, all on its own, that there were certain repeating categories it could identify: human faces, human bodies and … cats 1. Google Brain's discovery that the Inter-net is full of cat videos provoked a flurry of jokes from journalists. But it was also a landmark in the resurgence of deep learning: a three-decade-old technique in which massive amounts of data and processing power help computers to crack messy problems that humans solve almost intuitively, from recognizing faces to understanding language. Deep learning itself is a revival of an even older idea for computing: neural networks. These systems, loosely inspired by the densely interconnected neurons of the brain, mimic human learning by changing the strength of simulated neural connections on the basis of experience. Google Brain, with about 1 million simulated neurons and 1 billion simulated connections, was ten times larger than any deep neural network before it. Project founder Andrew Ng, now director of the Artificial Intelligence Laboratory at Stanford University in California, has gone on to make deep-learning systems ten times larger again. Such advances make for exciting times in THE LEARNING MACHINES Using massive amounts of data to recognize photos and speech, deep-learning computers are taking a big step towards true artificial intelligence.",
"title": ""
},
{
"docid": "33eeb883ae070fdc1b5a1eb656bce6b9",
"text": "Traffic Congestion is one of many serious global problems in all great cities resulted from rapid urbanization which always exert negative externalities upon society. The solution of traffic congestion is highly geocentric and due to its heterogeneous nature, curbing congestion is one of the hard tasks for transport planners. It is not possible to suggest unique traffic congestion management framework which could be absolutely applied for every great cities. Conversely, it is quite feasible to develop a framework which could be used with or without minor adjustment to deal with congestion problem. So, the main aim of this paper is to prepare a traffic congestion mitigation framework which will be useful for urban planners, transport planners, civil engineers, transport policy makers, congestion management researchers who are directly or indirectly involved or willing to involve in the task of traffic congestion management. Literature review is the main source of information of this study. In this paper, firstly, traffic congestion is defined on the theoretical point of view and then the causes of traffic congestion are briefly described. After describing the causes, common management measures, using worldwide, are described and framework for supply side and demand side congestion management measures are prepared.",
"title": ""
},
{
"docid": "857550d124e8267111370c1ca7117ee1",
"text": "Social media offers politicians an opportunity to bypass traditional media and directly influence their audience's opinions and behavior through framing. Using data from Twitter about how members of the U.S. Congress use hashtags, we examine to what extent politicians participate in framing, which issues received the most framing efforts, and which politicians exhibited the highest rates of framing. We find that politicians actively use social media to frame issues by choosing both topics to discuss and specific hashtags within topics, and that recognizably divisive issues receive the most framing efforts. Finally, we find that voting patterns generally align with tweeting patterns; however, several notable exceptions suggest our methodology can provide a more nuanced picture of Congress than voting records alone.",
"title": ""
},
{
"docid": "6f13d2d8e511f13f6979859a32e68fdd",
"text": "As an innovative measurement technique, the so-called Fiber Bragg Grating (FBG) sensors are used to measure local and global strains in a growing number of application scenarios. FBGs facilitate a reliable method to sense strain over large distances and in explosive atmospheres. Currently, there is only little knowledge available concerning mechanical properties of FGBs, e.g. under quasi-static, cyclic and thermal loads. To address this issue, this work quantifies typical loads on FGB sensors in operating state and moreover aims to determine their mechanical response resulting from certain load cases. Copyright © 2013 IFSA.",
"title": ""
},
{
"docid": "2f84b44cdce52068b7e692dad7feb178",
"text": "Two stage PCR has been used to introduce single amino acid substitutions into the EF hand structures of the Ca(2+)-activated photoprotein aequorin. Transcription of PCR products, followed by cell free translation of the mRNA, allowed characterisation of recombinant proteins in vitro. Substitution of D to A at position 119 produced an active photoprotein with a Ca2+ affinity reduced by a factor of 20 compared to the wild type recombinant aequorin. This recombinant protein will be suitable for measuring Ca2+ inside the endoplasmic reticulum, the mitochondria, endosomes and the outside of live cells.",
"title": ""
},
{
"docid": "4f0b32fb335a0a19f431ddc1b7785c05",
"text": "Dental implants have proven to be a successful treatment option in fully and partially edentulous patients, rendering long-term functional and esthetic outcomes. Various factors are crucial for predictable long-term peri-implant tissue stability, including the biologic width; the papilla height and the mucosal soft-tissue level; the amounts of soft-tissue volume and keratinized tissue; and the biotype of the mucosa. The biotype of the mucosa is congenitally set, whereas many other parameters can, to some extent, be influenced by the treatment itself. Clinically, the choice of the dental implant and the position in a vertical and horizontal direction can substantially influence the establishment of the biologic width and subsequently the location of the buccal mucosa and the papilla height. Current treatment concepts predominantly focus on providing optimized peri-implant soft-tissue conditions before the start of the prosthetic phase and insertion of the final reconstruction. These include refined surgical techniques and the use of materials from autogenous and xenogenic origins to augment soft-tissue volume and keratinized tissue around dental implants, thereby mimicking the appearance of natural teeth.",
"title": ""
},
{
"docid": "27fa3f76bd1e097afd389582ee929837",
"text": "Prevalence of morbid obesity is rising. Along with it, the adipose associated co-morbidities increase - included panniculus morbidus, the end stage of obesity of the abdominal wall. In the course of time panniculus often develop a herniation of bowel. An incarcerated hernia and acute exacerbation of a chronic inflammation of the panniculus must be treated immediately and presents a surgical challenge. The resection of such massive abdominal panniculus presents several technical problems to the surgeon. Preparation of long standing or fixed hernias may require demanding adhesiolysis. The wound created is huge and difficult to manage, and accompanied by considerable complications at the outset. We provide a comprehensive overview of a possible approach for panniculectomy and hernia repair and overlook of the existing literature.",
"title": ""
},
{
"docid": "d80ac637e703288e1d2035934d4f8ebf",
"text": "s. Putz-Anderson, V. 1988. Cumulative trauma disorders: A manual for musculoskeletal diseases of the upper limbs. Bristol, PA: Taylor & Francis. Ramos, G., and R. Balakrishnan. 2005. Zliding: Fluid Zooming and Sliding for High Precision Parameter Manipulation. Paper read at UIST 2006. Ramos, G., and R. Balakrishnan. 2006. Pressure Marks. Paper read at UNPUBLISHED MANUSCRIPT (under review). Ramos, G., M. Boulos, and R. Balakrishnan. 2004. Pressure Widgets. Paper read at CHI 2004. Raskar, R., P. Beardsley, J. van Baar, Y. Wang, P. Dietz, J. Lee, D. Leigh, and T. Willwacher. 2004. RFIG lamps: interacting with a self-describing world via photosensing wireless tags and projectors. 23 (3): 406-415. Raskar, Ramesh, Jeroen van Baar, Paul Beardsley, Thomas Willwacher, Srinivas Rao, and Clifton Forlines. 2005. iLamps: geometrically aware and self-configuring projectors. In ACM SIGGRAPH 2005 Courses. Los Angeles, California: ACM. Raskin, J. 2000. The Humane Interface: New Directions for Designing Interactive Systems: ACM Press. Rekimoto, J., Y. Ayatsuka, M. Kohno, and Haruo Oba. 2003. Proximal Interactions: A Direct Manipulation Technique for Wireless Networking. Paper read at INTERACT 2003. Rekimoto, Jun. 1996. Tilting operations for small screen interfaces. Paper read at ACM UIST Symposium on User Interface Software and Technology, at New York. Rekimoto, Jun. 1998. A multiple-device approach for supporting whiteboard based interaction. Paper read at ACM CHI Conference on Human Factors in Computing Systems, at New York. Rempel, D, J Bach, L Gordon, and R. Tal. 1998. Effects of forearm pronation/supination on carpal tunnel pressure. J Hand Surgery 23 (1):38-42. Rime, B., and L. Schiaratura. 1991. Gesture and speech. In Fundamentals of Nonverbal Behaviour. New York: Press Syndacate of the University of Cambridge. Rutledge, J., and T. Selker. 1990. Force-to-Motion Functions for Pointing. Paper read at Proceedings of Interact '90: The IFIP Conf. on Human-Computer Interaction, Aug. 27-31. Rutledge, Joseph D., and Ted Selker. 1990. Force-to-motion functions for pointing. In Proceedings of the IFIP TC13 Third Interational Conference on Human-Computer Interaction: North-Holland Publishing Co. Ryall, Kathy, Clifton Forlines, Chia Shen, and Meredith Ringel Morris. 2004. Exploring the effects of group size and table size on interactions with tabletop shared-display groupware. In Proceedings of the 2004 ACM conference on Computer supported cooperative work. Chicago, Illinois, USA: ACM. Saponas, T. Scott, Desney S. Tan, Dan Morris, Ravin Balakrishnan, Jim Turner, and James A. Landay. 2009. Enabling always-available input with muscle-computer interfaces. In Proceedings of the 22nd annual ACM symposium on User interface software and technology. Victoria, BC, Canada: ACM. Saund, Eric, and Edward Lank. 2003. Stylus input and editing without prior selection of mode. In Proceedings of the 16th annual ACM symposium on User interface software and technology. Vancouver, Canada: ACM. Sawhney, N., and Chris M. Schmandt. 2000. Nomadic Radio: Speech and Audio Interaction for Contextual Messaging in Nomadic Environments. ACM Transactions on Computer-Human Interaction 7 (3):353-383.",
"title": ""
},
{
"docid": "c988dc0e9be171a5fcb555aedcdf67e3",
"text": "Online social networks, such as Facebook, are increasingly utilized by many people. These networks allow users to publish details about themselves and to connect to their friends. Some of the information revealed inside these networks is meant to be private. Yet it is possible to use learning algorithms on released data to predict private information. In this paper, we explore how to launch inference attacks using released social networking data to predict private information. We then devise three possible sanitization techniques that could be used in various situations. Then, we explore the effectiveness of these techniques and attempt to use methods of collective inference to discover sensitive attributes of the data set. We show that we can decrease the effectiveness of both local and relational classification algorithms by using the sanitization methods we described.",
"title": ""
},
{
"docid": "d7cb103c0dd2e7c8395438950f83da3f",
"text": "We address the effects of packaging on performance, reliability and cost of photonic devices. For silicon photonics we address some specific packaging aspects. Finally we propose an approach for integration of photonics and ASICs.",
"title": ""
},
{
"docid": "4e23bf1c89373abaf5dc096f76c893f3",
"text": "Clock and data recovery (CDR) circuit plays a vital role for wired serial link communication in multi mode based system on chip (SOC). In wire linked communication systems, when data flows without any accompanying clock over a single wire, the receiver of the system is required to recover this data synchronously without losing the information. Therefore there exists a need for CDR circuits in the receiver of the system for recovering the clock or timing information from these data. The existing Octa-rate CDR circuit is not compatible to real time data, such a data is unpredictable, non periodic and has different arrival times and phase widths. Thus the proposed PRN based Octa-rate Clock and Data Recovery circuit is made compatible to real time data by introducing a Random Sequence Generator. The proposed PRN based Octa-rate Clock and Data Recovery circuit consists of PRN Sequence Generator, 16-Phase Generator, Early Late Phase Detector and Delay Line Controller. The FSM based Delay Line Controller controls the delay length and introduces the required delay in the input data. The PRN based Octa-rate CDR circuit has been realized using Xilinx ISE 13.2 and implemented on Vertex-5 FPGA target device for real time verification. The delay between the input and the generation of output is measured and analyzed using Logic Analyzer AGILENT 1962 A.",
"title": ""
},
{
"docid": "e0ae0929df9b396d35f02c8dc2e2487a",
"text": "While selecting the hyper-parameters of Neural Networks (NNs) has been so far treated as an art, the emergence of more complex, deeper architectures poses increasingly more challenges to designers and Machine Learning (ML) practitioners, especially when power and memory constraints need to be considered. In this work, we propose HyperPower, a framework that enables efficient Bayesian optimization and random search in the context of power- and memory-constrained hyperparameter optimization for NNs running on a given hardware platform. HyperPower is the first work (i) to show that power consumption can be used as a low-cost, a priori known constraint, and (ii) to propose predictive models for the power and memory of NNs executing on GPUs. Thanks to HyperPower, the number of function evaluations and the best test error achieved by a constraint-unaware method are reached up to 112.99× and 30.12× faster, respectively, while never considering invalid configurations. HyperPower significantly speeds up the hyper-parameter optimization, achieving up to 57.20× more function evaluations compared to constraint-unaware methods for a given time interval, effectively yielding significant accuracy improvements by up to 67.6%.",
"title": ""
},
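The HyperPower record above hinges on checking power and memory limits before a configuration is ever trained. The sketch below is a minimal, hypothetical illustration of that idea in Python, not the authors' code: `predict_power_w`, `predict_memory_mb`, and `train_and_eval` are stand-ins for the paper's learned predictors and expensive objective, and their formulas are invented for the example.

```python
import random

# Hypothetical stand-ins for HyperPower's learned predictors and the training objective.
def predict_power_w(cfg):       # a priori power estimate for a candidate network
    return 0.5 * cfg["width"] + 2.0 * cfg["depth"]

def predict_memory_mb(cfg):     # a priori memory estimate for a candidate network
    return 4.0 * cfg["width"] * cfg["depth"]

def train_and_eval(cfg):        # expensive black-box objective (e.g., validation error)
    return 1.0 / (cfg["width"] * cfg["depth"]) + random.gauss(0, 0.01)

def constrained_random_search(n_trials, power_budget_w, memory_budget_mb, seed=0):
    """Random search that rejects infeasible configurations before training them."""
    rng = random.Random(seed)
    best_cfg, best_err = None, float("inf")
    for _ in range(n_trials):
        cfg = {"width": rng.choice([16, 32, 64, 128]), "depth": rng.choice([2, 4, 8, 16])}
        # The constraint check is cheap and a priori, so no GPU time is spent on invalid points.
        if predict_power_w(cfg) > power_budget_w or predict_memory_mb(cfg) > memory_budget_mb:
            continue
        err = train_and_eval(cfg)
        if err < best_err:
            best_cfg, best_err = cfg, err
    return best_cfg, best_err

if __name__ == "__main__":
    print(constrained_random_search(n_trials=100, power_budget_w=40.0, memory_budget_mb=2048.0))
```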
{
"docid": "7fef9bfd0e71a08d5574affb91d0c9ed",
"text": "This paper presents a novel 3D indoor Laser-aided Inertial Navigation System (L-INS) for the visually impaired. An Extended Kalman Filter (EKF) fuses information from an Inertial Measurement Unit (IMU) and a 2D laser scanner, to concurrently estimate the six degree-of-freedom (d.o.f.) position and orientation (pose) of the person and a 3D map of the environment. The IMU measurements are integrated to obtain pose estimates, which are subsequently corrected using line-to-plane correspondences between linear segments in the laser-scan data and orthogonal structural planes of the building. Exploiting the orthogonal building planes ensures fast and efficient initialization and estimation of the map features while providing human-interpretable layout of the environment. The L-INS is experimentally validated by a person traversing a multistory building, and the results demonstrate the reliability and accuracy of the proposed method for indoor localization and mapping.",
"title": ""
},
{
"docid": "9dd245f75092adc8d8bb2b151275789b",
"text": "Current model free learning-based robot grasping approaches exploit human-labeled datasets for training the models. However, there are two problems with such a methodology: (a) since each object can be grasped in multiple ways, manually labeling grasp locations is not a trivial task; (b) human labeling is biased by semantics. While there have been attempts to train robots using trial-and-error experiments, the amount of data used in such experiments remains substantially low and hence makes the learner prone to over-fitting. In this paper, we take the leap of increasing the available training data to 40 times more than prior work, leading to a dataset size of 50K data points collected over 700 hours of robot grasping attempts. This allows us to train a Convolutional Neural Network (CNN) for the task of predicting grasp locations without severe overfitting. In our formulation, we recast the regression problem to an 18-way binary classification over image patches. We also present a multi-stage learning approach where a CNN trained in one stage is used to collect hard negatives in subsequent stages. Our experiments clearly show the benefit of using large-scale datasets (and multi-stage training) for the task of grasping. We also compare to several baselines and show state-of-the-art performance on generalization to unseen objects for grasping.",
"title": ""
},
{
"docid": "134f44bb808d5e873161819ebb175af5",
"text": "Like most behavior, consumer behavior too is goal driven. In turn, goals constitute cognitive constructs that can be chronically active as well as primed by features of the environment. Goal systems theory outlines the principles that characterize the dynamics of goal pursuit and explores their implications for consumer behavior. In this vein, we discuss from a common, goal systemic, perspective a variety of well known phenomena in the realm of consumer behavior including brand loyalty, variety seeking, impulsive buying, preferences, choices and regret. The goal systemic perspective affords guidelines for subsequent research on the dynamic aspects of consummatory behavior as well as offering insights into practical matters in the area of marketing. © 2011 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "bf7679eedfe88210b70105d50ae8acf4",
"text": "Figure 1: Latent space of unsupervised VGAE model trained on Cora citation network dataset [1]. Grey lines denote citation links. Colors denote document class (not provided during training). Best viewed on screen. We introduce the variational graph autoencoder (VGAE), a framework for unsupervised learning on graph-structured data based on the variational auto-encoder (VAE) [2, 3]. This model makes use of latent variables and is capable of learning interpretable latent representations for undirected graphs (see Figure 1).",
"title": ""
},
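The VGAE record above reduces to a few lines of linear algebra: a two-layer GCN encoder produces a mean and log-variance per node, a latent code is drawn with the reparameterization trick, and the adjacency matrix is reconstructed as sigmoid(Z Z^T). The NumPy sketch below is a forward pass only, under assumed layer sizes and randomly initialized weights; it is not the authors' implementation, which also trains the encoder against the variational lower bound.

```python
import numpy as np

def normalize_adj(adj):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

def vgae_forward(adj, features, hidden_dim=16, latent_dim=8, seed=0):
    rng = np.random.default_rng(seed)
    n, f = features.shape
    w0 = rng.normal(scale=0.1, size=(f, hidden_dim))          # shared first GCN layer
    w_mu = rng.normal(scale=0.1, size=(hidden_dim, latent_dim))
    w_logvar = rng.normal(scale=0.1, size=(hidden_dim, latent_dim))

    a_norm = normalize_adj(adj)
    h = np.maximum(a_norm @ features @ w0, 0.0)               # ReLU(GCN layer 1)
    mu = a_norm @ h @ w_mu                                    # node-wise latent means
    logvar = a_norm @ h @ w_logvar                            # node-wise log-variances

    eps = rng.standard_normal(mu.shape)                       # reparameterization trick
    z = mu + np.exp(0.5 * logvar) * eps

    a_rec = 1.0 / (1.0 + np.exp(-(z @ z.T)))                  # sigmoid(Z Z^T): edge probabilities
    return z, a_rec

if __name__ == "__main__":
    adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
    x = np.eye(3)                                             # featureless graph -> identity features
    z, a_rec = vgae_forward(adj, x)
    print(a_rec.round(2))
```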
{
"docid": "9ccbd750bd39e0451d98a7371c2b0914",
"text": "The aim of this study was to assess the effect of inspiratory muscle training (IMT) on resistance to fatigue of the diaphragm (D), parasternal (PS), sternocleidomastoid (SCM) and scalene (SC) muscles in healthy humans during exhaustive exercise. Daily inspiratory muscle strength training was performed for 3 weeks in 10 male subjects (at a pressure threshold load of 60% of maximal inspiratory pressure (MIP) for the first week, 70% of MIP for the second week, and 80% of MIP for the third week). Before and after training, subjects performed an incremental cycle test to exhaustion. Maximal inspiratory pressure and EMG-analysis served as indices of inspiratory muscle fatigue assessment. The before-to-after exercise decreases in MIP and centroid frequency (fc) of the EMG (D, PS, SCM, and SC) power spectrum (P<0.05) were observed in all subjects before the IMT intervention. Such changes were absent after the IMT. The study found that in healthy subjects, IMT results in significant increase in MIP (+18%), a delay of inspiratory muscle fatigue during exhaustive exercise, and a significant improvement in maximal work performance. We conclude that the IMT elicits resistance to the development of inspiratory muscles fatigue during high-intensity exercise.",
"title": ""
},
{
"docid": "3dc61fefea09d89313b2f110429e6d24",
"text": "Noise Reduction is one of the most important steps in very broad domain of image processing applications such as face identification, motion tracking, visual pattern recognition and etc. Texture images are covered a huge number of images where are collected as database in these applications. In this paper an approach is proposed for noise reduction in texture images which is based on real word spelling correction theory in natural language processing.The proposed approach is included two main steps. In the first step, most similar pixels to noisy desired pixel in terms of textural features are generated using local binary pattern. Next, best one of the candidates is selected based on two-gram algorithm. The quality of the proposed approach is compared with some of state of the art noise reduction filters in the result part. High accuracy, Low blurring effect, and low computational complexity are some advantages of the proposed approach.",
"title": ""
},
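Since the record above measures textural similarity between pixels with the local binary pattern (LBP), a small sketch of the basic 8-neighbour LBP code may help. This is the standard 3x3 operator only, not the paper's candidate-generation or two-gram selection pipeline, and the threshold-at-centre, clockwise bit-packing convention used here is an assumption.

```python
import numpy as np

def lbp_8neighbour(image):
    """Basic 3x3 local binary pattern: threshold the 8 neighbours at the centre value
    and pack the resulting bits (clockwise from the top-left) into a code in [0, 255]."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    # Neighbour offsets listed clockwise starting from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (neighbour >= centre).astype(np.uint8) * (1 << bit)
    return codes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patch = rng.integers(0, 256, size=(6, 6))   # stand-in for a small texture patch
    print(lbp_8neighbour(patch))
```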
{
"docid": "4c261e2b54a12270f158299733942a5f",
"text": "Applying Data Mining (DM) in education is an emerging interdisciplinary research field also known as Educational Data Mining (EDM). Ensemble techniques have been successfully applied in the context of supervised learning to increase the accuracy and stability of prediction. In this paper, we present a hybrid procedure based on ensemble classification and clustering that enables academicians to firstly predict students’ academic performance and then place each student in a well-defined cluster for further advising. Additionally, it endows instructors an anticipated estimation of their students’ capabilities during team forming and in-class participation. For ensemble classification, we use multiple classifiers (Decision Trees-J48, Naïve Bayes and Random Forest) to improve the quality of student data by eliminating noisy instances, and hence improving predictive accuracy. We then use the approach of bootstrap (sampling with replacement) averaging, which consists of running k-means clustering algorithm to convergence of the training data and averaging similar cluster centroids to obtain a single model. We empirically compare our technique with other ensemble techniques on real world education datasets.",
"title": ""
}
] |
scidocsrr
|
3e31341c83b5cca2382bde89f1bdafda
|
Enabling Sparse Winograd Convolution by Native Pruning
|
[
{
"docid": "785b1e2b8cf185c0ffa044d62309c711",
"text": "Phenomenally successful in practical inference problems, convolutional neural networks (CNN) are widely deployed in mobile devices, data centers, and even supercomputers. The number of parameters needed in CNNs, however, are often large and undesirable. Consequently, various methods have been developed to prune a CNN once it is trained. Nevertheless, the resulting CNNs offer limited benefits. While pruning the fully connected layers reduces a CNN’s size considerably, it does not improve inference speed noticeably as the compute heavy parts lie in convolutions. Pruning CNNs in a way that increase inference speed often imposes specific sparsity structures, thus limiting the achievable sparsity levels. We present a method to realize simultaneously size economy and speed improvement while pruning CNNs. Paramount to our success is an efficient general sparse-with-dense matrix multiplication implementation that is applicable to convolution of feature maps with kernels of arbitrary sparsity patterns. Complementing this, we developed a performance model that predicts sweet spots of sparsity levels for different layers and on different computer architectures. Together, these two allow us to demonstrate 3.1–7.3× convolution speedups over dense convolution in AlexNet, on Intel Atom, Xeon, and Xeon Phi processors, spanning the spectrum from mobile devices to supercomputers. We also open source our project at https://github.com/IntelLabs/SkimCaffe.",
"title": ""
},
{
"docid": "7539c44b888e21384dc266d1cf397be0",
"text": "Deep learning has become a ubiquitous technology to improve machine intelligence. However, most of the existing deep models are structurally very complex, making them difficult to be deployed on the mobile platforms with limited computational power. In this paper, we propose a novel network compression method called dynamic network surgery, which can remarkably reduce the network complexity by making on-the-fly connection pruning. Unlike the previous methods which accomplish this task in a greedy way, we properly incorporate connection splicing into the whole process to avoid incorrect pruning and make it as a continual network maintenance. The effectiveness of our method is proved with experiments. Without any accuracy loss, our method can efficiently compress the number of parameters in LeNet-5 and AlexNet by a factor of 108× and 17.7× respectively, proving that it outperforms the recent pruning method by considerable margins. Code and some models are available at https://github.com/yiwenguo/Dynamic-Network-Surgery.",
"title": ""
},
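The key mechanism in the dynamic-network-surgery record above is a pair of operations on a binary connection mask: pruning removes weights whose magnitude falls below a threshold, while splicing lets a pruned weight re-enter the network if its still-updated value grows back. The NumPy sketch below shows only that mask-update rule under assumed thresholds; the actual method interleaves it with gradient training of the dense weights.

```python
import numpy as np

def surgery_mask_update(weights, mask, prune_thresh, splice_thresh):
    """Update a pruning mask with both pruning and splicing (re-activation).

    weights       : dense weight matrix (kept and updated even where mask == 0)
    mask          : binary matrix, 1 = connection active, 0 = pruned
    prune_thresh  : active weights with |w| below this are pruned (mask -> 0)
    splice_thresh : pruned weights with |w| above this are spliced back (mask -> 1)
    Typically splice_thresh > prune_thresh so the mask does not oscillate.
    """
    magnitude = np.abs(weights)
    new_mask = mask.copy()
    new_mask[(mask == 1) & (magnitude < prune_thresh)] = 0   # prune weak connections
    new_mask[(mask == 0) & (magnitude > splice_thresh)] = 1  # splice important ones back
    return new_mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 4))
    mask = np.ones_like(w)
    mask = surgery_mask_update(w, mask, prune_thresh=0.5, splice_thresh=0.8)
    print(mask)
    print((w * mask).round(2))   # the effective (masked) weights
```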
{
"docid": "fff3bd0d4b56beac7710a2c1b1035d73",
"text": "We revisit the idea of brain damage, i.e. the pruning of the coefficients of a neural network, and suggest how brain damage can be modified and used to speedup convolutional layers in ConvNets. The approach uses the fact that many efficient implementations reduce generalized convolutions to matrix multiplications. The suggested brain damage process prunes the convolutional kernel tensor in a group-wise fashion. After such pruning, convolutions can be reduced to multiplications of thinned dense matrices, which leads to speedup. We investigate different ways to add group-wise prunning to the learning process, and show that severalfold speedups of convolutional layers can be attained using group-sparsity regularizers. Our approach can adjust the shapes of the receptive fields in the convolutional layers, and even prune excessive feature maps from ConvNets, all in data-driven way.",
"title": ""
},
{
"docid": "d00957d93af7b2551073ba84b6c0f2a6",
"text": "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN’s evaluation. Experimental results show that SSL achieves on average 5.1× and 3.1× speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth can reduce 20 layers of a Deep Residual Network (ResNet) to 18 layers while improve the accuracy from 91.25% to 92.60%, which is still slightly higher than that of original ResNet with 32 layers. For AlexNet, structure regularization by SSL also reduces the error by ∼ 1%. Our source code can be found at https://github.com/wenwei202/caffe/tree/scnn",
"title": ""
},
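The structured-sparsity idea in the record above comes down to a group Lasso penalty: each structure (for example, one output filter of a convolutional layer) forms a group, and the regularizer sums the L2 norms of the groups so that whole filters are driven to zero together and can be physically removed. The NumPy sketch below shows only the penalty and the post-training filter removal; the layer shape and the tolerance are illustrative assumptions, and the actual method adds this penalty to the training loss.

```python
import numpy as np

def group_lasso_penalty(conv_weights):
    """Sum of per-filter L2 norms for a conv tensor of shape
    (out_channels, in_channels, kh, kw); each output filter is one group."""
    flat = conv_weights.reshape(conv_weights.shape[0], -1)
    return np.linalg.norm(flat, axis=1).sum()

def remove_zeroed_filters(conv_weights, tol=1e-3):
    """After training with the penalty, drop filters whose norm collapsed to ~0,
    yielding a genuinely smaller, hardware-friendly layer."""
    norms = np.linalg.norm(conv_weights.reshape(conv_weights.shape[0], -1), axis=1)
    keep = norms > tol
    return conv_weights[keep], keep

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(8, 3, 3, 3))
    w[2] = 0.0                      # pretend training zeroed this filter
    w[5] = 1e-5                     # and shrank this one to numerical noise
    print("penalty:", round(group_lasso_penalty(w), 3))
    pruned, keep = remove_zeroed_filters(w)
    print("kept filters:", keep, "new shape:", pruned.shape)
```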
{
"docid": "c0d9ad40f655f7eb3402725ddde4abcf",
"text": "Convolution operations dominate the total execution time of deep convolutional neural networks (CNNs). In this paper, we aim at enhancing the performance of the state-of-the-art convolution algorithm (called Winograd convolution) on the GPU. Our work is based on two observations: (1) CNNs often have abundant zero weights and (2) the performance benefit of Winograd convolution is limited mainly due to extra additions incurred during data transformation. In order to exploit abundant zero weights, we propose a low-overhead and efficient hardware mechanism that skips multiplications that will always give zero results regardless of input data (called ZeroSkip). In addition, to leverage the second observation, we present data reuse optimization for addition operations in Winograd convolution (called AddOpt), which improves the utilization of local registers, thereby reducing on-chip cache accesses. Our experiments with a real-world deep CNN, VGG-16, on GPGPU-Sim and Titan X show that the proposed methods, ZeroSkip and AddOpt, achieve 51.8% higher convolution performance than the baseline Winograd convolution. Moreover, even without any hardware modification, AddOpt alone gives 35.6% higher performance on a real hardware platform, Titan X.",
"title": ""
}
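Both the sparse-Winograd query answered here and the ZeroSkip/AddOpt record above build on the F(2x2, 3x3) minimal-filtering algorithm, in which the 3x3 kernel is transformed into a 4x4 tile U = G g G^T; zeros introduced into U (by pruning in the Winograd domain) are exactly the multiplications that can be skipped. The NumPy sketch below uses the standard published transform matrices for F(2x2, 3x3) and checks one tile against direct correlation; it illustrates the arithmetic only and is not any of the papers' optimized kernels.

```python
import numpy as np

# Standard transform matrices for Winograd F(2x2, 3x3).
B_T = np.array([[1,  0, -1,  0],
                [0,  1,  1,  0],
                [0, -1,  1,  0],
                [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=float)

def winograd_tile(d, g):
    """One 2x2 output tile from a 4x4 input tile d and a 3x3 kernel g."""
    U = G @ g @ G.T          # transformed kernel; sparse Winograd zeroes entries here
    V = B_T @ d @ B_T.T      # transformed input tile
    M = U * V                # element-wise product: 16 multiplications instead of 36
    return A_T @ M @ A_T.T   # inverse transform to the 2x2 output

def direct_tile(d, g):
    """Reference: valid 2D correlation of a 4x4 tile with a 3x3 kernel."""
    out = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            out[i, j] = np.sum(d[i:i + 3, j:j + 3] * g)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, g = rng.normal(size=(4, 4)), rng.normal(size=(3, 3))
    print(np.allclose(winograd_tile(d, g), direct_tile(d, g)))  # True
```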
] |
[
{
"docid": "b5c53afda0b8af1ecd1e973dd7cdd101",
"text": "MOTIVATION\nProtein contacts contain key information for the understanding of protein structure and function and thus, contact prediction from sequence is an important problem. Recently exciting progress has been made on this problem, but the predicted contacts for proteins without many sequence homologs is still of low quality and not very useful for de novo structure prediction.\n\n\nMETHOD\nThis paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual neural networks. The first residual network conducts a series of 1-dimensional convolutional transformation of sequential features; the second residual network conducts a series of 2-dimensional convolutional transformation of pairwise information including output of the first residual network, EC information and pairwise potential. By using very deep residual networks, we can accurately model contact occurrence patterns and complex sequence-structure relationship and thus, obtain higher-quality contact prediction regardless of how many sequence homologs are available for proteins in question.\n\n\nRESULTS\nOur method greatly outperforms existing methods and leads to much more accurate contact-assisted folding. Tested on 105 CASP11 targets, 76 past CAMEO hard targets, and 398 membrane proteins, the average top L long-range prediction accuracy obtained by our method, one representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints but without any force fields can yield correct folds (i.e., TMscore>0.6) for 203 of the 579 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 of them, respectively. Our contact-assisted models also have much better quality than template-based models especially for membrane proteins. The 3D models built from our contact prediction have TMscore>0.5 for 208 of the 398 membrane proteins, while those from homology modeling have TMscore>0.5 for only 10 of them. Further, even if trained mostly by soluble proteins, our deep learning method works very well on membrane proteins. In the recent blind CAMEO benchmark, our fully-automated web server implementing this method successfully folded 6 targets with a new fold and only 0.3L-2.3L effective sequence homologs, including one β protein of 182 residues, one α+β protein of 125 residues, one α protein of 140 residues, one α protein of 217 residues, one α/β of 260 residues and one α protein of 462 residues. Our method also achieved the highest F1 score on free-modeling targets in the latest CASP (Critical Assessment of Structure Prediction), although it was not fully implemented back then.\n\n\nAVAILABILITY\nhttp://raptorx.uchicago.edu/ContactMap/.",
"title": ""
},
{
"docid": "2d87e26389b9d4ebf896bd9cbd281e69",
"text": "Finger-vein biometrics has been extensively investigated for personal authentication. One of the open issues in finger-vein verification is the lack of robustness against image-quality degradation. Spurious and missing features in poor-quality images may degrade the system’s performance. Despite recent advances in finger-vein quality assessment, current solutions depend on domain knowledge. In this paper, we propose a deep neural network (DNN) for representation learning to predict image quality using very limited knowledge. Driven by the primary target of biometric quality assessment, i.e., verification error minimization, we assume that low-quality images are falsely rejected in a verification system. Based on this assumption, the low- and high-quality images are labeled automatically. We then train a DNN on the resulting data set to predict the image quality. To further improve the DNN’s robustness, the finger-vein image is divided into various patches, on which a patch-based DNN is trained. The deepest layers associated with the patches form together a complementary and an over-complete representation. Subsequently, the quality of each patch from a testing image is estimated and the quality scores from the image patches are conjointly input to probabilistic support vector machines (P-SVM) to boost quality-assessment performance. To the best of our knowledge, this is the first proposed work of deep learning-based quality assessment, not only for finger-vein biometrics, but also for other biometrics in general. The experimental results on two public finger-vein databases show that the proposed scheme accurately identifies high- and low-quality images and significantly outperforms existing approaches in terms of the impact on equal error-rate decrease.",
"title": ""
},
{
"docid": "235e192cc8d0e7e020d5bde490ead034",
"text": "We propose a simple and general variant of the standard reparameterized gradient estimator for the variational evidence lower bound. Specifically, we remove a part of the total derivative with respect to the variational parameters that corresponds to the score function. Removing this term produces an unbiased gradient estimator whose variance approaches zero as the approximate posterior approaches the exact posterior. We analyze the behavior of this gradient estimator theoretically and empirically, and generalize it to more complex variational distributions such as mixtures and importance-weighted posteriors.",
"title": ""
},
{
"docid": "de66a8238e9c71471ada4cf19ccfe15b",
"text": "JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact support@jstor.org. SUMMARY In this paper we investigate the out-of-sample forecasting ability of feedforward and recurrent neural networks based on empirical foreign exchange rate data. A two-step procedure is proposed to construct suitable networks, in which networks are selected based on the predictive stochastic complexity (PSC) criterion, and the selected networks are estimated using both recursive Newton algorithms and the method of nonlinear least squares. Our results show that PSC is a sensible criterion for selecting networks and for certain exchange rate series, some selected network models have significant market timing ability and/or significantly lower out-of-sample mean squared prediction error relative to the random walk model.",
"title": ""
},
{
"docid": "39b4b7e77e357c9cc73038498f0f2cd1",
"text": "Traditional machine learning algorithms often fail to generalize to new input distributions, causing reduced accuracy. Domain adaptation attempts to compensate for the performance degradation by transferring and adapting source knowledge to target domain. Existing unsupervised methods project domains into a lower-dimensional space and attempt to align the subspace bases, effectively learning a mapping from source to target points or vice versa. However, they fail to take into account the difference of the two distributions in the subspaces, resulting in misalignment even after adaptation. We present a unified view of existing subspace mapping based methods and develop a generalized approach that also aligns the distributions as well as the subspace bases. Background. Domain adaptation, or covariate shift, is a fundamental problem in machine learning, and has attracted a lot of attention in the machine learning and computer vision community. Domain adaptation methods for visual data attempt to learn classifiers on a labeled source domain and transfer it to a target domain. There are two settings for visual domain adaptation: (1) unsupervised domain adaptation where there are no labeled examples available in the target domain; and (2) semisupervised domain adaptation where there are a few labeled examples in the target domain. Most existing algorithms operate in the semi-superised setting. However, in real world applications, unlabeled target data is often much more abundant and labeled examples are very limited, so the question of how to utilize the unlabeled target data is more important for practical visual domain adaptation. Thus, in this paper, we focus on the unsupervised scenario. Most of the existing unsupervised approaches have pursued adaptation by separately projecting the source and target data into a lowerdimensional manifold, and finding a transformation that brings the subspaces closer together. This process is illustrated in Figure 1. Geodesic methods [2, 3] find a path along the subspace manifold, and either project source and target onto points along that path [3], or find a closed-form linear map that projects source points to target [2]. Alternatively, the subspaces can be aligned by computing the linear map that minimizes the Frobenius norm of the difference between them, a method known as Subspace Alignment [1]. Approach. The intuition behind our approach is that although the existing approaches might align the subspaces (the bases of the subspaces), it might not fully align the data distributions in the subspaces as illustrated in Figure 1. We use the firstand second-order statistics, namely the mean and the variance, to describe a distribution in this paper. Since the mean after data preprocessing (i.e. normalization) is zero and is not affected 10 20 30 40 50 60 70 80 90 100 25 30 35 40 45 Subspace Dimension A c c u ra c y Mean Accuracy of k−NN with k=1 NA SA SDA−TS GFK SDA−IS 10 20 30 40 50 60 70 80 90 100 25 30 35 40 45 Subspace Dimension A c c u ra c y Mean Accuracy of k−NN with k=3 NA SA SDA−TS GFK SDA−IS Figure 2: Mean accuracy across all 12 experiment settings (domain shifts) of the k-NN Classifier on the Office-Caltech10 dataset. Both our methods SDA-IS and SDA-TS outperform GFK and SA consistently. Left: k-NN Classifier with k=1; Right: k-NN Classifier with k=3. 10 20 30 40 50 60 70 80 90 100 15 20 25 30 Subspace Dimension A c c u ra c y Mean Accuracy of k−NN with k=1",
"title": ""
},
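The Subspace Alignment baseline [1] that the record above generalizes is short enough to state in code: take the top-d PCA bases of the source and target domains, compute the alignment matrix M = Ps^T Pt, and embed source data through Ps M and target data through Pt before training a classifier. The NumPy sketch below implements that baseline only, not the distribution-aligning variants the record proposes, and the choice d=2 is illustrative.

```python
import numpy as np

def pca_basis(X, d):
    """Top-d principal directions (as columns) of the centred data matrix X (n_samples x n_features)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[:d].T                                  # shape (n_features, d)

def subspace_alignment(Xs, Xt, d):
    """Return source/target embeddings after aligning the source basis to the target basis."""
    Ps, Pt = pca_basis(Xs, d), pca_basis(Xt, d)
    M = Ps.T @ Pt                                    # closed-form minimizer of ||Ps M - Pt||_F
    source_aligned = (Xs - Xs.mean(axis=0)) @ Ps @ M
    target_proj = (Xt - Xt.mean(axis=0)) @ Pt
    return source_aligned, target_proj

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Xs = rng.normal(size=(100, 10))                  # labeled source domain features
    Xt = rng.normal(size=(80, 10)) + 0.5             # shifted, unlabeled target domain
    Zs, Zt = subspace_alignment(Xs, Xt, d=2)
    print(Zs.shape, Zt.shape)                        # (100, 2) (80, 2)
```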
{
"docid": "b362920d15a92cd8f001ec1f13186956",
"text": "Automatically extracting relationships such as causality from text presents a challenge at the frontiers of natural language processing. This thesis focuses on annotating and automatically tagging causal expressions and their cause and effect arguments. One popular paradigm for such tasks is SHALLOW SEMANTIC PARSING—marking relations and their arguments in text. Efforts to date have focused on individual propositions expressed by individual words. While this approach has been fruitful, it falters on semantic relationships that can be expressed by more complex linguistic patterns than words. It also struggles when multiple meanings are entangled in the same expression. Causality exhibits both challenges: it can be expressed using a variety of words, multi-word expressions, or even complex patterns spanning multiple clauses. Additionally, causality competes for linguistic space with phenomena such as temporal relations and obligation (e.g., allow can indicate causality, permission, or both). To expand shallow semantic parsing to such challenging relations, this thesis presents approaches based on the linguistic paradigm known as CONSTRUCTION GRAMMAR (CxG). CxG places arbitrarily complex form/function pairings called CONSTRUCTIONS at the heart of both syntax and semantics. Because constructions pair meanings with arbitrary forms, CxG allows predicates to be expressed by any linguistic pattern, no matter how complex. This thesis advocates for a new “surface construction labeling” (SCL) approach to applying CxG: given a relation of interest, such as causality, we annotate just the words that consistently signal a construction expressing that relation. Then, to automatically tag such constructions and their arguments, we need not wait for automated CxG tools that can analyze all the underlying grammatical constructions. Instead, we can build on top of existing tools, approximating the underlying constructions with patterns of words and conventional linguistic categories. The contributions of this thesis include a CxG-based annotation scheme and methodology for annotating explicit causal relations in English; an annotated corpus based on this scheme; and three methods for automatically tagging causal constructions. The first two tagging methods use a pipeline architecture based on tentative pattern-matching to combine automatically induced rules with statistical classifiers. The third method is a transition-based deep neural network. The thesis demonstrates the promise of these methods, discusses the tradeoffs of each, and suggests future applications and extensions.",
"title": ""
},
{
"docid": "f25c0b1fef38b7322197d61dd5dcac41",
"text": "Hepatocellular carcinoma (HCC) is one of the most common malignancies worldwide and one of the few malignancies with an increasing incidence in the USA. While the relationship between HCC and its inciting risk factors (e.g., hepatitis B, hepatitis C and alcohol liver disease) is well defined, driving genetic alterations are still yet to be identified. Clinically, HCC tends to be hypervascular and, for that reason, transarterial chemoembolization has proven to be effective in managing many patients with localized disease. More recently, angiogenesis has been targeted effectively with pharmacologic strategies, including monoclonal antibodies against VEGF and the VEGF receptor, as well as small-molecule kinase inhibitors of the VEGF receptor. Targeting angiogenesis with these approaches has been validated in several different solid tumors since the initial approval of bevacizumab for advanced colon cancer in 2004. In HCC, only sorafenib has been shown to extend survival in patients with advanced HCC and has opened the door for other anti-angiogenic strategies. Here, we will review the data supporting the targeting of the VEGF axis in HCC and the preclinical and early clinical development of bevacizumab.",
"title": ""
},
{
"docid": "d8ef81ed5cd25c98abb8d94dd769f9aa",
"text": "Organic anion transporting polypeptides (OATPs) are a group of membrane transport proteins that facilitate the influx of endogenous and exogenous substances across biological membranes. OATPs are found in enterocytes and hepatocytes and in brain, kidney, and other tissues. In enterocytes, OATPs facilitate the gastrointestinal absorption of certain orally administered drugs. Fruit juices such as grapefruit juice, orange juice, and apple juice contain substances that are OATP inhibitors. These fruit juices diminish the gastrointestinal absorption of certain antiallergen, antibiotic, antihypertensive, and β-blocker drugs. While there is no evidence, so far, that OATP inhibition affects the absorption of psychotropic medications, there is no room for complacency because the field is still nascent and because the necessary studies have not been conducted. Patients should therefore err on the side of caution, taking their medications at least 4 hours distant from fruit juice intake. Doing so is especially desirable with grapefruit juice, orange juice, and apple juice; with commercial fruit juices in which OATP-inhibiting substances are likely to be present in higher concentrations; with calcium-fortified fruit juices; and with medications such as atenolol and fexofenadine, the absorption of which is substantially diminished by concurrent fruit juice intake.",
"title": ""
},
{
"docid": "45ea01d82897401058492bc2f88369b3",
"text": "Reduction in greenhouse gas emissions from transportation is essential in combating global warming and climate change. Eco-routing enables drivers to use the most eco-friendly routes and is effective in reducing vehicle emissions. The EcoTour system assigns eco-weights to a road network based on GPS and fuel consumption data collected from vehicles to enable ecorouting. Given an arbitrary source-destination pair in Denmark, EcoTour returns the shortest route, the fastest route, and the eco-route, along with statistics for the three routes. EcoTour also serves as a testbed for exploring advanced solutions to a range of challenges related to eco-routing.",
"title": ""
},
{
"docid": "6c8b83e0e02e5c0230d57e4885d27e02",
"text": "Contemporary conceptions of physical education pedagogy stress the importance of considering students’ physical, affective, and cognitive developmental states in developing curricula (Aschebrock, 1999; Crum, 1994; Grineski, 1996; Humel, 2000; Hummel & Balz, 1995; Jones & Ward, 1998; Kurz, 1995; Siedentop, 1996; Virgilio, 2000). Sport and physical activity preference is one variable that is likely to change with development. Including activities preferred by girls and boys in physical education curricula could produce several benefits, including greater involvement in lessons and increased enjoyment of physical education (Derner, 1994; Greenwood, Stillwell, & Byars, 2000; Knitt et al., 2000; Lee, Fredenburg, Belcher, & Cleveland, 1999; Sass H. & Sass I., 1986; Strand & Scatling, 1994; Volke, Poszony, & Stumpf, 1985). These are significant goals, because preference for physical activity and enjoyment of physical education are important predictors for overall physical activity participation (Sallis et al., 1999a, b). Although physical education curricula should be based on more than simply students’ preferences, student preferences can inform the design of physical education, other schoolbased physical activity programs, and programs sponsored by other agencies. Young people’s physical activity and sport preferences are likely to vary by age, sex, socio-economic status and nationality. Although several studies have been conducted over many years (Greller & Cochran, 1995; Hoffman & Harris, 2000; Kotonski-Immig, 1994; Lamprecht, Ruschetti, & Stamm, 1991; Strand & Scatling, 1994; Taks, Renson, & Vanreusel, 1991; Telama, 1978; Walton et al., 1999), current understanding of children’s preferences in specific sports and movement activities is limited. One of the main limitations is the cross-sectional nature of the data, so the stability of sport and physical activity preferences over time is not known. The main aim of the present research is to describe the levels and trends in the development of sport and physical activity preferences in girls and boys over a period of five years, from the age of 10 to 14. Further, the study aims to establish the stability of preferences over time.",
"title": ""
},
{
"docid": "cf7eff6c24f333b6bcf30ef8cd8686e0",
"text": "For 4 decades, vigorous efforts have been based on the premise that early intervention for children of poverty and, more recently, for children with developmental disabilities can yield significant improvements in cognitive, academic, and social outcomes. The history of these efforts is briefly summarized and a conceptual framework presented to understand the design, research, and policy relevance of these early interventions. This framework, biosocial developmental contextualism, derives from social ecology, developmental systems theory, developmental epidemiology, and developmental neurobiology. This integrative perspective predicts that fragmented, weak efforts in early intervention are not likely to succeed, whereas intensive, high-quality, ecologically pervasive interventions can and do. Relevant evidence is summarized in 6 principles about efficacy of early intervention. The public policy challenge in early intervention is to contain costs by more precisely targeting early interventions to those who most need and benefit from these interventions. The empirical evidence on biobehavioral effects of early experience and early intervention has direct relevance to federal and state policy development and resource allocation.",
"title": ""
},
{
"docid": "b3850e8e822c1ded3bff73a50d8d0dec",
"text": "It is essential to prevent and reduce the level of cr ss-contamination in the pharmaceutical industry. Different types of residues need to be considered, including APIs ( Active Pharmaceutical Ingredients) residues, degrad ation products (due to different solubility, toxicity, an d cleanability characteristics in comparison with t he original compound), particulates, endotoxin, environmental d ust, residual rinse water (if product must be dry) as well as potential microbial contaminants (1, 2). In order to reach this goal, cleaning validation study should be carried out to provide a document which proves that process of cle aning has been validated and it can be performed re liably and repeatedly (3). In this article we discuss several aspects of cl eaning validation, such as bracketing, calculation of the acceptance criteria, swab sampling, rinse sampling, documentation. Additionally, some basic requiremen ts to provide necessities for environmental and equipment cleanliness, before commencement of cleaning valid ation study, are taken into account. It is worth mentioni ng that a practical approach is adopted to write th is article. Key-Words: Cleaning validation, Acceptance Criteria , Residue, Swab Sampling, Rinse Sampling Introduction Cleaning Validation Definition: Manufacturing processes have to be designed and carried out in a way that prevent cross-contamination as much as possible. Since most pieces of equipment are being used to manufacture different products, cleaning procedure must be able to remove residues from equipment to an acceptable level . Importance and purpose of cleaning validation: Not only it is required to comply with regulations, but also it is necessary to fulfill customers' requirements. It ensures the safety, identity, strength, and purity of the product which are the basic requirements of cGMP (Current Good Manufacturing Practice). It provides manufacturer with enough confidence that internal control is established properly (5, 6) * Corresponding Author E.mail: ramin3205@gmail.com Tel: 00989122168569 Fax: 00982126212319 It is advisable to perform at least three consecutive and successful applications of the cleaning procedure in order to prove that the method is validated . In case of detecting variable residue, following cleaning (especially an acceptable cleaning), enough attention must be given to effectiveness of the process and operators performance . Equipment cleaning validation maybe performed concurrently with actual production steps during process development and clinical manufacturing. Validation programs should be continued through full scale commercial production (1, . For new chemical entities it is essential to perform a risk assessment analysis before any operation in GMP plants . When cleaning validation is necessary according to guidelines of WHO: Product-contact surfaces (Consideration should be given to non-contact parts into which product may migrate for example, seals, flanges, mixing shaft, fans of ovens, heating elements etc.) Review Article [Asgharian et al., 5(3): March, 2014:3345-3366] CODEN (USA): IJPLCP ISSN: 0976-7126 © Sakun Publishing House (SPH): IJPLS 3346 Cleaning after product changeover (when one pharmaceutical formulation is being changed to another, completely different formulation) Between batches in campaigns (when the same formula is being manufactured over a period of time, and on different days). 
It seems acceptable that a campaign can last a working week, but anything longer becomes difficult to control and define . Cleaning validation for biological drugs must comply with stricter requirements due to their inherent characteristics (proteins are sticky by nature), parenteral product purity requirements, the complexity of equipment and broad spectrum of materials which need to be cleaned . Prevention of cross contamination in production: Production in segregated areas or using \"closed system\" of production Providing appropriate air-lock and air treatment system to prevent recirculation or re-entry of untreated or insufficiently treated air Providing comprehensive instructions to discharge clothing used in areas where products with special risk of crosscontamination are processed Using known effective cleaning and decontamination procedures Testing for residues and use of cleaning status labels on equipment (10) Different types of cleaning: Different mechanisms are employed to remove residues from equipment such as mechanical action, dissolution, detergency, saponification, and chemic al reaction. Mechanical action: In this method residues and contaminants are removed through physical actions such as brushing, scrubbing and using pressurized water. Dissolution: It involves using an appropriate solve nt to dissolve residues. Water is usually selected owning to being non-toxic, economical, environment friendly a nd does not leave any residue. However, some residues are only removed by alkaline or acidic solvents. In this case, usage of these cleaning agents is inevitable. Detergency: Detergent acts in four ways as wetting agent, solubilizer, emulsifier, and dispersant in removing the residues and contaminants from equipment. Wetting agents (such as surfactants) decrease the surface tension of cleaning solution, hus they can easily penetrate into the residue. Saponification: This method is based on the breakag e of ester bond in fat residue to form fatty acid and glycerol which are soluble in water. For this purpo se, some alkalis can be used such as NAOH, KOH. Chemical reaction: Oxidation and hydrolysis reactio n chemically breaks the organic residues (6, 11). Cleaning agents: Detergents are not part of the manufacturing proces s. They should be utilized as less as possible and eve n when they are absolutely required to facilitate cle aning, acceptance limits for cleaning agents residues shou ld be defined. The effectiveness of cleaning procedure s for removal of detergent residues should be evaluat ed. Ideally, no (or for ultra-sensitive analytical test methods-very low) amount of residue should be detected. The composition of detergents should be known to manufacturer and they should ensure that they are notified by supplier of any critical chang es in the formulation of the detergent. Detergents should be acceptable to the QA (Quality Assurance)/ QC (Quali ty Control) departments and no superfluous components such as fragrances and dyes should be included in them. Since most products have ingredients with different solubility characteristics, a suitable combination of cleaning agents would be more effective. If a detergent or soap is used for cleaning, consid er and determine the difficulty that may arise at the time of testing for residues. Separate validation of remova l f cleaning agents is not required if the removal of t he cleaning agent is included in the validation of the equipment cleaning from process compounds. 
It is also not required for equipment producing only early intermediates or other residues of chemically synthesized APIs (1, 4, 9, 12, 13, 14). Cleaning agent parameters to be evaluated: i. Easily removable (some detergents leave persistent residues, such as cationic detergents, which adhere very strongly to glass and are difficult to remove). ii. The possibility of detergent breakdown should be considered when a cleaning procedure is being validated. Additionally, strong acids and alkalis used during the cleaning process may result in product breakdown, which needs to be considered during cleaning validation. iii. Materials normally used in the process are preferable. iv. The design and construction of equipment and surface materials to be cleaned. v. Ease of detection. vi. Solubility properties of the worst-case product (not only the API, possibly present in small quantity, but also all the substances present in the formulation). vii. Environmental considerations. viii. Health and safety considerations. ix. Knowledge gained through experience. x. Manufacturer's recommendation. xi. The minimum temperature and volume of cleaning agent and rinse solution. xii. Availability, etc. (1, 5, 15, 16, 17) Acceptable amount of cleaning agents: The limit for detergents and cleaning agents, following cleaning, is calculated based on the LD50 value or the 10 ppm criterion, whichever is the lowest. LD50 can represent the toxicological properties of cleaning agents, but cleaning agents generally accepted in pharmaceuticals feature relatively high LD50 values, which leads to the calculation of high acceptable quantities of residues. Therefore, it is reasonable to select the lower amount between the LD50 and ppm criteria. Another assessment can be carried out such that the amount of the residue does not exceed the detection limit of the method of analysis for the relevant detergent substance. Calculation of cleaning agent residues based on the 10 ppm criterion is the same as the calculation of API residues based on this criterion. Limit calculation for cleaning agent residues: Calculation of the maximum acceptable residue: ADI (mg) = LD50 (mg/kg) × 5×10^-4 × 70 kg / SF, and MACO (mg) = ADI (mg) × B (g) / D (g), where ADI = Acceptable Daily Intake, SF = Safety Factor (applied to account for the route of administration), B = batch size of the subsequent product, and D = daily dose of the subsequent product. 5×10^-4 times the LD50 has no measurable pharmacological effects on humans, and 70 kg is the average body weight of an adult (18). Personnel: Because a manual procedure is an inherently variable method, operators carrying out this method should be properly trained, monitored, and periodically assessed. All training carried out should be recorded (1). Suitable working clothing is also important to prevent spreading particles and dust. Since some potentially harmful organisms can be transferred by personnel and products,",
"title": ""
},
{
"docid": "1805f6d72d15eca27de9ff5ed273b064",
"text": "BACKGROUND\nThis study, conducted between 1998 and 2001 and analyzed in 2002 and 2003, was designed to test (1) whether exercise is an efficacious treatment for mild to moderate major depressive disorder (MDD), and (2) the dose-response relation of exercise and reduction in depressive symptoms.\n\n\nDESIGN\nThe study was a randomized 2x2 factorial design, plus placebo control.\n\n\nSETTING/PARTICIPANTS\nAll exercise was performed in a supervised laboratory setting with adults (n =80) aged 20 to 45 years diagnosed with mild to moderate MDD.\n\n\nINTERVENTION\nParticipants were randomized to one of four aerobic exercise treatment groups that varied total energy expenditure (7.0 kcal/kg/week or 17.5 kcal/kg/week) and frequency (3 days/week or 5 days/week) or to exercise placebo control (3 days/week flexibility exercise). The 17.5-kcal/kg/week dose is consistent with public health recommendations for physical activity and was termed \"public health dose\" (PHD). The 7.0-kcal/kg/week dose was termed \"low dose\" (LD).\n\n\nMAIN OUTCOME MEASURES\nThe primary outcome was the score on the 17-item Hamilton Rating Scale for Depression (HRSD(17)).\n\n\nRESULTS\nThe main effect of energy expenditure in reducing HRSD(17) scores at 12 weeks was significant. Adjusted mean HRSD(17) scores at 12 weeks were reduced 47% from baseline for PHD, compared with 30% for LD and 29% for control. There was no main effect of exercise frequency at 12 weeks.\n\n\nCONCLUSIONS\nAerobic exercise at a dose consistent with public health recommendations is an effective treatment for MDD of mild to moderate severity. A lower dose is comparable to placebo effect.",
"title": ""
},
{
"docid": "7ac1412d56f00fd2defb4220938d9346",
"text": "Coingestion of protein with carbohydrate (CHO) during recovery from exercise can affect muscle glycogen synthesis, particularly if CHO intake is suboptimal. Another potential benefit of protein feeding is an increased synthesis rate of muscle proteins, as is well documented after resistance exercise. In contrast, the effect of nutrient manipulation on muscle protein kinetics after aerobic exercise remains largely unexplored. We tested the hypothesis that ingesting protein with CHO after a standardized 2-h bout of cycle exercise would increase mixed muscle fractional synthetic rate (FSR) and whole body net protein balance (WBNB) vs. trials matched for total CHO or total energy intake. We also examined whether postexercise glycogen synthesis could be enhanced by adding protein or additional CHO to a feeding protocol that provided 1.2 g CHO x kg(-1) x h(-1), which is the rate generally recommended to maximize this process. Six active men ingested drinks during the first 3 h of recovery that provided either 1.2 g CHO.kg(-1).h(-1) (L-CHO), 1.2 g CHO + 0.4 g protein x kg(-1) x h(-1) (PRO-CHO), or 1.6 g CHO x kg(-1) x h(-1) (H-CHO) in random order. Based on a primed constant infusion of l-[ring-(2)H(5)]phenylalanine, analysis of biopsies (vastus lateralis) obtained at 0 and 4 h of recovery showed that muscle FSR was higher (P < 0.05) in PRO-CHO (0.09 +/- 0.01%/h) vs. both L-CHO (0.07 +/- 0.01%/h) and H-CHO (0.06 +/- 0.01%/h). WBNB assessed using [1-(13)C]leucine was positive only during PRO-CHO, and this was mainly attributable to a reduced rate of protein breakdown. Glycogen synthesis rate was not different between trials. We conclude that ingesting protein with CHO during recovery from aerobic exercise increased muscle FSR and improved WBNB, compared with feeding strategies that provided CHO only and were matched for total CHO or total energy intake. However, adding protein or additional CHO to a feeding strategy that provided 1.2 g CHO x kg(-1) x h(-1) did not further enhance glycogen resynthesis during recovery.",
"title": ""
},
{
"docid": "5588fd19a3d0d73598197ad465315fd6",
"text": "The growing need for Chinese natural language processing (NLP) is largely in a range of research and commercial applications. However, most of the currently Chinese NLP tools or components still have a wide range of issues need to be further improved and developed. FudanNLP is an open source toolkit for Chinese natural language processing (NLP), which uses statistics-based and rule-based methods to deal with Chinese NLP tasks, such as word segmentation, part-ofspeech tagging, named entity recognition, dependency parsing, time phrase recognition, anaphora resolution and so on.",
"title": ""
},
{
"docid": "73f5e4d9011ce7115fd7ff0be5974a14",
"text": "In this work we present, apply, and evaluate a novel, interactive visualization model for comparative analysis of structural variants and rearrangements in human and cancer genomes, with emphasis on data integration and uncertainty visualization. To support both global trend analysis and local feature detection, this model enables explorations continuously scaled from the high-level, complete genome perspective, down to the low-level, structural rearrangement view, while preserving global context at all times. We have implemented these techniques in Gremlin, a genomic rearrangement explorer with multi-scale, linked interactions, which we apply to four human cancer genome data sets for evaluation. Using an insight-based evaluation methodology, we compare Gremlin to Circos, the state-of-the-art in genomic rearrangement visualization, through a small user study with computational biologists working in rearrangement analysis. Results from user study evaluations demonstrate that this visualization model enables more total insights, more insights per minute, and more complex insights than the current state-of-the-art for visual analysis and exploration of genome rearrangements.",
"title": ""
},
{
"docid": "9665d430c2483451ee705f0263c151a0",
"text": "Radio spectrum needed for applications such as mobile telephony, digital video broadcasting (DVB), wireless local area networks (WiFi), wireless sensor networks (ZigBee), and internet of things is enormous and continues to grow exponentially. Since spectrum is limited and the current usage can be inefficient, cognitive radio paradigm has emerged to exploit the licensed and/or underutilized spectrum much more effectively. In this article, we present the motivation for and details of cognitive radio. A critical requirement for cognitive radio is the accurate, real-time estimation of spectrum usage. We thus review various spectrum sensing techniques, propagation effects, interference modeling, spatial randomness, upper layer details, and several existing cognitive radio standards.",
"title": ""
},
{
"docid": "8308fe89676df668e66287a44103980b",
"text": "Designing discriminative powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distribution of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP based texture information provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state-of-the-art for remote sensing scene classification.",
"title": ""
},
{
"docid": "18c5c1795f910d34b831968698c7ea07",
"text": "The growing demand for always-on and low-latency cloud services is driving the creation of globally distributed datacenters. A major factor affecting service availability is reliability of the network, both inside the datacenters and wide-area links connecting them. While several research efforts focus on building scale-out datacenter networks, little has been reported on real network failures and how they impact geo-distributed services. This paper makes one of the first attempts to characterize intra-datacenter and inter-datacenter network failures from a service perspective. We describe a large-scale study analyzing and correlating failure events over three years across multiple datacenters and thousands of network elements such as Access routers, Aggregation switches, Top-of-Rack switches, and long-haul links. Our study reveals several important findings on (a) the availability of network domains, (b) root causes, (c) service impact, (d) effectiveness of repairs, and (e) modeling failures. Finally, we outline steps based on existing network mechanisms to improve service availability.",
"title": ""
},
{
"docid": "4d6bd155102e7431d17f651dc124ffc2",
"text": "Probiotic microorganisms are generally considered to beneficially affect host health when used in adequate amounts. Although generally used in dairy products, they are also widely used in various commercial food products such as fermented meats, cereals, baby foods, fruit juices, and ice creams. Among lactic acid bacteria, Lactobacillus and Bifidobacterium are the most commonly used bacteria in probiotic foods, but they are not resistant to heat treatment. Probiotic food diversity is expected to be greater with the use of probiotics, which are resistant to heat treatment and gastrointestinal system conditions. Bacillus coagulans (B. coagulans) has recently attracted the attention of researchers and food manufacturers, as it exhibits characteristics of both the Bacillus and Lactobacillus genera. B. coagulans is a spore-forming bacterium which is resistant to high temperatures with its probiotic activity. In addition, a large number of studies have been carried out on the low-cost microbial production of industrially valuable products such as lactic acid and various enzymes of B. coagulans which have been used in food production. In this review, the importance of B. coagulans in food industry is discussed. Moreover, some studies on B. coagulans products and the use of B. coagulans as a probiotic in food products are summarized.",
"title": ""
}
] |
scidocsrr
|
8ce82b7b38d42358c49e8b268e524410
|
Fabric Defect Detection Based on Wavelet Decomposition with One Resolution Level
|
[
{
"docid": "b29caaa973e60109fbc2f68e0eb562a6",
"text": "This correspondence introduces a new approach to characterize textures at multiple scales. The performance of wavelet packet spaces are measured in terms of sensitivity and selectivity for the classification of twenty-five natural textures. Both energy and entropy metrics were computed for each wavelet packet and incorporated into distinct scale space representations, where each wavelet packet (channel) reflected a specific scale and orientation sensitivity. Wavelet packet representations for twenty-five natural textures were classified without error by a simple two-layer network classifier. An analyzing function of large regularity ( 0 2 0 ) was shown to be slightly more efficient in representation and discrimination than a similar function with fewer vanishing moments (Ds) . In addition, energy representations computed from the standard wavelet decomposition alone (17 features) provided classification without error for the twenty-five textures included in our study. The reliability exhibited by texture signatures based on wavelet packets analysis suggest that the multiresolution properties of such transforms are beneficial for accomplishing segmentation, classification and subtle discrimination of texture.",
"title": ""
}
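A minimal sketch of the energy-signature idea described in the passage above, using the standard 2-D wavelet decomposition from PyWavelets; the wavelet choice, decomposition depth, and random test patch are placeholder assumptions rather than the paper's configuration.

```python
import numpy as np
import pywt

def wavelet_energy_features(img, wavelet="db2", level=3):
    """Energy of each subband of a standard 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)
    feats = [np.sum(coeffs[0] ** 2)]             # approximation subband
    for (c_h, c_v, c_d) in coeffs[1:]:           # detail subbands per level
        feats.extend([np.sum(c_h ** 2), np.sum(c_v ** 2), np.sum(c_d ** 2)])
    return np.asarray(feats)

rng = np.random.default_rng(0)
texture_patch = rng.normal(size=(64, 64))
print(wavelet_energy_features(texture_patch).shape)  # (1 + 3 * level,) == (10,)
```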
] |
[
{
"docid": "179675ecf9ef119fcb0bc512995e2920",
"text": "There is little evidence available on the use of robot-assisted therapy in subacute stroke patients. A randomized controlled trial was carried out to evaluate the short-time efficacy of intensive robot-assisted therapy compared to usual physical therapy performed in the early phase after stroke onset. Fifty-three subacute stroke patients at their first-ever stroke were enrolled 30 ± 7 days after the acute event and randomized into two groups, both exposed to standard therapy. Additional 30 sessions of robot-assisted therapy were provided to the Experimental Group. Additional 30 sessions of usual therapy were provided to the Control Group. The following impairment evaluations were performed at the beginning (T0), after 15 sessions (T1), and at the end of the treatment (T2): Fugl-Meyer Assessment Scale (FM), Modified Ashworth Scale-Shoulder (MAS-S), Modified Ashworth Scale-Elbow (MAS-E), Total Passive Range of Motion-Shoulder/Elbow (pROM), and Motricity Index (MI). Evidence of significant improvements in MAS-S (p = 0.004), MAS-E (p = 0.018) and pROM (p < 0.0001) was found in the Experimental Group. Significant improvement was demonstrated in both Experimental and Control Group in FM (EG: p < 0.0001, CG: p < 0.0001) and MI (EG: p < 0.0001, CG: p < 0.0001), with an higher improvement in the Experimental Group. Robot-assisted upper limb rehabilitation treatment can contribute to increasing motor recovery in subacute stroke patients. Focusing on the early phase of stroke recovery has a high potential impact in clinical practice.",
"title": ""
},
{
"docid": "d65e4b79ae580d3b8572c1746357f854",
"text": "We present a large-scale object detection system by team PFDet. Our system enables training with huge datasets using 512 GPUs, handles sparsely verified classes, and massive class imbalance. Using our method, we achieved 2nd place in the Google AI Open Images Object Detection Track 2018 on Kaggle. 1",
"title": ""
},
{
"docid": "05b1be7a90432eff4b62675826b77e09",
"text": "People invest time, attention, and emotion while engaging in various activities in the real-world, for either purposes of awareness or participation. Social media platforms such as Twitter offer tremendous opportunities for people to become engaged in such real-world events through information sharing and communicating about these events. However, little is understood about the factors that affect people’s Twitter engagement in such real-world events. In this paper, we address this question by first operationalizing a person’s Twitter engagement in real-world events such as posting, retweeting, or replying to tweets about such events. Next, we construct statistical models that examine multiple predictive factors associated with four different perspectives of users’ Twitter engagement, and quantify their potential influence on predicting the (i) presence; and (ii) degree – of the user’s engagement with 643 real-world events. We also consider the effect of these factors with respect to a finer granularization of the different categories of events. We find that the measures of people’s prior Twitter activities, topical interests, geolocation, and social network structures are all variously correlated to their engagement with real-world events.",
"title": ""
},
{
"docid": "d6f278b9c9cc72a85c94659729b143bc",
"text": "Diet and physical activity are known as important lifestyle factors in self-management and prevention of many chronic diseases. Mobile sensors such as accelerometers have been used to measure physical activity or detect eating time. In many intervention studies, however, stringent monitoring of overall dietary composition and energy intake is needed. Currently, such a monitoring relies on self-reported data by either entering text or taking an image that represents food intake. These approaches suffer from limitations such as low adherence in technology adoption and time sensitivity to the diet intake context. In order to address these limitations, we introduce development and validation of Speech2Health, a voice-based mobile nutrition monitoring system that devises speech processing, natural language processing (NLP), and text mining techniques in a unified platform to facilitate nutrition monitoring. After converting the spoken data to text, nutrition-specific data are identified within the text using an NLP-based approach that combines standard NLP with our introduced pattern mapping technique. We then develop a tiered matching algorithm to search the food name in our nutrition database and accurately compute calorie intake values. We evaluate Speech2Health using real data collected with 30 participants. Our experimental results show that Speech2Health achieves an accuracy of 92.2% in computing calorie intake. Furthermore, our user study demonstrates that Speech2Health achieves significantly higher scores on technology adoption metrics compared to text-based and image-based nutrition monitoring. Our research demonstrates that new sensor modalities such as voice can be used either standalone or as a complementary source of information to existing modalities to improve the accuracy and acceptability of mobile health technologies for dietary composition monitoring.",
"title": ""
},
{
"docid": "8384e50dad9ed96ff8610dc007c89e97",
"text": "Opinion Mining is a process of automatic extraction of knowledge from the opinion of others about some particular topic or problem. The idea of Opinion mining and Sentiment Analysis tool is to “process a set of search results for a given item, generating a list of product attributes (quality, features etc.) and aggregating opinion”. But with the passage of time more interesting applications and developments came into existence in this area and now its main goal is to make computer able to recognize and generate emotions like human. This paper will try to focus on the basic definitions of Opinion Mining, analysis of linguistic resources required for Opinion Mining, few machine learning techniques on the basis of their usage and importance for the analysis, evaluation of Sentiment classifications and its various applications. KeywordsSentiment Mining, Opinion Mining, Text Classification.",
"title": ""
},
{
"docid": "4a9913930e2e07b867cc701b07e88eaa",
"text": "There is little doubt that the incidence of depression in Britain is increasing. According to research at the Universities of London and Warwick, the incidence of depression among young people has doubled in the past 12 years. However, whether young or old, the question is why and what can be done? There are those who argue that the increasingly common phenomenon of depression is primarily psychological, and best dealt with by counselling. There are others who consider depression as a biochemical phenomenon, best dealt with by antidepressant medication. However, there is a third aspect to the onset and treatment of depression that is given little heed: nutrition. Why would nutrition have anything to do with depression? Firstly, we have seen a significant decline in fruit and vegetable intake (rich in folic acid), in fish intake (rich in essential fats) and an increase in sugar consumption, from 2 lb a year in the 1940s to 150 lb a year in many of today’s teenagers. Each of these nutrients is strongly linked to depression and could, theoretically, contribute to increasing rates of depression. Secondly, if depression is a biochemical imbalance it makes sense to explore how the brain normalises its own biochemistry, using nutrients as the precursors for key neurotransmitters such as serotonin. Thirdly, if 21st century living is extra-stressful, it would be logical to assume that increasing psychological demands would also increase nutritional requirements since the brain is structurally and functionally completely dependent on nutrients. So, what evidence is there to support suboptimal nutrition as a potential contributor to depression? These are the common imbalances connected to nutrition that are known to worsen your mood and motivation:",
"title": ""
},
{
"docid": "c62a2f7fae5d56617b71ffc070a30839",
"text": "Digitization brings new possibilities to ease our daily life activities by the means of assistive technology. Amazon Alexa, Microsoft Cortana, Samsung Bixby, to name only a few, heralded the age of smart personal assistants (SPAs), personified agents that combine artificial intelligence, machine learning, natural language processing and various actuation mechanisms to sense and influence the environment. However, SPA research seems to be highly fragmented among different disciplines, such as computer science, human-computer-interaction and information systems, which leads to ‘reinventing the wheel approaches’ and thus impede progress and conceptual clarity. In this paper, we present an exhaustive, integrative literature review to build a solid basis for future research. We have identified five functional principles and three research domains which appear promising for future research, especially in the information systems field. Hence, we contribute by providing a consolidated, integrated view on prior research and lay the foundation for an SPA classification scheme.",
"title": ""
},
{
"docid": "144c11393bef345c67595661b5b20772",
"text": "BACKGROUND\nAppropriate placement of the bispectral index (BIS)-vista montage for frontal approach neurosurgical procedures is a neuromonitoring challenge. The standard bifrontal application interferes with the operative field; yet to date, no other placements have demonstrated good agreement. The purpose of our study was to compare the standard BIS montage with an alternate BIS montage across the nasal dorsum for neuromonitoring.\n\n\nMATERIALS AND METHODS\nThe authors performed a prospective study, enrolling patients and performing neuromonitoring using both the standard and the alternative montage on each patient. Data from the 2 placements were compared and analyzed using a Bland-Altman analysis, a Scatter plot analysis, and a matched-pair analysis.\n\n\nRESULTS\nOverall, 2567 minutes of data from each montage was collected on 28 subjects. Comparing the overall difference in score, the alternate BIS montage score was, on average, 2.0 (6.2) greater than the standard BIS montage score (P<0.0001). The Bland-Altman analysis revealed a difference in score of -2.0 (95% confidence interval, -14.1, 10.1), with 108/2567 (4.2%) of the values lying outside of the limit of agreement. The scatter plot analysis overall produced a trend line with the equation y=0.94x+0.82, with an R coefficient of 0.82.\n\n\nCONCLUSIONS\nWe determined that the nasal montage produces values that have slightly more variability compared with that ideally desired, but the variability is not clinically significant. In cases where the standard BIS-vista montage would interfere with the operative field, an alternative positioning of the BIS montage across the nasal bridge and under the eye can be used.",
"title": ""
},
{
"docid": "3947250417c1c5715128533125633d9f",
"text": "Face recognition has received substantial attention from researches in biometrics, pattern recognition field and computer vision communities. Face recognition can be applied in Security measure at Air ports, Passport verification, Criminals list verification in police department, Visa processing , Verification of Electoral identification and Card Security measure at ATM’s. In this paper, a face recognition system for personal identification and verification using Principal Component Analysis (PCA) with Back Propagation Neural Networks (BPNN) is proposed. This system consists on three basic steps which are automatically detect human face image using BPNN, the various facial features extraction, and face recognition are performed based on Principal Component Analysis (PCA) with BPNN. The dimensionality of face image is reduced by the PCA and the recognition is done by the BPNN for efficient and robust face recognition. In this paper also focuses on the face database with different sources of variations, especially Pose, Expression, Accessories, Lighting and backgrounds would be used to advance the state-of-the-art face recognition technologies aiming at practical applications",
"title": ""
},
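A compact sketch of the PCA-plus-neural-network pipeline described above, using scikit-learn's PCA (eigenfaces) and a small backpropagation-trained MLP standing in for the paper's BPNN; the dataset, number of components, and network size are assumptions.

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Olivetti faces: 400 images of 40 subjects, 64x64 pixels flattened to 4096 values.
faces = fetch_olivetti_faces()
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

# PCA ("eigenfaces") reduces dimensionality before classification.
pca = PCA(n_components=60, whiten=True, random_state=0).fit(X_tr)

# A small backpropagation-trained network stands in for the BPNN.
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=1000, random_state=0)
clf.fit(pca.transform(X_tr), y_tr)
print("test accuracy:", clf.score(pca.transform(X_te), y_te))
```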
{
"docid": "dc33e4c6352c885fb27e08fa1c310fb3",
"text": "Association rule mining algorithm is used to extract relevant information from database and transmit into simple and easiest form. Association rule mining is used in large set of data. It is used for mining frequent item sets in the database or in data warehouse. It is also one type of data mining procedure. In this paper some of the association rule mining algorithms such as apriori, partition, FP-growth, genetic algorithm etc., can be analyzed for generating frequent itemset in an effective manner. These association rule mining algorithms may differ depend upon their performance and effective pattern generation. So, this paper may concentrate on some of the algorithms used to generate efficient frequent itemset using some of association rule mining algorithms.",
"title": ""
},
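To make the frequent-itemset mining discussed above concrete, here is a tiny pure-Python Apriori sketch over a toy transaction list; the transactions and support threshold are invented for illustration and the implementation skips the usual subset-pruning optimizations.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return {frozenset(itemset): support_count} for all frequent itemsets."""
    transactions = [frozenset(t) for t in transactions]
    # Frequent 1-itemsets.
    singletons = {frozenset([item]) for t in transactions for item in t}
    k_frequent = {}
    for cand in singletons:
        count = sum(1 for t in transactions if cand <= t)
        if count >= min_support:
            k_frequent[cand] = count
    frequent = {}
    while k_frequent:
        frequent.update(k_frequent)
        prev = list(k_frequent)
        k = len(prev[0]) + 1
        # Join step: build candidate k-itemsets from frequent (k-1)-itemsets.
        candidates = {a | b for a, b in combinations(prev, 2) if len(a | b) == k}
        k_frequent = {}
        for cand in candidates:
            count = sum(1 for t in transactions if cand <= t)
            if count >= min_support:
                k_frequent[cand] = count
    return frequent

baskets = [{"milk", "bread"}, {"milk", "diapers", "beer"},
           {"bread", "diapers", "beer"}, {"milk", "bread", "diapers", "beer"}]
for itemset, count in sorted(apriori(baskets, min_support=2).items(),
                             key=lambda kv: (len(kv[0]), sorted(kv[0]))):
    print(sorted(itemset), count)
```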
{
"docid": "2633e4567f94af48031155c38af951dc",
"text": "Teaching in the clinical environment is a demanding, complex and often frustrating task, a task many clinicians assume without adequate preparation or orientation. Twelve roles have previously been described for medical teachers, grouped into six major tasks: (1) the information provider; (2) the role model; (3) the facilitator; (4) the assessor; (5) the curriculum and course planner; and (6) the resource material creator (Harden & Crosby 2000). It is clear that many of these roles require a teacher to be more than a medical expert. In a pure educational setting, teachers may have limited roles, but the clinical teacher often plays many roles simultaneously, switching from one role to another during the same encounter. The large majority of clinical teachers around the world have received rigorous training in medical knowledge and skills but little to none in teaching. As physicians become ever busier in their own clinical practice, being effective teachers becomes more challenging in the context of expanding clinical responsibilities and shrinking time for teaching (Prideaux et al. 2000). Clinicians on the frontline are often unaware of educational mandates from licensing and accreditation bodies as well as medical schools and postgraduate training programmes and this has major implications for staff training. Institutions need to provide necessary orientation and training for their clinical teachers. This Guide looks at the many challenges for teachers in the clinical environment, application of relevant educational theories to the clinical context and practical teaching tips for clinical teachers. This guide will concentrate on the hospital setting as teaching within the community is the subject of another AMEE guide.",
"title": ""
},
{
"docid": "0d12680a9e8b4e158e566154e362ae18",
"text": "The paper deals with the small-signal stability analysis of aircraft ac frequency-wild power systems representing a real ac-dc hybrid distribution architecture with a multiplicity of actuators, aircraft loads, and bus geometries. The dq modelling approach is applied to derive individual power system component models and to constitute the corresponding generalized power system model as a powerful and flexible stability analysis tool. The element models can be interconnected in an algorithmic way according to a variety of the architecture selected. Intensive time-domain simulation and experimental results are used to verify the theoretical results. It is also shown how the proposed approach can be used to predict instability due to possible variations in operating points and system parameters.",
"title": ""
},
{
"docid": "82bea5203ab102bbef0b8663d999abb2",
"text": "This paper proposes a novel simplified two-diode model of a photovoltaic (PV) module. The main aim of this study is to represent a PV module as an ideal two-diode model. In order to reduce computational time, the proposed model has a photocurrent source, i.e., two ideal diodes, neglecting the series and shunt resistances. Only four unknown parameters from the datasheet are required in order to analyze the proposed model. The simulation results that are obtained by MATLAB/Simulink are validated with experimental data of a commercial PV module, using different PV technologies such as multicrystalline and monocrystalline, supplied by the manufacturer. It is envisaged that this work can be useful for professionals who require a simple and accurate PV simulator for their design.",
"title": ""
},
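A small numerical sketch in the spirit of the ideal two-diode formulation described above: a photocurrent source in parallel with two ideal diodes and no series or shunt resistance, so the module current is an explicit function of voltage. The cell count, ideality factors, and saturation currents below are illustrative assumptions, not parameters from the paper or any datasheet.

```python
import numpy as np

Q, K = 1.602176634e-19, 1.380649e-23   # electron charge [C], Boltzmann constant [J/K]

def ideal_two_diode_current(v, i_ph, i_01, i_02, n_cells=60, temp_c=25.0, a1=1.0, a2=2.0):
    """Module current for an ideal two-diode model (series/shunt resistances neglected)."""
    vt = K * (temp_c + 273.15) / Q      # thermal voltage of a single cell
    v_cell = v / n_cells                # identical series-connected cells assumed
    return (i_ph
            - i_01 * (np.exp(v_cell / (a1 * vt)) - 1.0)
            - i_02 * (np.exp(v_cell / (a2 * vt)) - 1.0))

# Sweep the I-V curve of a hypothetical 60-cell module.
v = np.linspace(0.0, 40.0, 400)
i = ideal_two_diode_current(v, i_ph=8.2, i_01=1e-10, i_02=1e-6)
print("approximate open-circuit voltage [V]:", round(float(v[np.argmin(np.abs(i))]), 2))
print("approximate maximum power [W]:", round(float((v * i).max()), 1))
```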
{
"docid": "ddd15d0b877d3f9ae8c8cb104178fcf0",
"text": "Senescence, defined as irreversible cell-cycle arrest, is the main driving force of aging and age-related diseases. Here, we performed high-throughput screening to identify compounds that alleviate senescence and identified the ataxia telangiectasia mutated (ATM) inhibitor KU-60019 as an effective agent. To elucidate the mechanism underlying ATM's role in senescence, we performed a yeast two-hybrid screen and found that ATM interacted with the vacuolar ATPase V1 subunits ATP6V1E1 and ATP6V1G1. Specifically, ATM decreased E-G dimerization through direct phosphorylation of ATP6V1G1. Attenuation of ATM activity restored the dimerization, thus consequently facilitating assembly of the V1 and V0 domains with concomitant reacidification of the lysosome. In turn, this reacidification induced the functional recovery of the lysosome/autophagy system and was coupled with mitochondrial functional recovery and metabolic reprogramming. Together, our data reveal a new mechanism through which senescence is controlled by the lysosomal-mitochondrial axis, whose function is modulated by the fine-tuning of ATM activity.",
"title": ""
},
{
"docid": "61406f27199acc5f034c2721d66cda89",
"text": "Fischler PER •Sequence of tokens mapped to word embeddings. •Bidirectional LSTM builds context-dependent representations for each word. •A small feedforward layer encourages generalisation. •Conditional Random Field (CRF) at the top outputs the most optimal label sequence for the sentence. •Using character-based dynamic embeddings (Rei et al., 2016) to capture morphological patterns and unseen words.",
"title": ""
},
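A minimal PyTorch sketch of the tagging architecture outlined above: word embeddings, a bidirectional LSTM, a small feed-forward layer, and per-token label scores. The CRF output layer and the character-based embeddings are omitted for brevity, and all layer sizes are placeholder assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, n_labels, emb_dim=100, hidden_dim=128, ff_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Small feed-forward layer on top of the concatenated LSTM states.
        self.ff = nn.Sequential(nn.Linear(2 * hidden_dim, ff_dim), nn.Tanh())
        # Per-token emission scores; a CRF layer would normally sit on top.
        self.out = nn.Linear(ff_dim, n_labels)

    def forward(self, token_ids):               # (batch, seq_len)
        states, _ = self.bilstm(self.embed(token_ids))
        return self.out(self.ff(states))        # (batch, seq_len, n_labels)

model = BiLSTMTagger(vocab_size=5000, n_labels=9)
dummy_batch = torch.randint(0, 5000, (2, 12))   # two sentences of 12 tokens
print(model(dummy_batch).shape)                 # torch.Size([2, 12, 9])
```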
{
"docid": "6bc5c22383dff2dce6eb210b71b0a583",
"text": "This paper focuses on routing for vehicles getting access to infrastructure either directly or via multiple hops through other vehicles. We study Routing Protocol for Low power and lossy networks (RPL), a tree-based routing protocol designed for sensor networks. Many design elements from RPL are transferable to the vehicular environment. We provide a simulation performance study of RPL and RPL tuning in VANETs. More specifically, we seek to study the impact of RPL’s various parameters and external factors (e.g., various timers and speeds) on its performance and obtain insights on RPL tuning for its use in VANETs. We then fine tune RPL and obtain performance gain over existing RPL.",
"title": ""
},
{
"docid": "7a718827578d63ff9b7187be7e486051",
"text": "In this paper, we propose an adaptive specification-based intrusion detection system (IDS) for detecting malicious unmanned air vehicles (UAVs) in an airborne system in which continuity of operation is of the utmost importance. An IDS audits UAVs in a distributed system to determine if the UAVs are functioning normally or are operating under malicious attacks. We investigate the impact of reckless, random, and opportunistic attacker behaviors (modes which many historical cyber attacks have used) on the effectiveness of our behavior rule-based UAV IDS (BRUIDS) which bases its audit on behavior rules to quickly assess the survivability of the UAV facing malicious attacks. Through a comparative analysis with the multiagent system/ant-colony clustering model, we demonstrate a high detection accuracy of BRUIDS for compliant performance. By adjusting the detection strength, BRUIDS can effectively trade higher false positives for lower false negatives to cope with more sophisticated random and opportunistic attackers to support ultrasafe and secure UAV applications.",
"title": ""
},
{
"docid": "02bc5f32c3a0abdd88d035836de479c9",
"text": "Deep learning has shown to be effective for robust and real-time monocular image relocalisation. In particular, PoseNet [22] is a deep convolutional neural network which learns to regress the 6-DOF camera pose from a single image. It learns to localize using high level features and is robust to difficult lighting, motion blur and unknown camera intrinsics, where point based SIFT registration fails. However, it was trained using a naive loss function, with hyper-parameters which require expensive tuning. In this paper, we give the problem a more fundamental theoretical treatment. We explore a number of novel loss functions for learning camera pose which are based on geometry and scene reprojection error. Additionally we show how to automatically learn an optimal weighting to simultaneously regress position and orientation. By leveraging geometry, we demonstrate that our technique significantly improves PoseNets performance across datasets ranging from indoor rooms to a small city.",
"title": ""
},
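One idea mentioned above, learning the weighting between position and orientation terms instead of hand-tuning it, can be sketched as a loss with two learnable log-variance parameters in the homoscedastic-uncertainty style; the initial values and the L1 distance functions used here are assumptions for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedPoseLoss(nn.Module):
    """loss = Lx * exp(-sx) + sx + Lq * exp(-sq) + sq, with sx and sq learned."""

    def __init__(self, init_sx=0.0, init_sq=-3.0):
        super().__init__()
        self.sx = nn.Parameter(torch.tensor(init_sx))
        self.sq = nn.Parameter(torch.tensor(init_sq))

    def forward(self, pred_xyz, true_xyz, pred_q, true_q):
        # L1 position error and L1 error between unit quaternions (assumed choices).
        l_x = (pred_xyz - true_xyz).abs().sum(dim=-1).mean()
        l_q = (F.normalize(pred_q, dim=-1) - true_q).abs().sum(dim=-1).mean()
        return l_x * torch.exp(-self.sx) + self.sx + l_q * torch.exp(-self.sq) + self.sq

criterion = LearnedPoseLoss()
xyz_hat, xyz = torch.randn(4, 3), torch.randn(4, 3)
q_hat, q = torch.randn(4, 4), F.normalize(torch.randn(4, 4), dim=-1)
print(criterion(xyz_hat, xyz, q_hat, q))   # scalar, differentiable w.r.t. sx and sq
```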
{
"docid": "91c792fac981d027ac1f2a2773674b10",
"text": "Cancer is a molecular disease associated with alterations in the genome, which, thanks to the highly improved sensitivity of mutation detection techniques, can be identified in cell-free DNA (cfDNA) circulating in blood, a method also called liquid biopsy. This is a non-invasive alternative to surgical biopsy and has the potential of revealing the molecular signature of tumors to aid in the individualization of treatments. In this review, we focus on cfDNA analysis, its advantages, and clinical applications employing genomic tools (NGS and dPCR) particularly in the field of oncology, and highlight its valuable contributions to early detection, prognosis, and prediction of treatment response.",
"title": ""
},
{
"docid": "94af221c857462b51e14f527010fccde",
"text": "The immunology of the hygiene hypothesis of allergy is complex and involves the loss of cellular and humoral immunoregulatory pathways as a result of the adoption of a Western lifestyle and the disappearance of chronic infectious diseases. The influence of diet and reduced microbiome diversity now forms the foundation of scientific thinking on how the allergy epidemic occurred, although clear mechanistic insights into the process in humans are still lacking. Here we propose that barrier epithelial cells are heavily influenced by environmental factors and by microbiome-derived danger signals and metabolites, and thus act as important rheostats for immunoregulation, particularly during early postnatal development. Preventive strategies based on this new knowledge could exploit the diversity of the microbial world and the way humans react to it, and possibly restore old symbiotic relationships that have been lost in recent times, without causing disease or requiring a return to an unhygienic life style.",
"title": ""
}
] |
scidocsrr
|
a3c76af8cf36d32456a053940e3fcf26
|
A Discriminative Approach to Topic-Based Citation Recommendation
|
[
{
"docid": "e5874c373f9bc4565249f335560023ff",
"text": "We propose a multi-wing harmonium model for mining multimedia data that extends and improves on earlier models based on two-layer random fields, which capture bidirectional dependencies between hidden topic aspects and observed inputs. This model can be viewed as an undirected counterpart of the two-layer directed models such as LDA for similar tasks, but bears significant difference in inference/learning cost tradeoffs, latent topic representations, and topic mixing mechanisms. In particular, our model facilitates efficient inference and robust topic mixing, and potentially provides high flexibilities in modeling the latent topic spaces. A contrastive divergence and a variational algorithm are derived for learning. We specialized our model to a dual-wing harmonium for captioned images, incorporating a multivariate Poisson for word-counts and a multivariate Gaussian for color histogram. We present empirical results on the applications of this model to classification, retrieval and image annotation on news video collections, and we report an extensive comparison with various extant models.",
"title": ""
}
] |
[
{
"docid": "c8d56c100db663ba532df4766e458345",
"text": "Decomposing sensory measurements into relevant parts is a fundamental prerequisite for solving complex tasks, e.g., in the field of mobile manipulation in domestic environments. In this paper, we present a fast approach to surface reconstruction in range images by means of approximate polygonal meshing. The obtained local surface information and neighborhoods are then used to 1) smooth the underlying measurements, and 2) segment the image into planar regions and other geometric primitives. An evaluation using publicly available data sets shows that our approach does not rank behind state-of-the-art algorithms while allowing to process range images at high frame rates.",
"title": ""
},
{
"docid": "ff685a2272377e3c8b3596ed92eaccd8",
"text": "The goal of control law design for haptic displays is to provide a safe and stable user interface while maximizing the operator’s sense of kinesthetic immersion in a virtual environment. This paper outlines a control design approach which guarantees the stability of a haptic interface when coupled to a broad class of human operators and virtual environments. Two-port absolute stability criteria are used to develop explicit control law design bounds for two different haptic display implementations: impedance display and admittance display. The strengths and weaknesses of each approach are illustrated through numerical and experimental results for a three degree-of-freedom device. The example highlights the ability of the proposed design procedure to handle some of the more difficult problems in control law synthesis for haptics, including structural flexibility and non-collocation of sensors and actuators. The authors are with the Department of Electrical Engineering University of Washington, Box 352500 Seattle, WA 98195-2500 * corresponding author submitted to IEEE Transactions on Control System Technology 9-7-99 2",
"title": ""
},
{
"docid": "3322fec5c8d2a92aead09817454f1927",
"text": "Reproducing experiments is an important instrument to validate previous work and build upon existing approaches. It has been tackled numerous times in different areas of science. In this paper, we introduce an empirical replicability study of three well-known algorithms for syntactic centric aspect-based opinion mining. We show that reproducing results continues to be a difficult endeavor, mainly due to the lack of details regarding preprocessing and parameter setting, as well as due to the absence of available implementations that clarify these details. We consider these are important threats to validity of the research on the field, specifically when compared to other problems in NLP where public datasets and code availability are critical validity components. We conclude by encouraging code-based research, which we think has a key role in helping researchers to understand the meaning of the state-of-the-art better and to generate continuous advances.",
"title": ""
},
{
"docid": "80541e2df85384fa15074d4178cfa4ae",
"text": "For the first time, we demonstrate the possibility of realizing low-cost mm-Wave antennas using inkjet printing of silver nano-particles. It is widely spread that fabrication of mm-Wave antennas and microwave circuits using the typical (deposit/pattern/etch) scheme is a challenging and costly process, due to the strict limitations on permissible tolerances. Such fabrication technique becomes even more challenging when dealing with flexible substrate materials, such as liquid crystal polymers. On the other hand, inkjet printing of conductive inks managed to form an emerging fabrication technology that has gained lots of attention over the last few years. Such process allows the deposition of conductive particles directly at the desired location on a substrate of interest, without need for mask productions, alignments, or etching. This means the inkjet printing of conductive materials could present the future of environment-friendly low-cost rapid manufacturing of RF circuits and antennas.",
"title": ""
},
{
"docid": "78967df4396e6d3d430f6349386debe9",
"text": "High-performance computing has recently seen a surge of interest in heterogeneous systems, with an emphasis on modern Graphics Processing Units (GPUs). These devices offer tremendous potential for performance and efficiency in important large-scale applications of computational science. However, exploiting this potential can be challenging, as one must adapt to the specialized and rapidly evolving computing environment currently exhibited by GPUs. One way of addressing this challenge is to embrace better techniques and develop tools tailored to their needs. This article presents one simple technique, GPU run-time code generation (RTCG), along with PyCUDA and PyOpenCL, two open-source toolkits that support this technique. In introducing PyCUDA and PyOpenCL, this article proposes the combination of a dynamic, high-level scripting language with the massive performance of a GPU as a compelling two-tiered computing platform, potentially offering significant performance and productivity advantages over conventional single-tier, static systems. The concept of RTCG is simple and easily implemented using existing, robust infrastructure. Nonetheless it is powerful enough to support (and encourage) the creation of custom application-specific tools by its users. The premise of the paper is illustrated by a wide range of examples where the technique has been applied with considerable success.",
"title": ""
},
{
"docid": "5a5b9f8ed901ef82878c23065835a473",
"text": "This paper describes a method for constructing a minimal deterministic finite automaton (DFA) from a regular expression. It is based on a set of graph grammar rules for combining many graphs (DFA) to obtain another desired graph (DFA). The graph grammar rules are presented in the form of a parsing algorithm that converts a regular expression R into a minimal deterministic finite automaton M such that the language accepted by DFA M is same as the language described by regular expression R. The proposed algorithm removes the dependency over the necessity of lengthy chain of conversion, that is, regular expression → NFA with ε-transitions → NFA without εtransitions → DFA → minimal DFA. Therefore the main advantage of our minimal DFA construction algorithm is its minimal intermediate memory requirements and hence, the reduced time complexity. The proposed algorithm converts a regular expression of size n in to its minimal equivalent DFA in O(n.log2n) time. In addition to the above, the time complexity is further shortened to O(n.logen) for n ≥ 75. General Terms Algorithms, Complexity of Algorithm, Regular Expression, Deterministic Finite Automata (DFA), Minimal DFA.",
"title": ""
},
{
"docid": "029649fb50a98bbefc3448e5e3eef572",
"text": "Ichthyosisis an infrequent clinical entity worldwide with an incidence of 1:600,000 births. It can be one of the two types: collodion baby and Harlequin fetus or malignant keratoma (most severe form). The clinical manifestations in either form are thick and hard skin with deep splits. Affected babies are born in a collodion membrane, a shiny waxy outer layer to the skin that is shed 10 - 14 days after birth, revealing the main symptom of the disease. The reported case is of a neonate, born to primigravida mother at seven and a half month's gestation with a birth weight of 2160 grams and Apgar score of 6/10 and 8/10 at 1 and 5 minutes respectively. Conclusively, early diagnosis of this condition can help cope and prevent serious morbidity or even mortality at time. These newborns should be monitored carefully in intensive care units by a multi-disciplinary team.",
"title": ""
},
{
"docid": "5c5c0b0a391240c17ee899290f5e4a93",
"text": "We present Paged Graph Visualization (PGV), a new semi-autonomous tool for RDF data exploration and visualization. PGV consists of two main components: a) the \"PGV explorer\" and b) the \"RDF pager\" module utilizing BRAHMS, our high per-formance main-memory RDF storage system. Unlike existing graph visualization techniques which attempt to display the entire graph and then filter out irrelevant data, PGV begins with a small graph and provides the tools to incrementally explore and visualize relevant data of very large RDF ontologies. We implemented several techniques to visualize and explore hot spots in the graph, i.e. nodes with large numbers of immediate neighbors. In response to the user-controlled, semantics-driven direction of the exploration, the PGV explorer obtains the necessary sub-graphs from the RDF pager and enables their incremental visualization leaving the previously laid out sub-graphs intact. We outline the problem of visualizing large RDF data sets, discuss our interface and its implementation, and through a controlled experiment we show the benefits of PGV.",
"title": ""
},
{
"docid": "0dd78cb46f6d2ddc475fd887a0dc687c",
"text": "Predicting items a user would like on the basis of other users’ ratings for these items has become a well-established strategy adopted by many recommendation services on the Internet. Although this can be seen as a classification problem, algorithms proposed thus far do not draw on results from the machine learning literature. We propose a representation for collaborative filtering tasks that allows the application of virtually any machine learning algorithm. We identify the shortcomings of current collaborative filtering techniques and propose the use of learning algorithms paired with feature extraction techniques that specifically address the limitations of previous approaches. Our best-performing algorithm is based on the singular value decomposition of an initial matrix of user ratings, exploiting latent structure that essentially eliminates the need for users to rate common items in order to become predictors for one another's preferences. We evaluate the proposed algorithm on a large database of user ratings for motion pictures and find that our approach significantly outperforms current collaborative filtering algorithms.",
"title": ""
},
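A toy sketch of the SVD idea described above: impute the missing entries of a small user-item rating matrix with item means, take a truncated SVD, and read predictions off the low-rank reconstruction. The ratings, rank, and imputation strategy are made up for illustration.

```python
import numpy as np

# 5 users x 4 items; 0 marks an unrated item.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4],
              [0, 1, 5, 4]], dtype=float)

rated = R > 0
item_means = R.sum(axis=0) / rated.sum(axis=0)     # every item has >= 1 rating here
filled = np.where(rated, R, item_means)            # impute missing ratings

# Rank-2 truncated SVD of the imputed matrix.
U, s, Vt = np.linalg.svd(filled, full_matrices=False)
k = 2
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Predicted rating of user 1 (0-indexed) for item 2, which was unrated.
print(round(float(R_hat[1, 2]), 2))
```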
{
"docid": "2923d1776422a1f44395f169f0d61995",
"text": "Rolling upgrade consists of upgrading progressively the servers of a distributed system to reduce service downtime.Upgrading a subset of servers requires a well-engineered cluster membership protocol to maintain, in the meantime, the availability of the system state. Existing cluster membership reconfigurations, like CoreOS etcd, rely on a primary not only for reconfiguration but also for storing information. At any moment, there can be at most one primary, whose replacement induces disruption. We propose Rollup, a non-disruptive rolling upgrade protocol with a fast consensus-based reconfiguration. Rollup relies on a candidate leader only for the reconfiguration and scalable biquorums for service requests. While Rollup implements a non-disruptive cluster membership protocol, it does not offer a full-fledged coordination service. We analyzed Rollup theoretically and experimentally on an isolated network of 26 physical machines and an Amazon EC2 cluster of 59 virtual machines. Our results show an 8-fold speedup compared to a rolling upgrade based on a primary for reconfiguration.",
"title": ""
},
{
"docid": "0f241395c49f4cbfdb211230d09d5727",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/leb.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.",
"title": ""
},
{
"docid": "eac322eae08da165b436308336aac37a",
"text": "The potential of BIM is generally recognized in the construction industry, but the practical application of BIM for management purposes is, however, still limited among contractors. The objective of this study is to review the current scheduling process of construction in light of BIM-based scheduling, and to identify how it should be incorporated into current practice. The analysis of the current scheduling processes identifies significant discrepancies between the overall and the detailed levels of scheduling. The overall scheduling process is described as an individual endeavor with limited and unsystematic sharing of knowledge within and between projects. Thus, the reuse of scheduling data and experiences are inadequate, preventing continuous improvements of the overall schedules. Besides, the overall scheduling process suffers from lack of information, caused by uncoordinated and unsynchronized overlap of the design and construction processes. Consequently, the overall scheduling is primarily based on intuition and personal experiences, rather than well founded figures of the specific project. Finally, the overall schedule is comprehensive and complex, and consequently, difficult to overview and communicate. Scheduling on the detailed level, on the other hand, follows a stipulated approach to scheduling, i.e. the Last Planner System (LPS), which is characterized by involvement of all actors in the construction phase. Thus, the major challenge when implementing BIM-based scheduling is to improve overall scheduling, which in turn, can secure a better starting point of the LPS. The study points to the necessity of involving subcontractors and manufactures in the earliest phases of the project in order to create project specific information for the overall schedule. In addition, the design process should be prioritized and coordinated with each craft, a process library should be introduced to promote transfer of knowledge and continuous improvements, and information flow between design and scheduling processes must change from push to pull.",
"title": ""
},
{
"docid": "e871e2b5bd1ed95fd5302e71f42208bf",
"text": "Chapters 2–7 make up Part II of the book: artificial neural networks. After introducing the basic concepts of neurons and artificial neuron learning rules in Chapter 2, Chapter 3 describes a particular formalism, based on signal-plus-noise, for the learning problem in general. After presenting the basic neural network types this chapter reviews the principal algorithms for error function minimization/optimization and shows how these learning issues are addressed in various supervised models. Chapter 4 deals with issues in unsupervised learning networks, such as the Hebbian learning rule, principal component learning, and learning vector quantization. Various techniques and learning paradigms are covered in Chapters 3–6, and especially the properties and relative merits of the multilayer perceptron networks, radial basis function networks, self-organizing feature maps and reinforcement learning are discussed in the respective four chapters. Chapter 7 presents an in-depth examination of performance issues in supervised learning, such as accuracy, complexity, convergence, weight initialization, architecture selection, and active learning. Par III (Chapters 8–15) offers an extensive presentation of techniques and issues in evolutionary computing. Besides the introduction to the basic concepts in evolutionary computing, it elaborates on the more important and most frequently used techniques on evolutionary computing paradigm, such as genetic algorithms, genetic programming, evolutionary programming, evolutionary strategies, differential evolution, cultural evolution, and co-evolution, including design aspects, representation, operators and performance issues of each paradigm. The differences between evolutionary computing and classical optimization are also explained. Part IV (Chapters 16 and 17) introduces swarm intelligence. It provides a representative selection of recent literature on swarm intelligence in a coherent and readable form. It illustrates the similarities and differences between swarm optimization and evolutionary computing. Both particle swarm optimization and ant colonies optimization are discussed in the two chapters, which serve as a guide to bringing together existing work to enlighten the readers, and to lay a foundation for any further studies. Part V (Chapters 18–21) presents fuzzy systems, with topics ranging from fuzzy sets, fuzzy inference systems, fuzzy controllers, to rough sets. The basic terminology, underlying motivation and key mathematical models used in the field are covered to illustrate how these mathematical tools can be used to handle vagueness and uncertainty. This book is clearly written and it brings together the latest concepts in computational intelligence in a friendly and complete format for undergraduate/postgraduate students as well as professionals new to the field. With about 250 pages covering such a wide variety of topics, it would be impossible to handle everything at a great length. Nonetheless, this book is an excellent choice for readers who wish to familiarize themselves with computational intelligence techniques or for an overview/introductory course in the field of computational intelligence. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond—Bernhard Schölkopf and Alexander Smola, (MIT Press, Cambridge, MA, 2002, ISBN 0-262-19475-9). Reviewed by Amir F. Atiya.",
"title": ""
},
{
"docid": "446af0ad077943a77ac4a38fd84df900",
"text": "We investigate the manufacturability of 20-nm double-gate and FinFET devices in integrated circuits by projecting process tolerances. Two important factors affecting the sensitivity of device electrical parameters to physical variations were quantitatively considered. The quantum effect was computed using the density gradient method and the sensitivity of threshold voltage to random dopant fluctuation was studied by Monte Carlo simulation. Our results show the 3 value ofVT variation caused by discrete impurity fluctuation can be greater than 100%. Thus, engineering the work function of gate materials and maintaining a nearly intrinsic channel is more desirable. Based on a design with an intrinsic channel and ideal gate work function, we analyzed the sensitivity of device electrical parameters to several important physical fluctuations such as the variations in gate length, body thickness, and gate dielectric thickness. We found that quantum effects have great impact on the performance of devices. As a result, the device electrical behavior is sensitive to small variations of body thickness. The effect dominates over the effects produced by other physical fluctuations. To achieve a relative variation of electrical parameters comparable to present practice in industry, we face a challenge of fin width control (less than 1 nm 3 value of variation) for the 20-nm FinFET devices. The constraint of the gate length variation is about 10 15%. We estimate a tolerance of 1 2 A 3 value of oxide thickness variation and up to 30% front-back oxide thickness mismatch.",
"title": ""
},
{
"docid": "ad808ef13f173eda961b6157a766f1a9",
"text": "Written text often provides sufficient clues to identify the author, their gender, age, and other important attributes. Consequently, the authorship of training and evaluation corpora can have unforeseen impacts, including differing model performance for different user groups, as well as privacy implications. In this paper, we propose an approach to explicitly obscure important author characteristics at training time, such that representations learned are invariant to these attributes. Evaluating on two tasks, we show that this leads to increased privacy in the learned representations, as well as more robust models to varying evaluation conditions, including out-of-domain corpora.",
"title": ""
},
{
"docid": "a2dfa8007b3a13da31a768fe07393d15",
"text": "Predicting the time and effort for a software problem has long been a difficult task. We present an approach that automatically predicts the fixing effort, i.e., the person-hours spent on fixing an issue. Our technique leverages existing issue tracking systems: given a new issue report, we use the Lucene framework to search for similar, earlier reports and use their average time as a prediction. Our approach thus allows for early effort estimation, helping in assigning issues and scheduling stable releases. We evaluated our approach using effort data from the JBoss project. Given a sufficient number of issues reports, our automatic predictions are close to the actual effort; for issues that are bugs, we are off by only one hour, beating na¨ýve predictions by a factor of four.",
"title": ""
},
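The nearest-reports idea described above can be approximated without Lucene by a TF-IDF cosine-similarity search over past issue texts, averaging the recorded effort of the closest matches; the toy reports, effort values, and k are assumptions for this sketch.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_reports = [
    "NullPointerException when saving user profile",
    "Deployment fails on clustered configuration",
    "UI freezes while loading large project tree",
    "Crash when saving profile with empty avatar",
]
past_effort_hours = [4.0, 16.0, 9.0, 6.0]

new_report = "Exception thrown while saving the user profile page"

vectorizer = TfidfVectorizer(stop_words="english")
past_vecs = vectorizer.fit_transform(past_reports)
new_vec = vectorizer.transform([new_report])

# Average the effort of the k most similar earlier reports.
k = 2
sims = cosine_similarity(new_vec, past_vecs).ravel()
top = sims.argsort()[::-1][:k]
predicted_effort = sum(past_effort_hours[i] for i in top) / k
print(predicted_effort, "hours; nearest reports:", [past_reports[i] for i in top])
```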
{
"docid": "bd2391cf9b76d8f4db3e35d42042783f",
"text": "We present a novel approach for synthesizing photorealistic images of people in arbitrary poses using generative adversarial learning. Given an input image of a person and a desired pose represented by a 2D skeleton, our model renders the image of the same person under the new pose, synthesizing novel views of the parts visible in the input image and hallucinating those that are not seen. This problem has recently been addressed in a supervised manner [16, 35], i.e., during training the ground truth images under the new poses are given to the network. We go beyond these approaches by proposing a fully unsupervised strategy. We tackle this challenging scenario by splitting the problem into two principal subtasks. First, we consider a pose conditioned bidirectional generator that maps back the initially rendered image to the original pose, hence being directly comparable to the input image without the need to resort to any training image. Second, we devise a novel loss function that incorporates content and style terms, and aims at producing images of high perceptual quality. Extensive experiments conducted on the DeepFashion dataset demonstrate that the images rendered by our model are very close in appearance to those obtained by fully supervised approaches.",
"title": ""
}
] |
scidocsrr
|
73978955d2ce417fa27d24b49ff98ddb
|
Noise2Void - Learning Denoising from Single Noisy Images
|
[
{
"docid": "ab5cef5e577e2757ecf168a62a229d0b",
"text": "We design a novel network architecture for learning discriminative image models that are employed to efficiently tackle the problem of grayscale and color image denoising. Based on the proposed architecture, we introduce two different variants. The first network involves convolutional layers as a core component, while the second one relies instead on non-local filtering layers and thus it is able to exploit the inherent non-local self-similarity property of natural images. As opposed to most of the existing deep network approaches, which require the training of a specific model for each considered noise level, the proposed models are able to handle a wide range of noise levels using a single set of learned parameters, while they are very robust when the noise degrading the latent image does not match the statistics of the noise used during training. The latter argument is supported by results that we report on publicly available images corrupted by unknown noise and which we compare against solutions obtained by competing methods. At the same time the introduced networks achieve excellent results under additive white Gaussian noise (AWGN), which are comparable to those of the current state-of-the-art network, while they depend on a more shallow architecture with the number of trained parameters being one order of magnitude smaller. These properties make the proposed networks ideal candidates to serve as sub-solvers on restoration methods that deal with general inverse imaging problems such as deblurring, demosaicking, superresolution, etc.",
"title": ""
},
{
"docid": "d7acbf20753e2c9c50b2ab0683d7f03a",
"text": "In this paper, we propose a very deep fully convolutional encoding-decoding framework for image restoration such as denoising and super-resolution. The network is composed of multiple layers of convolution and de-convolution operators, learning end-to-end mappings from corrupted images to the original ones. The convolutional layers act as the feature extractor, which capture the abstraction of image contents while eliminating noises/corruptions. De-convolutional layers are then used to recover the image details. We propose to symmetrically link convolutional and de-convolutional layers with skip-layer connections, with which the training converges much faster and attains a higher-quality local optimum. First, The skip connections allow the signal to be back-propagated to bottom layers directly, and thus tackles the problem of gradient vanishing, making training deep networks easier and achieving restoration performance gains consequently. Second, these skip connections pass image details from convolutional layers to de-convolutional layers, which is beneficial in recovering the original image. Significantly, with the large capacity, we can handle different levels of noises using a single model. Experimental results show that our network achieves better performance than all previously reported state-of-the-art methods.",
"title": ""
},
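A scaled-down PyTorch sketch of the symmetric encoder-decoder pattern described above: a few convolutional layers, mirrored de-convolutional layers, and skip connections that add encoder features back in before the decoder activations. The depth and channel counts are placeholder assumptions, far smaller than the network in the paper.

```python
import torch
import torch.nn as nn

class TinyREDNet(nn.Module):
    """Symmetric conv / deconv stack with additive skip connections."""

    def __init__(self, ch=32):
        super().__init__()
        self.enc1 = nn.Conv2d(1, ch, 3, padding=1)
        self.enc2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.enc3 = nn.Conv2d(ch, ch, 3, padding=1)
        self.dec3 = nn.ConvTranspose2d(ch, ch, 3, padding=1)
        self.dec2 = nn.ConvTranspose2d(ch, ch, 3, padding=1)
        self.dec1 = nn.ConvTranspose2d(ch, 1, 3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x):
        e1 = self.act(self.enc1(x))
        e2 = self.act(self.enc2(e1))
        e3 = self.act(self.enc3(e2))
        d3 = self.act(self.dec3(e3) + e2)   # skip: add encoder features back in
        d2 = self.act(self.dec2(d3) + e1)   # before the decoder activation
        return self.dec1(d2)                # estimate of the clean image

noisy = torch.randn(1, 1, 64, 64)
print(TinyREDNet()(noisy).shape)            # torch.Size([1, 1, 64, 64])
```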
{
"docid": "7926ab6b5cd5837a9b3f59f8a1b3f5ac",
"text": "Recently, very deep convolutional neural networks (CNNs) have been attracting considerable attention in image restoration. However, as the depth grows, the longterm dependency problem is rarely realized for these very deep models, which results in the prior states/layers having little influence on the subsequent ones. Motivated by the fact that human thoughts have persistency, we propose a very deep persistent memory network (MemNet) that introduces a memory block, consisting of a recursive unit and a gate unit, to explicitly mine persistent memory through an adaptive learning process. The recursive unit learns multi-level representations of the current state under different receptive fields. The representations and the outputs from the previous memory blocks are concatenated and sent to the gate unit, which adaptively controls how much of the previous states should be reserved, and decides how much of the current state should be stored. We apply MemNet to three image restoration tasks, i.e., image denosing, super-resolution and JPEG deblocking. Comprehensive experiments demonstrate the necessity of the MemNet and its unanimous superiority on all three tasks over the state of the arts. Code is available at https://github.com/tyshiwo/MemNet.",
"title": ""
}
] |
[
{
"docid": "a076df910e5d61d07dacad420dadc242",
"text": "Recognizing objects in fine-grained domains can be extremely challenging due to the subtle differences between subcategories. Discriminative markings are often highly localized, leading traditional object recognition approaches to struggle with the large pose variation often present in these domains. Pose-normalization seeks to align training exemplars, either piecewise by part or globally for the whole object, effectively factoring out differences in pose and in viewing angle. Prior approaches relied on computationally-expensive filter ensembles for part localization and required extensive supervision. This paper proposes two pose-normalized descriptors based on computationally-efficient deformable part models. The first leverages the semantics inherent in strongly-supervised DPM parts. The second exploits weak semantic annotations to learn cross-component correspondences, computing pose-normalized descriptors from the latent parts of a weakly-supervised DPM. These representations enable pooling across pose and viewpoint, in turn facilitating tasks such as fine-grained recognition and attribute prediction. Experiments conducted on the Caltech-UCSD Birds 200 dataset and Berkeley Human Attribute dataset demonstrate significant improvements of our approach over state-of-art algorithms.",
"title": ""
},
{
"docid": "b55ac1254679d0148719cd0776da0705",
"text": "We report on an actuator based on dielectric elastomers that is capable of antagonistic actuation and passive folding. This actuator enables foldability in robots with simple structures. Unlike other antagonistic dielectric elastomer devices, our concept uses elastic hinges to allow the folding of the structure, which also provides an additional design parameter. To validate the actuator concept through a specific application test, a foldable elevon actuator with outline size of 70 mm × 130 mm is developed with angular displacement range and torque specifications matched to a 400-mm wingspan micro-air vehicle (MAV) of mass 130 g. A closed-form analytical model of the actuator is constructed, which was used to guide the actuator design. The actuator consists of 125-μm-thick silicone membranes as the dielectric elastomers, 0.2mm-thick fiberglass plate as the frame structure, and 50-μm-thick polyimide as the elastic hinge. We measured voltage-controllable angular displacement up to ±26° and torque of 2720 mN · mm at 5 kV, with good agreement between the model and the measured data. Two elevon actuators are integrated into the MAV, which was successfully flown, with the foldable actuators providing stable and well-controlled flight. The controllability was quantitatively evaluated by calculating the correlation between the control signal and the MAV motion, with a correlation in roll axis of over 0.7 measured during the flights, illustrating the high performance of this foldable actuator.",
"title": ""
},
{
"docid": "b5f8f310f2f4ed083b20f42446d27feb",
"text": "This paper provides algorithms that use an information-theoretic analysis to learn Bayesian network structures from data. Based on our three-phase learning framework, we develop efficient algorithms that can effectively learn Bayesian networks, requiring only polynomial numbers of conditional independence (CI) tests in typical cases. We provide precise conditions that specify when these algorithms are guaranteed to be correct as well as empirical evidence (from real world applications and simulation tests) that demonstrates that these systems work efficiently and reliably in practice.",
"title": ""
},
{
"docid": "eb17078285e6f528d0cd08178e1e57c2",
"text": "This paper proposes a smart queue management system for delivering real-time service request updates to clients' smartphones in the form of audio and visual feedback. The proposed system aims at reducing the dissatisfaction with services with medium to long waiting times. To this end, the system allows carriers of digital ticket to leave the waiting areas and return in time for their turn to receive service. The proposed system also improves the waiting experience of clients choosing to stay in the waiting area by connecting them to the audio signal of the often muted television sets running entertainment programs, advertisement of services, or news. The system is a web of things including connected units for registering and verifying tickets, units for capturing and streaming audio and queue management, and participating client units in the form of smartphone applications. We implemented the proposed system and verified its functionality and report on our findings and areas of improvements.",
"title": ""
},
{
"docid": "efc6c423fa98c012543352db8fb0688a",
"text": "Wireless sensor networks consist of sensor nodes with sensing and communication capabilities. We focus on data aggregation problems in energy constrained sensor networks. The main goal of data aggregation algorithms is to gather and aggregate data in an energy efficient manner so that network lifetime is enhanced. In this paper, we present a survey of data aggregation algorithms in wireless sensor networks. We compare and contrast different algorithms on the basis of performance measures such as lifetime, latency and data accuracy. We conclude with possible future research directions.",
"title": ""
},
{
"docid": "7641f8f3ed2afd0c16665b44c1216e79",
"text": "In this article we explore the behavior of Twitter users under an emergency situation. In particular, we analyze the activity related to the 2010 earthquake in Chile and characterize Twitter in the hours and days following this disaster. Furthermore, we perform a preliminary study of certain social phenomenons, such as the dissemination of false rumors and confirmed news. We analyze how this information propagated through the Twitter network, with the purpose of assessing the reliability of Twitter as an information source under extreme circumstances. Our analysis shows that the propagation of tweets that correspond to rumors differs from tweets that spread news because rumors tend to be questioned more than news by the Twitter community. This result shows that it is posible to detect rumors by using aggregate analysis on tweets.",
"title": ""
},
{
"docid": "a78c5f726ac3306528b5094b2e8e871c",
"text": "Despite widespread agreement that multi-method assessments are optimal in personality research, the literature is dominated by a single method: self-reports. This pattern seems to be based, at least in part, on widely held preconceptions about the costs of non-self-report methods, such as informant methods. Researchers seem to believe that informant methods are: (a) time-consuming, (b) expensive, (c) ineffective (i.e., informants will not cooperate), and (d) particularly vulnerable to faking or invalid responses. This article evaluates the validity of these preconceptions in light of recent advances in Internet technology, and proposes some strategies for making informant methods more effective. Drawing on data from three separate studies, I demonstrate that, using these strategies, informant reports can be collected with minimal effort and few monetary costs. In addition, informants are generally very willing to cooperate (e.g., response rates of 76–95%) and provide valid data (in terms of strong consensus and self-other agreement). Informant reports represent a mostly untapped resource that researchers can use to improve the validity of personality assessments and to address new questions that cannot be examined with self-reports alone. 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "19695936a91f2632911c9f1bee48c11d",
"text": "The purpose of this technical report is two-fold. First of all, it introduces a suite of challenging continuous control tasks (integrated with OpenAI Gym) based on currently existing robotics hardware. The tasks include pushing, sliding and pick & place with a Fetch robotic arm as well as in-hand object manipulation with a Shadow Dexterous Hand. All tasks have sparse binary rewards and follow a Multi-Goal Reinforcement Learning (RL) framework in which an agent is told what to do using an additional input. The second part of the paper presents a set of concrete research ideas for improving RL algorithms, most of which are related to Multi-Goal RL and Hindsight Experience Replay. 1 Environments All environments are released as part of OpenAI Gym1 (Brockman et al., 2016) and use the MuJoCo (Todorov et al., 2012) physics engine for fast and accurate simulation. A video presenting the new environments can be found at https://www.youtube.com/watch?v=8Np3eC_PTFo. 1.1 Fetch environments The Fetch environments are based on the 7-DoF Fetch robotics arm,2 which has a two-fingered parallel gripper. They are very similar to the tasks used in Andrychowicz et al. (2017) but we have added an additional reaching task and the pick & place task is a bit different.3 In all Fetch tasks, the goal is 3-dimensional and describes the desired position of the object (or the end-effector for reaching). Rewards are sparse and binary: The agent obtains a reward of −1 if the object is not at the target location (within a tolerance of 5 cm) and 0 otherwise. Actions are 4-dimensional: 3 dimensions specify the desired gripper movement in Cartesian coordinates and the last dimension controls opening and closing of the gripper. We apply the same action in 20 subsequent simulator steps (with ∆t = 0.002 each) before returning control to the agent, i.e. the agent’s action frequency is f = 25 Hz. Observations include the Cartesian position of the gripper, its linear velocity as well as the position and linear velocity of the robot’s gripper. If an object is present, we also include the object’s Cartesian position and rotation using Euler angles, its linear and angular velocities, as well as its position and linear velocities relative to gripper. https://github.com/openai/gym http://fetchrobotics.com/ In Andrychowicz et al. (2017) training on this task relied on starting some of the training episodes from a state in which the box is already grasped. This is not necessary for successful training if the target position of the box is sometimes in the air and sometimes on the table and we do not use this technique anymore. ar X iv :1 80 2. 09 46 4v 1 [ cs .L G ] 2 6 Fe b 20 18 Figure 1: The four proposed Fetch environments: FetchReach, FetchPush, FetchSlide, and FetchPickAndPlace. Reaching (FetchReach) The task is to move the gripper to a target position. This task is very easy to learn and is therefore a suitable benchmark to ensure that a new idea works at all.4 Pushing (FetchPush) A box is placed on a table in front of the robot and the task is to move it to a target location on the table. The robot fingers are locked to prevent grasping. The learned behavior is usually a mixture of pushing and rolling. Sliding (FetchSlide) A puck is placed on a long slippery table and the target position is outside of the robot’s reach so that it has to hit the puck with such a force that it slides and then stops at the target location due to friction. 
Pick & Place (FetchPickAndPlace) The task is to grasp a box and move it to the target location which may be located on the table surface or in the air above it. 1.2 Hand environments These environments are based on the Shadow Dexterous Hand,5 which is an anthropomorphic robotic hand with 24 degrees of freedom. Of those 24 joints, 20 can be can be controlled independently whereas the remaining ones are coupled joints. In all hand tasks, rewards are sparse and binary: The agent obtains a reward of −1 if the goal has been achieved (within some task-specific tolerance) and 0 otherwise. Actions are 20-dimensional: We use absolute position control for all non-coupled joints of the hand. We apply the same action in 20 subsequent simulator steps (with ∆t = 0.002 each) before returning control to the agent, i.e. the agent’s action frequency is f = 25 Hz. Observations include the 24 positions and velocities of the robot’s joints. In case of an object that is being manipulated, we also include its Cartesian position and rotation represented by a quaternion (hence 7-dimensional) as well as its linear and angular velocities. In the reaching task, we include the Cartesian position of all 5 fingertips. Reaching (HandReach) A simple task in which the goal is 15-dimensional and contains the target Cartesian position of each fingertip of the hand. Similarly to the FetchReach task, this task is relatively easy to learn. A goal is considered achieved if the mean distance between fingertips and their desired position is less than 1 cm. Block manipulation (HandManipulateBlock) In the block manipulation task, a block is placed on the palm of the hand. The task is to then manipulate the block such that a target pose is achieved. The goal is 7-dimensional and includes the target position (in Cartesian coordinates) and target rotation (in quaternions). We include multiple variants with increasing levels of difficulty: • HandManipulateBlockRotateZ Random target rotation around the z axis of the block. No target position. • HandManipulateBlockRotateParallel Random target rotation around the z axis of the block and axis-aligned target rotations for the x and y axes. No target position. • HandManipulateBlockRotateXYZ Random target rotation for all axes of the block. No target position. That being said, we have found that is so easy that even partially broken implementations sometimes learn successful policies, so no conclusions should be drawn from this task alone. https://www.shadowrobot.com/products/dexterous-hand/",
"title": ""
},
{
"docid": "948a05fd3ba939418670e111ec4331f2",
"text": "Traditional techniques of document clustering do not consider the semantic relationships between words when assigning documents to clusters. For instance, if two documents talking about the same topic do that using different words (which may be synonyms or semantically associated), these techniques may assign documents to different clusters. Previous research has approached this problem by enriching the document representation with the background knowledge in an ontology. This paper presents a new approach to enhance document clustering by exploiting the semantic knowledge contained in Wikipedia. We first map terms within documents to their corresponding Wikipedia concepts. Then, similarity between each pair of terms is calculated by using the Wikipedia's link structure. The document’s vector representation is then adjusted so that terms that are semantically related gain more weight. Our approach differs from related efforts in two aspects: first, unlink others who built their own methods of measuring similarity through the Wikipedia categories; our approach uses a similarity measure that is modelled after the Normalized Google Distance which is a well-known and low-cost method of measuring term similarity. Second, it is more time efficient as it applies an algorithm for phrase extraction from documents prior to matching terms with Wikipedia. Our approach was evaluated by being compared with different methods from the state of the art on two different datasets. Empirical results showed that our approach improved the clustering results as compared to other approaches.",
"title": ""
},
{
"docid": "955201c5191774ca14ea38e473bd7d04",
"text": "We advocate a relation based approach to Argumentation Mining. Our focus lies on the extraction of argumentative relations instead of the identification of arguments, themselves. By classifying pairs of sentences according to the relation that holds between them we are able to identify sentences that may be factual when considered in isolation, but carry argumentative meaning when read in context. We describe scenarios in which this is useful, as well as a corpus of annotated sentence pairs we are developing to provide a testbed for this approach.",
"title": ""
},
{
"docid": "fcba75f01ef1b311d5c4ecb4cf952620",
"text": "With the increasing interest in large-scale, high-resolution and real-time geographic information system (GIS) applications and spatial big data processing, traditional GIS is not efficient enough to handle the required loads due to limited computational capabilities.Various attempts have been made to adopt high performance computation techniques from different applications, such as designs of advanced architectures, strategies of data partition and direct parallelization method of spatial analysis algorithm, to address such challenges. This paper surveys the current state of parallel GIS with respect to parallel GIS architectures, parallel processing strategies, and relevant topics. We present the general evolution of the GIS architecture which includes main two parallel GIS architectures based on high performance computing cluster and Hadoop cluster. Then we summarize the current spatial data partition strategies, key methods to realize parallel GIS in the view of data decomposition and progress of the special parallel GIS algorithms. We use the parallel processing of GRASS as a case study. We also identify key problems and future potential research directions of parallel GIS.",
"title": ""
},
{
"docid": "dd5883895261ad581858381bec1b92eb",
"text": "PURPOSE\nTo establish the validity and reliability of a new vertical jump force test (VJFT) for the assessment of bilateral strength asymmetry in a total of 451 athletes.\n\n\nMETHODS\nThe VJFT consists of countermovement jumps with both legs simultaneously: one on a single force platform, the other on a leveled wooden platform. Jumps with the right or the left leg on the force platform were alternated. Bilateral strength asymmetry was calculated as [(stronger leg - weaker leg)/stronger leg] x 100. A positive sign indicates a stronger right leg; a negative sign indicates a stronger left leg. Studies 1 (N = 59) and 2 (N = 41) examined the correlation between the VJFT and other tests of lower-limb bilateral strength asymmetry in male athletes. In study 3, VJFT reliability was assessed in 60 male athletes. In study 4, the effect of rehabilitation on bilateral strength asymmetry was examined in seven male and female athletes 8-12 wk after unilateral knee surgery. In study 5, normative data were determined in 313 male soccer players.\n\n\nRESULTS\nSignificant correlations were found between VJFT and both the isokinetic leg extension test (r = 0.48; 95% confidence interval, 0.26-0.66) and the isometric leg press test (r = 0.83; 0.70-0.91). VJFT test-retest intraclass correlation coefficient was 0.91 (0.85-0.94), and typical error was 2.4%. The change in mean [-0.40% (-1.25 to 0.46%)] was not substantial. Rehabilitation decreased bilateral strength asymmetry (mean +/- SD) of the athletes recovering from unilateral knee surgery from 23 +/- 3 to 10 +/- 4% (P < 0.01). The range of normal bilateral strength asymmetry (2.5th to 97.5th percentiles) was -15 to 15%.\n\n\nCONCLUSIONS\nThe assessment of bilateral strength asymmetry with the VJFT is valid and reliable, and it may be useful in sports medicine.",
"title": ""
},
{
"docid": "96d5a0fb4bb0666934819d162f1b060c",
"text": "Human gait is an important indicator of health, with applications ranging from diagnosis, monitoring, and rehabilitation. In practice, the use of gait analysis has been limited. Existing gait analysis systems are either expensive, intrusive, or require well-controlled environments such as a clinic or a laboratory. We present an accurate gait analysis system that is economical and non-intrusive. Our system is based on the Kinect sensor and thus can extract comprehensive gait information from all parts of the body. Beyond standard stride information, we also measure arm kinematics, demonstrating the wide range of parameters that can be extracted. We further improve over existing work by using information from the entire body to more accurately measure stride intervals. Our system requires no markers or battery-powered sensors, and instead relies on a single, inexpensive commodity 3D sensor with a large preexisting install base. We suggest that the proposed technique can be used for continuous gait tracking at home.",
"title": ""
},
{
"docid": "23919d976b6a25dc032fa23350195713",
"text": "I interactive multimedia technologies enable online firms to employ a variety of formats to present and promote their products: They can use pictures, videos, and sounds to depict products, as well as give consumers the opportunity to try out products virtually. Despite the several previous endeavors that studied the effects of different product presentation formats, the functional mechanisms underlying these presentation methods have not been investigated in a comprehensive way. This paper investigates a model showing how these functional mechanisms (namely, vividness and interactivity) influence consumers’ intentions to return to a website and their intentions to purchase products. A study conducted to test this model has largely confirmed our expectations: (1) both vividness and interactivity of product presentations are the primary design features that influence the efficacy of the presentations; (2) consumers’ perceptions of the diagnosticity of websites, their perceptions of the compatibility between online shopping and physical shopping, and their shopping enjoyment derived from a particular online shopping experience jointly influence consumers’ attitudes toward shopping at a website; and (3) both consumers’ attitudes toward products and their attitudes toward shopping at a website contribute to their intentions to purchase the products displayed on the website.",
"title": ""
},
{
"docid": "b684109a76a9c38e7d7e79df8c3e0b3a",
"text": "This paper presents the generic concept of using cloud-based intelligent car parking services in smart cities, as an important application deployed on the Internet of Things (IoT) paradigm. The corresponding IoT sub-system includes sensor layer, communication layer, and application layer. A high-level view of the system architecture is outlined. To demonstrate the provision of car parking services with the proposed platform, a cloud-based intelligent car parking system for use within a University campus is described along with details of its design and implementation.",
"title": ""
},
{
"docid": "dd27b4cf6e0c9534f7a0b6e5e9e04b62",
"text": "We study the problem of active learning for multi-class classification on large-scale datasets. In this setting, the existing active learning approaches built upon uncertainty measures are ineffective for discovering unknown regions, and those based on expected error reduction are inefficient owing to their huge time costs. To overcome the above issues, this paper proposes a novel query selection criterion called approximated error reduction (AER). In AER, the error reduction of each candidate is estimated based on an expected impact over all datapoints and an approximated ratio between the error reduction and the impact over its nearby datapoints. In particular, we utilize hierarchical anchor graphs to construct the candidate set as well as the nearby datapoint sets of these candidates. The benefit of this strategy is that it enables a hierarchical expansion of candidates with the increase of labels, and allows us to further accelerate the AER estimation. We finally introduce AER into an efficient semi-supervised classifier for scalable active learning. Experiments on publicly available datasets with the sizes varying from thousands to millions demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "76c6ad5e97d5296a9be841c3d3552a27",
"text": "In fish as in mammals, virus infections induce changes in the expression of many host genes. Studies conducted during the last fifteen years revealed a major contribution of the interferon system in fish antiviral response. This review describes the screening methods applied to compare the impact of virus infections on the transcriptome in different fish species. These approaches identified a \"core\" set of genes that are strongly induced in most viral infections. The \"core\" interferon-induced genes (ISGs) are generally conserved in vertebrates, some of them inhibiting a wide range of viruses in mammals. A selection of ISGs -PKR, vig-1/viperin, Mx, ISG15 and finTRIMs - is further analyzed here to illustrate the diversity and complexity of the mechanisms involved in establishing an antiviral state. Most of the ISG-based pathways remain to be directly determined in fish. Fish ISGs are often duplicated and the functional specialization of multigenic families will be of particular interest for future studies.",
"title": ""
},
{
"docid": "dc64fa6178f46a561ef096fd2990ad3d",
"text": "Forest fires cost millions of dollars in damages and claim many human lives every year. Apart from preventive measures, early detection and suppression of fires is the only way to minimize the damages and casualties. We present the design and evaluation of a wireless sensor network for early detection of forest fires. We first present the key aspects in modeling forest fires. We do this by analyzing the Fire Weather Index (FWI) System, and show how its different components can be used in designing efficient fire detection systems. The FWI System is one of the most comprehensive forest fire danger rating systems in North America, and it is backed by several decades of forestry research. The analysis of the FWI System could be of interest in its own right to researchers working in the sensor network area and to sensor manufacturers who can optimize the communication and sensing modules of their products to better fit forest fire detection systems. Then, we model the forest fire detection problem as a coverage problem in wireless sensor networks, and we present a distributed algorithm to solve it. In addition, we show how our algorithm can achieve various coverage degrees at different subareas of the forest, which can be used to provide unequal monitoring quality of forest zones. Unequal monitoring is important to protect residential and industrial neighborhoods close to forests. Finally, we present a simple data aggregation scheme based on the FWI System. This data aggregation scheme significantly prolongs the network lifetime, because it only delivers the data that is of interest to the application. We validate several aspects of our design using simulation.",
"title": ""
},
{
"docid": "c3ae2b20405aa932bb5ada3874cdd29c",
"text": "In this letter, a novel compact quadrature hybrid using low-pass and high-pass lumped elements is proposed. This proposed topology enables significant circuit size reduction in comparison with former approaches applying microstrip branch line or Lange couplers. In addition, it provides wider bandwidth in terms of operational frequency, and provides more convenience to the monolithic microwave integrated circuit layout since it does not have any bulky via holes as compared to those with lumped elements that have been published. In addition, the simulation and measurement of the fabricated hybrid implemented using PHEMT processes are evidently good. With the operational bandwidth ranging from 25 to 30 GHz, the measured results of the return loss are better than 17.6 dB, and the insertion losses of coupled and direct ports are approximately 3.4plusmn0.7 dB, while the relative phase difference is approximately 92.3plusmn1.4deg. The core dimension of the circuit is 0.4 mm times 0.15 mm.",
"title": ""
},
{
"docid": "d597b9229a3f9a9c680d25180a4b6308",
"text": "Mental health problems are highly prevalent and increasing in frequency and severity among the college student population. The upsurge in mobile and wearable wireless technologies capable of intense, longitudinal tracking of individuals, provide enormously valuable opportunities in mental health research to examine temporal patterns and dynamic interactions of key variables. In this paper, we present an integrative framework for social anxiety and depression (SAD) monitoring, two of the most common disorders in the college student population. We have developed a smartphone application and the supporting infrastructure to collect both passive sensor data and active event-driven data. This supports intense, longitudinal, dynamic tracking of anxious and depressed college students to evaluate how their emotions and social behaviors change in the college campus environment. The data will provide critical information about how student mental health problems are maintained and, ultimately, how student patterns on campus shift following treatment.",
"title": ""
}
] |
scidocsrr
|
64a5327060e645d3f10787d8bf32ff57
|
A speech enhancement algorithm by iterating single- and multi-microphone processing and its application to robust ASR
|
[
{
"docid": "74ae28cf8b7f458b857b49748573709d",
"text": "Muscle fiber conduction velocity is based on the ti me delay estimation between electromyography recording channels. The aims of this study is to id entify the best estimator of generalized correlati on methods in the case where time delay is constant in order to extent these estimator to the time-varyin g delay case . The fractional part of time delay was c lculated by using parabolic interpolation. The re sults indicate that Eckart filter and Hannan Thomson (HT ) give the best results in the case where the signa l to noise ratio (SNR) is 0 dB.",
"title": ""
}
] |
[
{
"docid": "c04e3a28b6f3f527edae534101232701",
"text": "An intelligent interface for an information retrieval system has the aims of controlling an underlying information retrieval system di rectly interacting with the user and allowing him to retrieve relevant information without the support of a human intermediary Developing intelligent interfaces for information retrieval is a di cult activity and no well established models of the functions that such systems should possess are available Despite of this di culty many intelligent in terfaces for information retrieval have been implemented in the past years This paper surveys these systems with two aims to stand as a useful entry point for the existing literature and to sketch an ana lysis of the functionalities that an intelligent interface for information retrieval has to possess",
"title": ""
},
{
"docid": "8f4f687aff724496efcc37ff7f6bbbeb",
"text": "Sentiment Analysis is new way of machine learning to extract opinion orientation (positive, negative, neutral) from a text segment written for any product, organization, person or any other entity. Sentiment Analysis can be used to predict the mood of people that have impact on stock prices, therefore it can help in prediction of actual stock movement. In order to exploit the benefits of sentiment analysis in stock market industry we have performed sentiment analysis on tweets related to Apple products, which are extracted from StockTwits (a social networking site) from 2010 to 2017. Along with tweets, we have also used market index data which is extracted from Yahoo Finance for the same period. The sentiment score of a tweet is calculated by sentiment analysis of tweets through SVM. As a result each tweet is categorized as bullish or bearish. Then sentiment score and market data is used to build a SVM model to predict next day's stock movement. Results show that there is positive relation between people opinion and market data and proposed work has an accuracy of 76.65% in stock prediction.",
"title": ""
},
{
"docid": "236343ee0e1ddeb86f92fc4eea9ef145",
"text": "Introduction The coronary sinus (CS) is a critical structure that has helped aid electrophysiologists by providing information about epicardial activation along the mitral annulus. Additionally, the advent of biventricular pacing utilizes the CS venous system for lead placement. The CS collects the venous drainage of the atrial and ventricular venous branches and drains into the right atrium (RA). CS anomalies have been recognized, namely atresia of the CS ostium, unroofed CS, connection with the left atrium, and even hypoplasia of the CS. We report 3 cases of CS ostium opening in the midlateral RA. This anomaly may provide new insight into the development of the CS as well as provide an explanation for difficult cannulation of the CS.",
"title": ""
},
{
"docid": "adc70bff1b83bea1eb188d4b7c738211",
"text": "Code clones are serious code smells. To investigate bug-proneness of clones as opposed to clone-free source code, earlier attempts have studied the stability of code clones and their contributions to program faults. This paper presents a comparative study on different types of clones and non-cloned code on the basis of their vulnerabilities, which may lead to software defects and issues in future. The empirical study along this new dimension examines source code of 97 software systems and derives results based on quantitative analysis with statistical significance. The findings from this work add to our under-standing of the characteristics and impacts of clones, which can be useful in clone-aware software development and in devising techniques for minimizing the negative impacts of code clones.",
"title": ""
},
{
"docid": "5ea45a4376e228b3eacebb8dd8e290d2",
"text": "The sharing economy has quickly become a very prominent subject of research in the broader computing literature and the in human--computer interaction (HCI) literature more specifically. When other computing research areas have experienced similarly rapid growth (e.g. human computation, eco-feedback technology), early stage literature reviews have proved useful and influential by identifying trends and gaps in the literature of interest and by providing key directions for short- and long-term future work. In this paper, we seek to provide the same benefits with respect to computing research on the sharing economy. Specifically, following the suggested approach of prior computing literature reviews, we conducted a systematic review of sharing economy articles published in the Association for Computing Machinery Digital Library to investigate the state of sharing economy research in computing. We performed this review with two simultaneous foci: a broad focus toward the computing literature more generally and a narrow focus specifically on HCI literature. We collected a total of 112 sharing economy articles published between 2008 and 2017 and through our analysis of these papers, we make two core contributions: (1) an understanding of the computing community's contributions to our knowledge about the sharing economy, and specifically the role of the HCI community in these contributions (i.e.what has been done) and (2) a discussion of under-explored and unexplored aspects of the sharing economy that can serve as a partial research agenda moving forward (i.e.what is next to do).",
"title": ""
},
{
"docid": "70d820f14b4d30f03268e51db87e19f0",
"text": "Many emerging applications driven the fast development of the device-free localization DfL technique, which does not require the target to carry any wireless devices. Most current DfL approaches have two main drawbacks in practical applications. First, as the pre-calibrated received signal strength RSS in each location i.e., radio-map of a specific area cannot be directly applied to the new areas, the manual calibration for different areas will lead to a high human effort cost. Second, a large number of RSS are needed to accurately localize the targets, thus causes a high communication cost and the areas variety will further exacerbate this problem. This paper proposes FitLoc, a fine-grained and low cost DfL approach that can localize multiple targets over various areas, especially in the outdoor environment and similar furnitured indoor environment. FitLoc unifies the radio-map over various areas through a rigorously designed transfer scheme, thus greatly reduces the human effort cost. Furthermore, benefiting from the compressive sensing theory, FitLoc collects a few RSS and performs a fine-grained localization, thus reduces the communication cost. Theoretical analyses validate the effectivity of the problem formulation and the bound of localization error is provided. Extensive experimental results illustrate the effectiveness and robustness of FitLoc.",
"title": ""
},
{
"docid": "ce1c06b2e0fde07f29a19bdbdd20a894",
"text": "Incumbent firms struggle with new forms of competition in today’s increasingly digital environments. To leverage the benefits of innovation ecosystems they often shift focus from products to platforms. However, existing research provides limited insight into how firms actually implement this shift. Addressing this void, we have conducted a comparative case study where we adopt the concept of platform thinking to comprehend what capabilities incumbents need when engaging in innovation ecosystems and how those capabilities are developed.",
"title": ""
},
{
"docid": "f07d9d733aee86d67aeb8a21070f7b04",
"text": "Trading communication with redundant computation can increase the silicon efficiency of FPGAs and GPUs in accelerating communication-bound sparse iterative solvers. While k iterations of the iterative solver can be unrolled to provide O(k) reduction in communication cost, the extent of this unrolling depends on the underlying architecture, its memory model, and the growth in redundant computation. This paper presents a systematic procedure to select this algorithmic parameter k, which provides communication-computation tradeoff on hardware accelerators like FPGA and GPU. We provide predictive models to understand this tradeoff and show how careful selection of k can lead to performance improvement that otherwise demands significant increase in memory bandwidth. On an Nvidia C2050 GPU, we demonstrate a 1.9×-42.6× speedup over standard iterative solvers for a range of benchmarks and that this speedup is limited by the growth in redundant computation. In contrast, for FPGAs, we present an architecture-aware algorithm that limits off-chip communication but allows communication between the processing cores. This reduces redundant computation and allows large k and hence higher speedups. Our approach for FPGA provides a 0.3×-4.4× speedup over same-generation GPU devices where k is picked carefully for both architectures for a range of benchmarks.",
"title": ""
},
{
"docid": "b8b3c053c95fbc3cb211ff4c9a4ced03",
"text": "We propose Scheduled Auxiliary Control (SACX), a new learning paradigm in the context of Reinforcement Learning (RL). SAC-X enables learning of complex behaviors – from scratch – in the presence of multiple sparse reward signals. To this end, the agent is equipped with a set of general auxiliary tasks, that it attempts to learn simultaneously via off-policy RL. The key idea behind our method is that active (learned) scheduling and execution of auxiliary policies allows the agent to efficiently explore its environment – enabling it to excel at sparse reward RL. Our experiments in several challenging robotic manipulation settings demonstrate the power of our approach. A video of the rich set of learned behaviours can be found at https://youtu.be/mPKyvocNe M.",
"title": ""
},
{
"docid": "c4d610eb523833a2ded2b0090d6c0337",
"text": "In this paper, I argue that animal domestication, speciesism, and other modern human-animal interactions in North America are possible because of and through the erasure of Indigenous bodies and the emptying of Indigenous lands for settler-colonial expansion. That is, we cannot address animal oppression or talk about animal liberation without naming and subsequently dismantling settler colonialism and white supremacy as political machinations that require the simultaneous exploitation and/or erasure of animal and Indigenous bodies. I begin by re-framing animality as a politics of space to suggest that animal bodies are made intelligible in the settler imagination on stolen, colonized, and re-settled Indigenous lands. Thinking through Andrea Smith’s logics of white supremacy, I then re-center anthropocentrism as a racialized and speciesist site of settler coloniality to re-orient decolonial thought toward animality. To critique the ways in which Indigenous bodies and epistemologies are at stake in neoliberal re-figurings of animals as settler citizens, I reject the colonial politics of recognition developed in Sue Donaldson and Will Kymlicka’s recent monograph, Zoopolis: A Political Theory of Animal Rights (Oxford University Press 2011) because it militarizes settler-colonial infrastructures of subjecthood and governmentality. I then propose a decolonized animal ethic that finds legitimacy in Indigenous cosmologies to argue that decolonization can only be reified through a totalizing disruption of those power apparatuses (i.e., settler colonialism, anthropocentrism, white supremacy, and neoliberal pluralism) that lend the settler state sovereignty, normalcy, and futurity insofar as animality is a settler-colonial particularity.",
"title": ""
},
{
"docid": "f9ee82dcf1cce6d41a7f106436ee3a7d",
"text": "The Automatic Identification System (AIS) is based on VHF radio transmissions of ships' identity, position, speed and heading, in addition to other key parameters. In 2004, the Norwegian Defence Research Establishment (FFI) undertook studies to evaluate if the AIS signals could be detected in low Earth orbit. Since then, the interest in Space-Based AIS reception has grown significantly, and both public and private sector organizations have established programs to study the issue, and demonstrate such a capability in orbit. FFI is conducting two such programs. The objective of the first program was to launch a nano-satellite equipped with an AIS receiver into a near polar orbit, to demonstrate Space-Based AIS reception at high latitudes. The satellite was launched from India 12th July 2010. Even though the satellite has not finished commissioning, the receiver is operated with real-time transmission of received AIS data to the Norwegian Coastal Administration. The second program is an ESA-funded project to operate an AIS receiver on the European Columbus module of the International Space Station. Mounting of the equipment, the NORAIS receiver, was completed in April 2010. Currently, the AIS receiver has operated for more than three months, picking up several million AIS messages from more than 60 000 ship identities. In this paper, we will present experience gained with the space-based AIS systems, highlight aspects of tracking ships throughout their voyage, and comment on possible contributions to port security.",
"title": ""
},
{
"docid": "e170be2a81d853ee3d81a9dd45528a20",
"text": "Hardware improvement of cybernetic human HRP-4C for entertainment is presented in this paper. We coined the word “Cybernetic Human” to explain a humanoid robot with a realistic head and a realistic figure of a human being. HRP-4C stands for Humanoid Robotics Platform-4 (Cybernetic human). Its joints and dimensions conform to average values of young Japanese females and HRP-4C looks very human-like. We have made HRP-4C present in several events to search for a possibility of use in the entertainment industry. Based on feedback from our experience, we improved its hardware. The new hand, the new foot with active toe joint, and the new eye with camera are introduced.",
"title": ""
},
{
"docid": "43628e18a38d6cc9134fcf598eae6700",
"text": "Purchase of dietary supplement products is increasing despite the lack of clinical evidence to support health needs for consumption. The purpose of this crosssectional study is to examine the factors influencing consumer purchase intention of dietary supplement products in Penang based on Theory of Planned Behaviour (TPB). 367 consumers were recruited from chain pharmacies and hypermarkets in Penang. From statistical analysis, the role of attitude differs from the original TPB model; attitude played a new role as the mediator in this dietary supplement products context. Findings concluded that subjective norms, importance of price and health consciousness affected dietary supplement products purchase intention indirectly through attitude formation, with 71.5% of the variance explained. Besides, significant differences were observed between dietary supplement products users and non-users in all variables. Dietary supplement product users have stronger intention to purchase dietary supplement products, more positive attitude, with stronger perceived social pressures to purchase, perceived more availability, place more importance of price and have higher level of health consciousness compared to nonusers. Therefore, in order to promote healthy living through natural ways, consumers’ attitude formation towards dietary supplement products should be the main focus. Policy maker, healthcare providers, educators, researchers and dietary supplement industry must be responsible and continue to work diligently to provide consumers with accurate dietary supplement products and healthy living information.",
"title": ""
},
{
"docid": "efab39b060adcabbe2bacd8df255b0fa",
"text": "Neural stem cells reside in the subventricular zone (SVZ) of the adult mammalian brain. This germinal region, which continually generates new neurons destined for the olfactory bulb, is composed of four cell types: migrating neuroblasts, immature precursors, astrocytes, and ependymal cells. Here we show that SVZ astrocytes, and not ependymal cells, remain labeled with proliferation markers after long survivals in adult mice. After elimination of immature precursors and neuroblasts by an antimitotic treatment, SVZ astrocytes divide to generate immature precursors and neuroblasts. Furthermore, in untreated mice, SVZ astrocytes specifically infected with a retrovirus give rise to new neurons in the olfactory bulb. Finally, we show that SVZ astrocytes give rise to cells that grow into multipotent neurospheres in vitro. We conclude that SVZ astrocytes act as neural stem cells in both the normal and regenerating brain.",
"title": ""
},
{
"docid": "dc93126fadf8801687573cbef29cdef1",
"text": "Many graph-based semi-supervised learning methods for large datasets have been proposed to cope with the rapidly increasing size of data, such as Anchor Graph Regularization (AGR). This model builds a regularization framework by exploring the underlying structure of the whole dataset with both datapoints and anchors. Nevertheless, AGR still has limitations in its two components: (1) in anchor graph construction, the estimation of the local weights between each datapoint and its neighboring anchors could be biased and relatively slow; and (2) in anchor graph regularization, the adjacency matrix that estimates the relationship between datapoints, is not sufficiently effective. In this paper, we develop an Efficient Anchor Graph Regularization (EAGR) by tackling these issues. First, we propose a fast local anchor embedding method, which reformulates the optimization of local weights and obtains an analytical solution. We show that this method better reconstructs datapoints with anchors and speeds up the optimizing process. Second, we propose a new adjacency matrix among anchors by considering the commonly linked datapoints, which leads to a more effective normalized graph Laplacian over anchors. We show that, with the novel local weight estimation and normalized graph Laplacian, EAGR is able to achieve better classification accuracy with much less computational costs. Experimental results on several publicly available datasets demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "ece408df916581aa838f7991945d3586",
"text": "It is well-documented that most students do not have adequate proficiencies in inquiry and metacognition, particularly at deeper levels of comprehension that require explanatory reasoning. The proficiencies are not routinely provided by teachers and normal tutors so it is worthwhile to turn to computer-based learning environments. This article describes some of our recent computer systems that were designed to facilitate explanation-centered learning through strategies of inquiry and metacognition while students learn science and technology content. Point&Query augments hypertext, hypermedia, and other learning environments with question–answer facilities that are under the learner control. AutoTutor and iSTART use animated conversational agents to scaffold strategies of inquiry, metacognition, and explanation construction. AutoTutor coaches students in generating answers to questions that require explanations (e.g., why, what-if, how) by holding a mixed-initiative dialogue in natural language. iSTART models and coaches students in constructing self-explanations and in applying other metacomprehension strategies while reading text. These systems have shown promising results in tests of learning gains and learning strategies.",
"title": ""
},
{
"docid": "48a4d6b30131097d721905ae148a03dd",
"text": "68 AI MAGAZINE ■ I claim that achieving real human-level artificial intelligence would necessarily imply that most of the tasks that humans perform for pay could be automated. Rather than work toward this goal of automation by building special-purpose systems, I argue for the development of general-purpose, educable systems that can learn and be taught to perform any of the thousands of jobs that humans can perform. Joining others who have made similar proposals, I advocate beginning with a system that has minimal, although extensive, built-in capabilities. These would have to include the ability to improve through learning along with many other abilities.",
"title": ""
},
{
"docid": "2d7554e232542231039480173053ff8e",
"text": "Miniature roses growing in an ebb-and-flow watering system developed dieback during the summer growing season of 1996 in Gifu Prefecture. The main diagnostic symptoms were chlorosis of leaf followed by blight, and a brown, water-soaked root rot followed by dieback. Pythium isolates were recovered from the rotted root. The isolates form proliferous ellipsoidal papillate sporangia, spherical smooth oogonia, elongate antheridia, and aplerotic oospores. The optimum temperature for hyphal growth was 35°C with a growth rate of 34 mm/24 hr. Optimum temperature of zoospore formation (25-30°C) was lower than that of mycelial growth, and zoospores were produced even at 10°C. The isolates were identified as P. helicoides on the basis of these characteristics. In pathogenicity tests disease severity was highest at the highest tested temperature (35°C) at which the disease naturally occurred in summer. Four days after inoculation, the leaves turned yellow and the roots had a water-soaked rot, followed by leaf blight and root dieback after 7 days. The disease transmission test showed that diseased plants were found throughout the bench after 10 days.",
"title": ""
},
{
"docid": "14d901c9d33e1567643366d291c1b4ab",
"text": "In this paper, we propose and demonstrate the first fully integrated surface acoustic wave (SAW)-less superheterodyne receiver (RX) for 4G cellular applications. The RX operates in discrete-time domain and introduces various innovations to simultaneously improve noise and linearity performance while reducing power consumption: a highly linear wideband noise-canceling low-noise transconductance amplifier (LNTA), a blocker-resilient octal charge-sharing bandpass filter, and a cascaded harmonic rejection circuitry. The RX is implemented in 28-nm CMOS and it does not require any calibration. It features NF of 2.1-2.6 dB, an immeasurably high input second intercept point for closely-spaced or modulated interferers, and input third intercept point of 8-14 dBm, while drawing only 22-40 mW in various operating modes.",
"title": ""
}
] |
scidocsrr
|
b8f67c80c03b7ede172c500c02e47c68
|
DNN-based enhancement of noisy and reverberant speech
|
[
{
"docid": "97075bfa0524ad6251cefb2337814f32",
"text": "Reverberation distorts human speech and usually has negative effects on speech intelligibility, especially for hearing-impaired listeners. It also causes performance degradation in automatic speech recognition and speaker identification systems. Therefore, the dereverberation problem must be dealt with in daily listening environments. We propose to use deep neural networks (DNNs) to learn a spectral mapping from the reverberant speech to the anechoic speech. The trained DNN produces the estimated spectral representation of the corresponding anechoic speech. We demonstrate that distortion caused by reverberation is substantially attenuated by the DNN whose outputs can be resynthesized to the dereverebrated speech signal. The proposed approach is simple, and our systematic evaluation shows promising dereverberation results, which are significantly better than those of related systems.",
"title": ""
}
] |
[
{
"docid": "d805537f8414273cb2211306a8b81935",
"text": "Optical Character Recognition which could be defined as the process of isolating textual scripts from a scanned document, is not in its 100% efficiency when it comes to a complex Dravidian language, Malayalam. Here, we present a different approach of combining n-gram segmentation along with geometric feature extraction methodology to train a Support Vector Machine in order to obtain a recognizing accuracy better than the existing methods. N-gram isolation has not been implemented so far for the curvy language Malayalam and thus such an approach gives a competence of 98% which uses Otsu Algorithm as its base. Highly efficient segmentation process gives better accuracy in feature extraction which is being fed as the input of SVM. The proposed tactic gives an adept output of 95.6% efficacy in recognizing Malayalam printed scripts and word snippets.",
"title": ""
},
{
"docid": "a068988ab0492dd617321c01a07b38ad",
"text": "Human activity recognition is a key task of many Internet of Things (IoT) applications to understand underlying contexts and react with the environments. Machine learning is widely exploited to identify the activities from sensor measurements, however, they are often overcomplex to run on less-powerful IoT devices. In this paper, we present an alternative approach to efficiently support the activity recognition tasks using brain-inspired hyperdimensional (HD) computing. We show how the HD computing method can be applied to the recognition problem in IoT systems while improving the accuracy and efficiency. In our evaluation conducted for three practical datasets, the proposed design achieves the speedup of the model training by up to 486x as compared to the state-of-the-art neural network training. In addition, our design improves the performance of the HD-based inference procedure by 7x on a low-power ARM processor.",
"title": ""
},
{
"docid": "6ffbb212bec4c90c6b37a9fde3fd0b4c",
"text": "In this paper, we address a new research problem on active learning from data streams where data volumes grow continuously and labeling all data is considered expensive and impractical. The objective is to label a small portion of stream data from which a model is derived to predict newly arrived instances as accurate as possible. In order to tackle the challenges raised by data streams' dynamic nature, we propose a classifier ensembling based active learning framework which selectively labels instances from data streams to build an accurate classifier. A minimal variance principle is introduced to guide instance labeling from data streams. In addition, a weight updating rule is derived to ensure that our instance labeling process can adaptively adjust to dynamic drifting concepts in the data. Experimental results on synthetic and real-world data demonstrate the performances of the proposed efforts in comparison with other simple approaches.",
"title": ""
},
{
"docid": "8159d3dea8c1a33c3a2c0500e4e00e88",
"text": "Sclera blood veins have been investigated recently as a biometric trait which can be used in a recognition system. The sclera is the white and opaque outer protective part of the eye. This part of the eye has visible blood veins which are randomly distributed. This feature makes these blood veins a promising factor for eye recognition. The sclera has an advantage in that it can be captured using a visible-wavelength camera. Therefore, applications which may involve the sclera are wide ranging. The contribution of this paper is the design of a robust sclera recognition system with high accuracy. The system comprises of new sclera segmentation and occluded eye detection methods. We also propose an efficient method for vessel enhancement, extraction, and binarization. In the feature extraction and matching process stages, we additionally develop an efficient method, that is, orientation, scale, illumination, and deformation invariant. The obtained results using UBIRIS.v1 and UTIRIS databases show an advantage in terms of segmentation accuracy and computational complexity compared with state-of-the-art methods due to Thomas, Oh, Zhou, and Das.",
"title": ""
},
{
"docid": "f8c6906f4d0deb812e42aaaff457a6d9",
"text": "By the early 1900s, Euro-Americans had extirpated gray wolves (Canis lupus) from most of the contiguous United States. Yellowstone National Park was not immune to wolf persecution and by the mid-1920s they were gone. After seven decades of absence in the park, gray wolves were reintroduced in 1995–1996, again completing the large predator guild (Smith et al. 2003). Yellowstone’s ‘‘experiment in time’’ thus provides a rare opportunity for studying potential cascading effects associated with the extirpation and subsequent reintroduction of an apex predator. Wolves represent a particularly important predator of large mammalian prey in northern hemisphere ecosystems by virtue of their group hunting and year-round activity (Peterson et al. 2003) and can have broad top-down effects on the structure and functioning of these systems (Miller et al. 2001, Soulé et al. 2003, Ray et al. 2005). If a tri-trophic cascade involving wolves–elk (Cervus elaphus)–plants is again underway in northern Yellowstone, theory would suggest two primary mechanisms: (1) density mediation through prey mortality and (2) trait mediation involving changes in prey vigilance, habitat use, and other behaviors (Brown et al. 1999, Berger 2010). Both predator-caused reductions in prey numbers and fear responses they elicit in prey can lead to cascading trophic-level effects across a wide range of biomes (Beschta and Ripple 2009, Laundré et al. 2010, Terborgh and Estes 2010). Thus, the occurrence of a trophic cascade could have important implications not only to the future structure and functioning of northern Yellowstone’s ecosystems but also for other portions of the western United States where wolves have been reintroduced, are expanding their range, or remain absent. However, attempting to identify the occurrence of a trophic cascade in systems with large mammalian predators, as well as the relative importance of density and behavioral mediation, represents a continuing scientific challenge. In Yellowstone today, there is an ongoing effort by various researchers to evaluate ecosystem processes in the park’s two northern ungulate winter ranges: (1) the ‘‘Northern Range’’ along the northern edge of the park (NRC 2002, Barmore 2003) and (2) the ‘‘Upper Gallatin Winter Range’’ along the northwestern corner of the park (Ripple and Beschta 2004b). Previous studies in northern Yellowstone have generally found that elk, in the absence of wolves, caused a decrease in aspen (Populus tremuloides) recruitment (i.e., the growth of seedlings or root sprouts above the browse level of elk). Within this context, Kauffman et al. (2010) initiated a study to provide additional understanding of factors such as elk density, elk behavior, and climate upon historical and contemporary patterns of aspen recruitment in the park’s Northern Range. Like previous studies, Kauffman et al. (2010) concluded that, irrespective of historical climatic conditions, elk have had a major impact on long-term aspen communities after the extirpation of wolves. But, unlike other studies that have seen improvement in the growth or recruitment of young aspen and other browse species in recent years, Kauffman et al. (2010) concluded in their Abstract: ‘‘. . . 
our estimates of relative survivorship of young browsable aspen indicate that aspen are not currently recovering in Yellowstone, even in the presence of a large wolf population.'' In the interest of clarifying the potential role of wolves on woody plant community dynamics in Yellowstone's northern winter ranges, we offer several counterpoints to the conclusions of Kauffman et al. (2010). We do so by readdressing several tasks identified in their Introduction (p. 2744): (1) the history of aspen recruitment failure, (2) contemporary aspen recruitment, and (3) aspen recruitment and predation risk. Task 1 covers the period when wolves were absent from Yellowstone and tasks 2 and 3 focus on the period when wolves were again present. We also include some closing comments regarding trophic cascades and ecosystem recovery. 1. History of aspen recruitment failure.—Although records of wolf and elk populations in northern Yellowstone are fragmentary for the early 1900s, the Northern Range elk population averaged ~10,900 animals (7.3 elk/km²; Fig. 1A) as the last wolves were being removed in the mid 1920s. Soon thereafter increased browsing by elk of aspen and other woody species was noted in northern Yellowstone's winter ranges (e.g., Rush 1932, Lovaas 1970). In an attempt to reduce the effects this large herbivore was having on vegetation, soils, and wildlife habitat in the Northern Range ...",
"title": ""
},
{
"docid": "c94773bbf38d73f0fefc4dc621d42c66",
"text": "A recommender system is useful for a digital library to suggest the books that are likely preferred by a user. Most recommender systems using collaborative filtering approaches leverage the explicit user ratings to make personalized recommendations. However, many users are reluctant to provide explicit ratings, so ratings-oriented recommender systems do not work well. In this paper, we present a recommender system for CADAL digital library, namely CARES, which makes recommendations using a ranking-oriented collaborative filtering approach based on users' access logs, avoiding the problem of the lack of user ratings. Our approach employs mean AP correlation coefficients for computing similarities among users' implicit preference models and a random walk based algorithm for generating a book ranking personalized for the individual. Experimental results on real access logs from the CADAL web site show the effectiveness of our system and the impact of different values of parameters on the recommendation performance.",
"title": ""
},
{
"docid": "a75b221dc4d95fb7dfe7e581b254ae4d",
"text": "To address the pressing need to provide transparency into the online targeted advertising ecosystem, we present AdReveal, a practical measurement and analysis framework, that provides a first look at the prevalence of different ad targeting mechanisms. We design and implement a browser based tool that provides detailed measurements of online display ads, and develop analysis techniques to characterize the contextual, behavioral and re-marketing based targeting mechanisms used by advertisers. Our analysis is based on a large dataset consisting of measurements from 103K webpages and 139K display ads. Our results show that advertisers frequently target users based on their online interests; almost half of the ad categories employ behavioral targeting. Ads related to Insurance, Real Estate and Travel and Tourism make extensive use of behavioral targeting. Furthermore, up to 65% of ad categories received by users are behaviorally targeted. Finally, our analysis of re-marketing shows that it is adopted by a wide range of websites and the most commonly targeted re-marketing based ads are from the Travel and Tourism and Shopping categories.",
"title": ""
},
{
"docid": "313ded9d63967fd0c8bc6ca164ce064a",
"text": "This paper presents a 0.35-mum SiGe BiCMOS VCO IC exhibiting a linear VCO gain (Kvco) for 5-GHz band application. To realize a linear Kvco, a novel resonant circuit is proposed. The measured Kvco changes from 224 MHz/V to 341 MHz/V. The ratio of the maximum Kvco to the minimum one is 1.5 which is less than one-half of that of a conventional VCO. The VCO oscillation frequency range is from 5.45 GHz to 5.95 GHz, the tuning range is 8.8 %, and the dc current consumption is 3.4 mA at a supply voltage of 3.0 V. The measured phase noise is -116 dBc/Hz at 1MHz offset, which is similar to the conventional VCO",
"title": ""
},
{
"docid": "4c19656f8809c67dd4d8f62b300dcbf0",
"text": "This paper explores the moderating role of perceived compatibility on the relationship between e-learning system use and its outcomes. We conceptualize e-learning outcomes using academic performance, perceived learning assistance, and perceived community building assistance. We further hypothesize that perceived compatibility moderates the relationships between e-learning system use and these outcome variables. The model was tested by collecting data from university students (n = 179) participating in hybrid courses using a popular learning management system, Moodle. The findings suggest that perceived compatibility moderates the relationship between e-learning system use and academic performance. However, it did not moderate the other two relationships, i.e. (1) the relationship between e-learning system use and perceived learning assistance, and (2) the relationship between e-learning system use and perceived community building assistance. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e7c455d50a53b405905ff22ce9d05bb9",
"text": "Assessing multi-hop interpersonal trust in online social networks (OSNs) is critical for many social network applications such as online marketing but challenging due to the difficulties of handling complex OSN topology, in existing models such as subjective logic, and the lack of effective validation methods. To address these challenges, we for the first time properly define trust propagation and combination in arbitrary OSN topologies by proposing 3VSL (Three-Valued Subjective Logic). The 3VSL distinguishes the posteriori and priori uncertainties existing in trust, and the difference between distorting and original opinions, thus be able to compute multi-hop trusts in arbitrary graphs. We theoretically proved the capability based on the Dirichlet distribution. Furthermore, an online survey system is implemented to collect interpersonal trust data and validate the correctness and accuracy of 3VSL in real world. Both experimental and numerical results show that 3VSL is accurate in computing interpersonal trust in OSNs.",
"title": ""
},
{
"docid": "8d25142e22d00415b66689607f986c3c",
"text": "This paper addresses the assist-as-needed (AAN) control problem for robotic orthoses. The objective is to design a stable AAN controller with an adjustable assistance level. The controller aims to follow a desired trajectory while allowing an adjustable tracking error with low control effort to provide a freedom zone for the user. By ensuring the stability of the system and providing the freedom zone, the controller combines the advantages of both model-based and non-model-based AAN controllers existing in the literature. Furthermore, the controller provides a priori bounded control command, and includes an adaptive neural network term to compensate for the uncertainties of dynamic model of the system, mainly when a precise tracking is of interest. The stability of the closed-loop system is well analysed based on the Lyapunov method. The effectiveness of the proposed control scheme is validated through experiments using a lower extremity robotic exoskeleton.",
"title": ""
},
{
"docid": "8f5af0964740d734cc03a4bfa030ee48",
"text": "In present scenario, the security concerns have grown tremendously. The security of restricted areas such as borders or buffer zones is of utmost importance; in particular with the worldwide increase of military conflicts, illegal immigrants, and terrorism over the past decade. Monitoring such areas rely currently on technology and man power, however automatic monitoring has been advancing in order to avoid potential human errors that can be caused by different reasons. The purpose of this project is to design a surveillance system which would detect motion in a live video feed and record the video feed only at the moment where the motion was detected also to track moving object based on background subtraction using video surveillance. The moving object is identified using the image subtraction method.",
"title": ""
},
{
"docid": "82bdaf46188ffa0e2bd555aadaa0957c",
"text": "Smart pills were originally developed for diagnosis; however, they are increasingly being applied to therapy - more specifically drug delivery. In addition to smart drug delivery systems, current research is also looking into localization systems for reaching the target areas, novel locomotion mechanisms and positioning systems. Focusing on the major application fields of such devices, this article reviews smart pills developed for local drug delivery. The review begins with the analysis of the medical needs and socio-economic benefits associated with the use of such devices and moves onto the discussion of the main implemented technological solutions with special attention given to locomotion systems, drug delivery systems and power supply. Finally, desired technical features of a fully autonomous robotic capsule for local drug delivery are defined and future research trends are highlighted.",
"title": ""
},
{
"docid": "8d197bf27af825b9972a490d3cc9934c",
"text": "The past decade has witnessed an increasing adoption of cloud database technology, which provides better scalability, availability, and fault-tolerance via transparent partitioning and replication, and automatic load balancing and fail-over. However, only a small number of cloud databases provide strong consistency guarantees for distributed transactions, despite decades of research on distributed transaction processing, due to practical challenges that arise in the cloud setting, where failures are the norm, and human administration is minimal. For example, dealing with locks left by transactions initiated by failed machines, and determining a multi-programming level that avoids thrashing without under-utilizing available resources, are some of the challenges that arise when using lock-based transaction processing mechanisms in the cloud context. Even in the case of optimistic concurrency control, most proposals in the literature deal with distributed validation but still require the database to acquire locks during two-phase commit when installing updates of a single transaction on multiple machines. Very little theoretical work has been done to entirely eliminate the need for locking in distributed transactions, including locks acquired during two-phase commit. In this paper, we re-design optimistic concurrency control to eliminate any need for locking even for atomic commitment, while handling the practical issues in earlier theoretical work related to this problem. We conduct an extensive experimental study to evaluate our approach against lock-based methods under various setups and workloads, and demonstrate that our approach provides many practical advantages in the cloud context.",
"title": ""
},
{
"docid": "7f02090e896afacd6b70537c03956078",
"text": "Although the literature on Asian Americans and racism has been emerging, few studies have examined how coping influences one's encounters with racism. To advance the literature, the present study focused on the psychological impact of Filipino Americans' experiences with racism and the role of coping as a mediator using a community-based sample of adults (N = 199). Two multiple mediation models were used to examine the mediating effects of active, avoidance, support-seeking, and forbearance coping on the relationship between perceived racism and psychological distress and self-esteem, respectively. Separate analyses were also conducted for men and women given differences in coping utilization. For men, a bootstrap procedure indicated that active, support-seeking, and avoidance coping were mediators of the relationship between perceived racism and psychological distress. Active coping was negatively associated with psychological distress, whereas both support seeking and avoidance were positively associated with psychological distress. A second bootstrap procedure for men indicated that active and avoidance coping mediated the relationship between perceived racism and self-esteem such that active coping was positively associated with self-esteem, and avoidance was negatively associated with self-esteem. For women, only avoidance coping had a significant mediating effect that was associated with elevations in psychological distress and decreases in self-esteem. The results highlight the importance of examining the efficacy of specific coping responses to racism and the need to differentiate between the experiences of men and women.",
"title": ""
},
{
"docid": "ee732b213767471c29f12e7d00f4ded3",
"text": "The increasing interest in scene text reading in multilingual environments raises the need to recognize and distinguish between different writing systems. In this paper, we propose a novel method for script identification in scene text using triplets of local convolutional features in combination with the traditional bag-of-visual-words model. Feature triplets are created by making combinations of descriptors extracted from local patches of the input images using a convolutional neural network. This approach allows us to generate a more descriptive codeword dictionary for the bag-of-visual-words model, as the low discriminative power of weak descriptors is enhanced by other descriptors in a triplet. The proposed method is evaluated on two public benchmark datasets for scene text script identification and a public dataset for script identification in video captions. The experiments demonstrate that our method outperforms the baseline and yields competitive results on all three datasets.",
"title": ""
},
{
"docid": "3d8daed65bfd41a3610627e896837a4a",
"text": "BACKGROUND\nDrug-resistant tuberculosis threatens recent gains in the treatment of tuberculosis and human immunodeficiency virus (HIV) infection worldwide. A widespread epidemic of extensively drug-resistant (XDR) tuberculosis is occurring in South Africa, where cases have increased substantially since 2002. The factors driving this rapid increase have not been fully elucidated, but such knowledge is needed to guide public health interventions.\n\n\nMETHODS\nWe conducted a prospective study involving 404 participants in KwaZulu-Natal Province, South Africa, with a diagnosis of XDR tuberculosis between 2011 and 2014. Interviews and medical-record reviews were used to elicit information on the participants' history of tuberculosis and HIV infection, hospitalizations, and social networks. Mycobacterium tuberculosis isolates underwent insertion sequence (IS)6110 restriction-fragment-length polymorphism analysis, targeted gene sequencing, and whole-genome sequencing. We used clinical and genotypic case definitions to calculate the proportion of cases of XDR tuberculosis that were due to inadequate treatment of multidrug-resistant (MDR) tuberculosis (i.e., acquired resistance) versus those that were due to transmission (i.e., transmitted resistance). We used social-network analysis to identify community and hospital locations of transmission.\n\n\nRESULTS\nOf the 404 participants, 311 (77%) had HIV infection; the median CD4+ count was 340 cells per cubic millimeter (interquartile range, 117 to 431). A total of 280 participants (69%) had never received treatment for MDR tuberculosis. Genotypic analysis in 386 participants revealed that 323 (84%) belonged to 1 of 31 clusters. Clusters ranged from 2 to 14 participants, except for 1 large cluster of 212 participants (55%) with a LAM4/KZN strain. Person-to-person or hospital-based epidemiologic links were identified in 123 of 404 participants (30%).\n\n\nCONCLUSIONS\nThe majority of cases of XDR tuberculosis in KwaZulu-Natal, South Africa, an area with a high tuberculosis burden, were probably due to transmission rather than to inadequate treatment of MDR tuberculosis. These data suggest that control of the epidemic of drug-resistant tuberculosis requires an increased focus on interrupting transmission. (Funded by the National Institute of Allergy and Infectious Diseases and others.).",
"title": ""
},
{
"docid": "5fe4a9e1ef0ba8b98d410e48764acfc3",
"text": "We report an ethnographic study of prosocial behavior inconnection to League of Legends, one of the most popular games in the world. In this game community, the game developer, Riot Games, implemented a system that allowed players to volunteer their time to identify unacceptable player behaviors and punish players associated with these behaviors. With the prosocial goal of improving the community and promoting sportsmanship with in the competitive culture, a small portion of players worked diligently in the system with little reward. In this paper, we use interviews and analysis of forum discussions to examine how players themselves explain their participation in the system situated in the game culture of League of Legends. We show a myriad of social and technical factors that facilitated or hindered players' prosocial behavior. We discuss how our findings might provide generalizable insights for player engagement and community-building in online games.",
"title": ""
},
{
"docid": "9c0b58f0a2a71052fc4349ba750d6ce4",
"text": "The ability to comprehend wishes or desires and their fulfillment is important to Natural Language Understanding. This paper introduces the task of identifying if a desire expressed by a subject in a given short piece of text was fulfilled. We propose various unstructured and structured models that capture fulfillment cues such as the subject’s emotional state and actions. Our experiments with two different datasets demonstrate the importance of understanding the narrative and discourse structure to address this task.",
"title": ""
},
{
"docid": "93e43e11c10e39880c68d2fb0fccd634",
"text": "In this paper, we present a novel probabilistic generative model for multi-object traffic scene understanding from movable platforms which reasons jointly about the 3D scene layout as well as the location and orientation of objects in the scene. In particular, the scene topology, geometry, and traffic activities are inferred from short video sequences. Inspired by the impressive driving capabilities of humans, our model does not rely on GPS, lidar, or map knowledge. Instead, it takes advantage of a diverse set of visual cues in the form of vehicle tracklets, vanishing points, semantic scene labels, scene flow, and occupancy grids. For each of these cues, we propose likelihood functions that are integrated into a probabilistic generative model. We learn all model parameters from training data using contrastive divergence. Experiments conducted on videos of 113 representative intersections show that our approach successfully infers the correct layout in a variety of very challenging scenarios. To evaluate the importance of each feature cue, experiments using different feature combinations are conducted. Furthermore, we show how by employing context derived from the proposed method we are able to improve over the state-of-the-art in terms of object detection and object orientation estimation in challenging and cluttered urban environments.",
"title": ""
}
] |
scidocsrr
|
0d72fe1d09e5976d62695919296709ea
|
Modeling bug report quality
|
[
{
"docid": "3442445fac9efd3acdd9931739aca189",
"text": "“Avoidable rework” is effort spent fixing difficulties with the software that could have been avoided or discovered earlier and less expensively. This definition implies that there is such thing as “unavoidable rework”. Reducing “avoidable rework” is a major source of software productivity improvement and most effort savings from improving software processes, architectures and risk management are results of reductions in “avoidable rework”.",
"title": ""
},
{
"docid": "222f28aa8b4cc4eaddb21e21c9020593",
"text": "We study an approach to text categorization that combines di stributional clustering of words and a Support Vector Machine (SVM) classifier. This word-cluster r presentation is computed using the recently introducedInformation Bottleneckmethod, which generates a compact and efficient representation of documents. When combined with the classifica tion power of the SVM, this method yields high performance in text categorization. This novel combination of SVM with word-cluster representation is compared with SVM-based categorization using the simpler bag-of-words (BOW) representation. The comparison is performed over three kno wn datasets. On one of these datasets (the 20 Newsgroups) the method based on word clusters signifi ca tly outperforms the word-based representation in terms of categorization accuracy or repr esentation efficiency. On the two other sets (Reuters-21578 and WebKB) the word-based representation s lightly outperforms the word-cluster representation. We investigate the potential reasons for t his behavior and relate it to structural differences between the datasets.",
"title": ""
},
{
"docid": "a2dfa8007b3a13da31a768fe07393d15",
"text": "Predicting the time and effort for a software problem has long been a difficult task. We present an approach that automatically predicts the fixing effort, i.e., the person-hours spent on fixing an issue. Our technique leverages existing issue tracking systems: given a new issue report, we use the Lucene framework to search for similar, earlier reports and use their average time as a prediction. Our approach thus allows for early effort estimation, helping in assigning issues and scheduling stable releases. We evaluated our approach using effort data from the JBoss project. Given a sufficient number of issues reports, our automatic predictions are close to the actual effort; for issues that are bugs, we are off by only one hour, beating na¨ýve predictions by a factor of four.",
"title": ""
},
{
"docid": "2b471e61a6b95221d9ca9c740660a726",
"text": "We propose a low-overhead sampling infrastructure for gathering information from the executions experienced by a program's user community. Several example applications illustrate ways to use sampled instrumentation to isolate bugs. Assertion-dense code can be transformed to share the cost of assertions among many users. Lacking assertions, broad guesses can be made about predicates that predict program errors and a process of elimination used to whittle these down to the true bug. Finally, even for non-deterministic bugs such as memory corruption, statistical modeling based on logistic regression allows us to identify program behaviors that are strongly correlated with failure and are therefore likely places to look for the error.",
"title": ""
}
] |
[
{
"docid": "a3becbdfd3c14eaa9d270ac6479e9d28",
"text": "To develop a robust classification algorithm in the adversarial setting, it is important to understand the adversary ’ strategy. We address the problem of label flips attack where an adversar y contaminates the training set through flipping labels. By analy zing the objective of the adversary, we formulate an optimization fr amework for finding the label flips that maximize the classification er ror. An algorithm for attacking support vector machines is derived . Experiments demonstrate that the accuracy of classifiers is signi fica tly degraded under the attack.",
"title": ""
},
{
"docid": "6a51e7a1b32a844160ba6a0e3b329b46",
"text": "We present an overview of the current pharmacological treatment of urinary incontinence (UI) in women, according to the latest evidence available. After a brief description of the lower urinary tract receptors and mediators (detrusor, bladder neck, and urethra), the potential sites of pharmacological manipulation in the treatment of UI are discussed. Each class of drug used to treat UI has been evaluated, taking into account published rate of effectiveness, different doses, and way of administration. The prevalence of the most common adverse effects and overall compliance had also been pointed out, with cost evaluation after 1 month of treatment for each class of drug. Moreover, we describe those newer agents whose efficacy and safety need to be further investigated. We stress the importance of a better understanding of the causes and pathophysiology of UI to ensure newer and safer treatments for such a debilitating condition.",
"title": ""
},
{
"docid": "d4a4c4a1d933488ab686097e18b4373a",
"text": "Psychological stress is an important factor for the development of irritable bowel syndrome (IBS). More and more clinical and experimental evidence showed that IBS is a combination of irritable bowel and irritable brain. In the present review we discuss the potential role of psychological stress in the pathogenesis of IBS and provide comprehensive approaches in clinical treatment. Evidence from clinical and experimental studies showed that psychological stresses have marked impact on intestinal sensitivity, motility, secretion and permeability, and the underlying mechanism has a close correlation with mucosal immune activation, alterations in central nervous system, peripheral neurons and gastrointestinal microbiota. Stress-induced alterations in neuro-endocrine-immune pathways acts on the gut-brain axis and microbiota-gut-brain axis, and cause symptom flare-ups or exaggeration in IBS. IBS is a stress-sensitive disorder, therefore, the treatment of IBS should focus on managing stress and stress-induced responses. Now, non-pharmacological approaches and pharmacological strategies that target on stress-related alterations, such as antidepressants, antipsychotics, miscellaneous agents, 5-HT synthesis inhibitors, selective 5-HT reuptake inhibitors, and specific 5-HT receptor antagonists or agonists have shown a critical role in IBS management. A integrative approach for IBS management is a necessary.",
"title": ""
},
{
"docid": "a53ab7039d47df6ee2f0de06ab069774",
"text": "Today's handheld mobile devices with advanced multimedia capabilities and wireless broadband connectivity have emerged as potential new tools for journalists to produce news articles. It is envisioned that they could enable faster, more authentic, and more efficient news production, and many large news producing organizations, including Reuters and BBC, have recently been experimenting with them. In this paper, we present a field study on using mobile devices to produce news articles. During the study, a group of 19 M.A.-level journalism students used the Mobile Journalist Toolkit, a lightweight set of tools for mobile journalist work built around the Nokia N82 camera phone, to produce an online news blog. Our results indicate that while the mobile device cannot completely replace the traditional tools, for some types of journalist tasks they provide major benefits over the traditional tools, and are thus a useful addition to the journalist's toolbox.",
"title": ""
},
{
"docid": "79811b3cfec543470941e9529dc0ab24",
"text": "We present a novel method for learning and predicting the affordances of an object based on its physical and visual attributes. Affordance prediction is a key task in autonomous robot learning, as it allows a robot to reason about the actions it can perform in order to accomplish its goals. Previous approaches to affordance prediction have either learned direct mappings from visual features to affordances, or have introduced object categories as an intermediate representation. In this paper, we argue that physical and visual attributes provide a more appropriate mid-level representation for affordance prediction, because they support informationsharing between affordances and objects, resulting in superior generalization performance. In particular, affordances are more likely to be correlated with the attributes of an object than they are with its visual appearance or a linguistically-derived object category. We provide preliminary validation of our method experimentally, and present empirical comparisons to both the direct and category-based approaches of affordance prediction. Our encouraging results suggest the promise of the attributebased approach to affordance prediction.",
"title": ""
},
{
"docid": "c4fcd7db5f5ba480d7b3ecc46bef29f6",
"text": "In this paper, we propose an indoor action detection system which can automatically keep the log of users' activities of daily life since each activity generally consists of a number of actions. The hardware setting here adopts top-view depth cameras which makes our system less privacy sensitive and less annoying to the users, too. We regard the series of images of an action as a set of key-poses in images of the interested user which are arranged in a certain temporal order and use the latent SVM framework to jointly learn the appearance of the key-poses and the temporal locations of the key-poses. In this work, two kinds of features are proposed. The first is the histogram of depth difference value which can encode the shape of the human poses. The second is the location-signified feature which can capture the spatial relations among the person, floor, and other static objects. Moreover, we find that some incorrect detection results of certain type of action are usually associated with another certain type of action. Therefore, we design an algorithm that tries to automatically discover the action pairs which are the most difficult to be differentiable, and suppress the incorrect detection outcomes. To validate our system, experiments have been conducted, and the experimental results have shown effectiveness and robustness of our proposed method.",
"title": ""
},
{
"docid": "19bef93ef428aac9fb25dac2889c4d6a",
"text": "This paper presents an efficient unipolar stochastic computing hardware for convolutional neural networks (CNNs). It includes stochastic ReLU and optimized max function, which are key components in a CNN. To avoid the range limitation problem of stochastic numbers and increase the signal-to-noise ratio, we perform weight normalization and upscaling. In addition, to reduce the overhead of binary-to-stochastic conversion, we propose a scheme for sharing stochastic number generators among the neurons in a CNN. Experimental results show that our approach outperforms the previous ones based on stochastic computing in terms of accuracy, area, and energy consumption.",
"title": ""
},
{
"docid": "12f8414a2cadd222c31805de8bb3ed87",
"text": "In this paper we explore functions of bounded variation. We discuss properties of functions of bounded variation and consider three related topics. The related topics are absolute continuity, arc length, and the Riemann-Stieltjes integral.",
"title": ""
},
{
"docid": "fe6e8b7e4a15e7f188b9f0b2bf4b765b",
"text": "Due to the availability of many sophisticated image processing tools, a digital image forgery is nowadays very often used. One of the common forgery method is a copy-move forgery, where part of an image is copied to another location in the same image with the aim of hiding or adding some image content. Numerous algorithms have been proposed for a copy-move forgery detection (CMFD), but there exist only few benchmarking databases for algorithms evaluation. We developed new database for a CMFD that consist of 260 forged image sets. Every image set includes forged image, two masks and original image. Images are grouped in 5 categories according to applied manipulation: translation, rotation, scaling, combination and distortion. Also, postprocessing methods, such as JPEG compression, blurring, noise adding, color reduction etc., are applied at all forged and original images. In this paper we present database organization and content, creation of forged images, postprocessing methods, and database testing. CoMoFoD database is available at http://www.vcl.fer.hr/comofod.",
"title": ""
},
{
"docid": "4cfd7fab35e081f2d6f81ec23c4d0d18",
"text": "In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.",
"title": ""
},
{
"docid": "7825ace1376c7f7ab3ed98ee5fda11d1",
"text": "In this paper, Arabic was investigated from the speech recognition problem point of view. We propose a novel approach to build an Arabic automated speech recognition system using Arabic environment. The system, based on the open source CMU Sphinx-4, was trained using Arabic characters.",
"title": ""
},
{
"docid": "136a2f401b3af00f0f79b991ab65658f",
"text": "Usage of online social business networks like LinkedIn and XING have become commonplace in today’s workplace. This research addresses the question of what factors drive the intention to use online social business networks. Theoretical frame of the study is the Technology Acceptance Model (TAM) and its extensions, most importantly the TAM2 model. Data has been collected via a Web Survey among users of LinkedIn and XING from January to April 2010. Of 541 initial responders 321 finished the questionnaire. Operationalization was tested using confirmatory factor analyses and causal hypotheses were evaluated by means of structural equation modeling. Core result is that the TAM2 model generally holds in the case of online social business network usage behavior, explaining 73% of the observed usage intention. This intention is most importantly driven by perceived usefulness, attitude towards usage and social norm, with the latter effecting both directly and indirectly over perceived usefulness. However, perceived ease of use has—contrary to hypothesis—no direct effect on the attitude towards usage of online social business networks. Social norm has a strong indirect influence via perceived usefulness on attitude and intention, creating a network effect for peer users. The results of this research provide implications for online social business network design and marketing. Customers seem to evaluate ease of use as an integral part of the usefulness of such a service which leads to a situation where it cannot be dealt with separately by a service provider. Furthermore, the strong direct impact of social norm implies application of viral and peerto-peer marketing techniques while it’s also strong indirect effect implies the presence of a network effect which stabilizes the ecosystem of online social business service vendors.",
"title": ""
},
{
"docid": "54f8df63208cf72cfda9a3a01f87d3dc",
"text": "7124 | P a g e C o u n c i l f o r I n n o v a t i v e R e s e a r c h J u l y , 2 0 1 6 w w w . c i r w o r l d . c o m AN IMPLEMENTATION OF LOAD BALANCING ALGORITHM IN CLOUD ENVIRONMENT Sheenam Kamboj , Mr. Navtej Singh Ghumman (2) (1) Research Scholar, Department of Computer Science & Engineering, SBSSTC, Ferozepur, Punjab. sheenam31.sk@gmail.com (2) Assistant Professor, Department of Computer Science & Engineering, SBSSTC, Ferozepur, Punjab. navtejghumman@yahoo.com ABSTRACT",
"title": ""
},
{
"docid": "8d985fefb4d8168b3f63a56f26211939",
"text": "Electronic word of mouth is available to customers in different types of online consumer reviews, which can be used to help them make e-commerce purchasing decisions. Customers acknowledge that online consumer reviews help them to determine eWOM credibility and to make purchasing decisions. This study uses surveys and multiple regression analysis to create an extended Elaboration Likelihood Model that describes the relationship between customer expertise, involvement, and rapport to acceptance and use of electronic word of mouth in making purchasing decisions. The study focuses on the cultural effects of gender on the extended Elaboration Likelihood Model and purchasing decisions in e-commerce virtual communities. Study results show that involvement has the most significant effect on perceived eWOM credibility. Study results show that perceived eWOM credibility has a significant effect on eWOM acceptance and intent to purchase. Study results also show the male customers have different e-commerce shopping behaviors than female customers.",
"title": ""
},
{
"docid": "c19f986d747f4d6a3448607f76d961ab",
"text": "We propose Stochastic Neural Architecture Search (SNAS), an economical endto-end solution to Neural Architecture Search (NAS) that trains neural operation parameters and architecture distribution parameters in same round of backpropagation, while maintaining the completeness and differentiability of the NAS pipeline. In this work, NAS is reformulated as an optimization problem on parameters of a joint distribution for the search space in a cell. To leverage the gradient information in generic differentiable loss for architecture search, a novel search gradient is proposed. We prove that this search gradient optimizes the same objective as reinforcement-learning-based NAS, but assigns credits to structural decisions more efficiently. This credit assignment is further augmented with locally decomposable reward to enforce a resource-efficient constraint. In experiments on CIFAR-10, SNAS takes fewer epochs to find a cell architecture with state-of-theart accuracy than non-differentiable evolution-based and reinforcement-learningbased NAS, which is also transferable to ImageNet. It is also shown that child networks of SNAS can maintain the validation accuracy in searching, with which attention-based NAS requires parameter retraining to compete, exhibiting potentials to stride towards efficient NAS on big datasets.",
"title": ""
},
{
"docid": "47c6de1c81b484204abfbd1f070ad03f",
"text": "Ti-based metal-organic frameworks (MOFs) are demonstrated as promising photosensitizers for photoelectrochemical (PEC) water splitting. Photocurrents of TiO2 nano wire photoelectrodes can be improved under visible light through sensitization with aminated Ti-based MOFs. As a host, other sensitizers or catalysts such as Au nanoparticles can be incorporated into the MOF layer thus further improving the PEC water splitting efficiency.",
"title": ""
},
{
"docid": "73a656b220c8f91ad1b2e2b4dbd691a9",
"text": "Music recommendation systems are well explored and commonly used but are normally based on manually tagged parameters and simple similarity calculation. Our project proposes a recommendation system based on emotional computing, automatic classification and feature extraction, which recommends music based on the emotion expressed by the song.\n To achieve this goal a set of features is extracted from the song, including the MFCC (mel-frequency cepstral coefficients) following the works of McKinney et al. [6] and a machine learning system is trained on a set of 424 songs, which are categorized by emotion. The categorization of the song is performed manually by multiple persons to avoid error. The emotional categorization is performed using a modified version of the Tellegen-Watson-Clark emotion model [7], as proposed by Trohidis et al. [8]. The System is intended as desktop application that can reliably determine similarities between the main emotion in multiple pieces of music, allowing the user to choose music by emotion. We report our findings below.",
"title": ""
},
{
"docid": "19a28d8bbb1f09c56f5c85be003a9586",
"text": "ABSTRACT: Five questionnaires for assessing the usability of a website were compared in a study with 123 participants. The questionnaires studied were SUS, QUIS, CSUQ, a variant of Microsoft’s Product Reaction Cards, and one that we have used in our Usability Lab for several years. Each participant performed two tasks on each of two websites: finance.yahoo.com and kiplinger.com. All five questionnaires revealed that one site was significantly preferred over the other. The data were analyzed to determine what the results would have been at different sample sizes from 6 to 14. At a sample size of 6, only 30-40% of the samples would have identified that one of the sites was significantly preferred. Most of the data reach an apparent asymptote at a sample size of 12, where two of the questionnaires (SUS and CSUQ) yielded the same conclusion as the full dataset at least 90% of the time.",
"title": ""
},
{
"docid": "51251e955a53d46c4609875e2224dd00",
"text": "The theoretical, practical and technical development of neural associative memories during the last 40 years is described. The importance of sparse coding of associative memory patterns is pointed out. The use of associative memory networks for large scale brain modeling is also mentioned.",
"title": ""
},
{
"docid": "3083f89003757dcaf70d6e013084e53a",
"text": "Hard Disk Drives (HDD s) sometimes fail with no apparent reason; some SMART (Self-Monitoring, Analysis and Reporting Technology) attributes present strong correlations with drive failures[3], yet a drive may also fail without (supposedly) any previous indication. Most of the host systems today utilize alert-methods which are reactive by nature-a drive is indicated to fail when some SMART attribute exceeds its vendor defined threshold for valid operation[4]. This approach does not take the cross correlation between different attributes into account and the fact that thresholds vary across different vendors.\n Unlike more conventional studies that focused on reliability statistics such as the annualized failure rate (AFR) at the population level[5], [6], we use machine learning (ML) algorithms that attempt to predict the failure of individual drives. Former ML approaches applied to the drive prediction failure domain include methods for dealing with sequential data, such as sliding windows and hidden Markov models [1], or anomaly detection algorithms, adhering to the often-low proportion of failed drives in the population [4] and [2].\n We present a mechanism that performs a sophisticated samples aggregation inside a distributed database, allowing for the efficient extraction of compound features representing the behavior of the drive during a continuous time window prior its failure.\n We use here an open source dataset from BACKBLAZE, comprising extensive SMART information collected daily from a large drive population at its data center Thus we can use not only the last cumulative SMART counts but also include new features extracted over different time windows of drives last operational days. These capture the dynamics of the drives attributes, for example their growth rate which is highly indicative of the drive s failure probability [7]. Our results suggest that the use of compound features reduce the amount of false positives, which is primary performance measure for algorithms in the drive reliability domain[3], by as much as 60%.\n In a consecutive evaluation scenario a final decision is made based on a collection of the most recent data samples. This way we are able to capture more soon-to-fail drives at low cost to the false-positive rate. On average, a drive is predicted to fail long enough in advance (30 days) to allow for the modification of business strategy in fields such as drive replacement logistics. This approach, when implemented in a production environment may have a direct effect on business savings related to drive logistics.",
"title": ""
}
] |
scidocsrr
|
7f848f1d6b9f1a632a6b2e2c414ce16b
|
The ecological impacts of marine debris: unraveling the demonstrated evidence from what is perceived.
|
[
{
"docid": "db974cd371b791682bf62c4c238ccee2",
"text": "We analyzed polybrominated diphenyl ethers (PBDEs) in abdominal adipose of oceanic seabirds (short-tailed shearwaters, Puffinus tenuirostris) collected in northern North Pacific Ocean. In 3 of 12 birds, we detected higher-brominated congeners (viz., BDE209 and BDE183), which are not present in the natural prey (pelagic fish) of the birds. The same compounds were present in plastic found in the stomachs of the 3 birds. These data suggested the transfer of plastic-derived chemicals from ingested plastics to the tissues of marine-based organisms.",
"title": ""
}
] |
[
{
"docid": "731df77ded13276e7bdb9f67474f3810",
"text": "Given a graph <i>G</i> = (<i>V,E</i>) and positive integral vertex weights <i>w</i> : <i>V</i> → N, the <i>max-coloring problem</i> seeks to find a proper vertex coloring of <i>G</i> whose color classes <i>C</i><inf>1,</inf> <i>C</i><inf>2,</inf>...,<i>C</i><inf><i>k</i></inf>, minimize Σ<sup><i>k</i></sup><inf><i>i</i> = 1</inf> <i>max</i><inf>ν∈<i>C</i><inf>i</inf></inf><i>w</i>(ν). This problem, restricted to interval graphs, arises whenever there is a need to design dedicated memory managers that provide better performance than the general purpose memory management of the operating system. Specifically, companies have tried to solve this problem in the design of memory managers for wireless protocol stacks such as GPRS or 3G.Though this problem seems similar to the wellknown dynamic storage allocation problem, we point out fundamental differences. We make a connection between max-coloring and on-line graph coloring and use this to devise a simple 2-approximation algorithm for max-coloring on interval graphs. We also show that a simple first-fit strategy, that is a natural choice for this problem, yields a 10-approximation algorithm. We show this result by proving that the first-fit algorithm for on-line coloring an interval graph <i>G</i> uses no more than 10.<i>x</i>(<i>G</i>) colors, significantly improving the bound of 26.<i>x</i>(<i>G</i>) by Kierstead and Qin (<i>Discrete Math.</i>, 144, 1995). We also show that the max-coloring problem is NP-hard.",
"title": ""
},
{
"docid": "c213dd0989659d413b39e6698eb097cc",
"text": "It's not surprisingly when entering this site to get the book. One of the popular books now is the the major transitions in evolution. You may be confused because you can't find the book in the book store around your city. Commonly, the popular book will be sold quickly. And when you have found the store to buy the book, it will be so hurt when you run out of it. This is why, searching for this popular book in this website will give you benefit. You will not run out of this book.",
"title": ""
},
{
"docid": "498c217fb910a5b4ca6bcdc83f98c11b",
"text": "Theodor Wilhelm Engelmann (1843–1909), who had a creative life in music, muscle physiology, and microbiology, developed a sensitive method for tracing the photosynthetic oxygen production of unicellular plants by means of bacterial aerotaxis (chemotaxis). He discovered the absorption spectrum of bacteriopurpurin (bacteriochlorophyll a) and the scotophobic response, photokinesis, and photosynthesis of purple bacteria.",
"title": ""
},
{
"docid": "ad1cf5892f7737944ba23cd2e44a7150",
"text": "The ‘blockchain’ is the core mechanism for the Bitcoin digital payment system. It embraces a set of inter-related technologies: the blockchain itself as a distributed record of digital events, the distributed consensus method to agree whether a new block is legitimate, automated smart contracts, and the data structure associated with each block. We propose a permanent distributed record of intellectual effort and associated reputational reward, based on the blockchain that instantiates and democratises educational reputation beyond the academic community. We are undertaking initial trials of a private blockchain or storing educational records, drawing also on our previous research into reputation management for educational systems.",
"title": ""
},
{
"docid": "d337cf524cf9c59149bb8e7eba6ef33a",
"text": "Twelve years after the Kikwit Ebola outbreak in 1995, Ebola virus reemerged in the Occidental Kasaï province of the Democratic Republic of Congo (DRC) between May and November 2007, affecting more than 260 humans and causing 186 deaths. During this latter outbreak we conducted several epidemiological investigations to identify the underlying ecological conditions and animal sources. Qualitative social and environmental data were collected through interviews with villagers and by direct observation. The local populations reported no unusual morbidity or mortality among wild or domestic animals, but they described a massive annual fruit bat migration toward the southeast, up the Lulua River. Migrating bats settled in the outbreak area for several weeks, between April and May, nestling in the numerous fruit trees in Ndongo and Koumelele islands as well as in palm trees of a largely abandoned plantation. They were massively hunted by villagers, for whom they represented a major source of protein. By tracing back the initial human-human transmission events, we were able to show that, in May, the putative first human victim bought freshly killed bats from hunters to eat. We were able to reconstruct the likely initial human-human transmission events that preceded the outbreak. This study provides the most likely sequence of events linking a human Ebola outbreak to exposure to fruit bats, a putative virus reservoir. These findings support the suspected role of bats in the natural cycle of Ebola virus and indicate that the massive seasonal fruit bat migrations should be taken into account in operational Ebola risk maps and seasonal alerts in the DRC.",
"title": ""
},
{
"docid": "0f26e233ad7d7f91681d53d6b13943a6",
"text": "Thanks to its potential in many applications, Blockchain has recently been nominated as one of the technologies exciting intense attention. Blockchain has solved the problem of changing the original low-trust centralized ledger held by a single third-party, to a high-trust decentralized form held by different entities, or in other words, verifying nodes. The key contribution of the work of Blockchain is the consensus algorithm, which decides how agreement is made to append a new block between all nodes in the verifying network. Blockchain algorithms can be categorized into two main groups. The first group is proof-based consensus, which requires the nodes joining the verifying network to show that they are more qualified than the others to do the appending work. The second group is voting-based consensus, which requires nodes in the network to exchange their results of verifying a new block or transaction, before making the final decision. In this paper, we present a review of the Blockchain consensus algorithms that have been researched and that are being applied in some well-known applications at this time.",
"title": ""
},
{
"docid": "28a86caf1d86c58941f72c71699fabb1",
"text": "Dicing of ultrathin (e.g. <; 75um thick) “via-middle” 3DI/TSV semiconductor wafers proves to be challenging because the process flow requires the dicing step to occur after wafer thinning and back side processing. This eliminates the possibility of using any type of “dice-before-grind” techniques. In addition, the presence of back side alignment marks, TSVs, or other features in the dicing street can add challenges for the dicing process. In this presentation, we will review different dicing processes used for 3DI/TSV via-middle products. Examples showing the optimization process for a 3DI/TSV memory device wafer product are provided.",
"title": ""
},
{
"docid": "105913d67437afafa6147b7c67e8d808",
"text": "This paper proposes to develop an electronic device for obstacle detection in the path of visually impaired people. This device assists a user to walk without colliding with any obstacles in their path. It is a wearable device in the form of a waist belt that has ultrasonic sensors and raspberry pi installed on it. This device detects obstacles around the user up to 500cm in three directions i.e. front, left and right using a network of ultrasonic sensors. These ultrasonic sensors are connected to raspberry pi that receives data signals from these sensors for further data processing. The algorithm running in raspberry pi computes the distance from the obstacle and converts it into text message, which is then converted into speech and conveyed to the user through earphones/speakers. This design is benefitial in terms of it’s portability, low-cost, low power consumption and the fact that neither the user nor the device requires initial training. Keywords—embedded systems; raspberry pi; speech feedback; ultrasonic sensor; visually impaired;",
"title": ""
},
{
"docid": "461786442ec8b8762019bb82d65491a5",
"text": "Fog computing is a new paradigm providing network services such as computing, storage between the end users and cloud. The distributed and open structure are the characteristics of fog computing, which make it vulnerable and very weak to security threats. In this article, the interaction between vulnerable nodes and malicious nodes in the fog computing is investigated as a non-cooperative differential game. The complex decision making process is reviewed and analyzed. To solve the game, a fictitious play-based algorithm is which the vulnerable node and the malicious nodes reach a feedback Nash equilibrium. We attain optimal strategy of energy consumption with QoS guarantee for the system, which are conveniently operated and suitable for fog nodes. The system simulation identifies the propagation of malicious nodes. We also determine the effects of various parameters on the optimal strategy. The simulation results support a theoretical foundation to limit malicious nodes in fog computing, which can help fog service providers make the optimal dynamic strategies when different types of nodes dynamically change their strategies.",
"title": ""
},
{
"docid": "3240607824a6dace92925e75df92cc09",
"text": "We propose a framework to model general guillotine restrictions in two-dimensional cutting problems formulated as Mixed Integer Linear Programs (MIP). The modeling framework requires a pseudo-polynomial number of variables and constraints, which can be effectively enumerated for medium-size instances. Our modeling of general guillotine cuts is the first one that, once it is implemented within a state-of-the-art MIP solver, can tackle instances of challenging size. We mainly concentrate our analysis on the Guillotine Two Dimensional Knapsack Problem (G2KP), for which a model, and an exact procedure able to significantly improve the computational performance, are given. We also show how the modeling of general guillotine cuts can be extended to other relevant problems such as the Guillotine Two Dimensional Cutting Stock Problem (G2CSP) and the Guillotine Strip Packing Problem (GSPP). Finally, we conclude the paper discussing an extensive set of computational experiments on G2KP and GSPP benchmark instances from the literature.",
"title": ""
},
{
"docid": "7e682f98ee6323cd257fda07504cba20",
"text": "We present a method for automated segmentation of the vasculature in retinal images. The method produces segmentations by classifying each image pixel as vessel or nonvessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and two-dimensional Gabor wavelet transform responses taken at multiple scales. The Gabor wavelet is capable of tuning to specific frequencies, thus allowing noise filtering and vessel enhancement in a single step. We use a Bayesian classifier with class-conditional probability density functions (likelihoods) described as Gaussian mixtures, yielding a fast classification, while being able to model complex decision surfaces. The probability distributions are estimated based on a training set of labeled pixels obtained from manual segmentations. The method's performance is evaluated on publicly available DRIVE (Staal et al.,2004) and STARE (Hoover et al.,2000) databases of manually labeled images. On the DRIVE database, it achieves an area under the receiver operating characteristic curve of 0.9614, being slightly superior than that presented by state-of-the-art approaches. We are making our implementation available as open source MATLAB scripts for researchers interested in implementation details, evaluation, or development of methods",
"title": ""
},
{
"docid": "c450da231d3c3ec8410fe621f4ced54a",
"text": "Distant supervision is a widely applied approach to automatic training of relation extraction systems and has the advantage that it can generate large amounts of labelled data with minimal effort. However, this data may contain errors and consequently systems trained using distant supervision tend not to perform as well as those based on manually labelled data. This work proposes a novel method for detecting potential false negative training examples using a knowledge inference method. Results show that our approach improves the performance of relation extraction systems trained using distantly supervised data.",
"title": ""
},
{
"docid": "dbab8fdd07b1180ba425badbd1616bb2",
"text": "The proliferation of cyber-physical systems introduces the fourth stage of industrialization, commonly known as Industry 4.0. The vertical integration of various components inside a factory to implement a flexible and reconfigurable manufacturing system, i.e., smart factory, is one of the key features of Industry 4.0. In this paper, we present a smart factory framework that incorporates industrial network, cloud, and supervisory control terminals with smart shop-floor objects such as machines, conveyers, and products. Then, we provide a classification of the smart objects into various types of agents and define a coordinator in the cloud. The autonomous decision and distributed cooperation between agents lead to high flexibility. Moreover, this kind of self-organized system leverages the feedback and coordination by the central coordinator in order to achieve high efficiency. Thus, the smart factory is characterized by a self-organized multi-agent system assisted with big data based feedback and coordination. Based on this model, we propose an intelligent negotiation mechanism for agents to cooperate with each other. Furthermore, the study illustrates that complementary strategies can be designed to prevent deadlocks by improving the agents’ decision making and the coordinator’s behavior. The simulation results assess the effectiveness of the proposed negotiation mechanism and deadlock prevention strategies. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "aca217edab9f2727bfd5f0be6e174ca1",
"text": "To develop a model of students' debugging processes, I conducted a qualitative analysis of young students engaged in debugging computer programs they had written in the programming language Scratch. I present a microgenetic analysis that tracks how one student's attention to elements of computer program state shifted during his debugging process. I present evidence that this student had relevant domain knowledge and claim that his changing attention within the problem, and not his domain knowledge, mediated his debugging process. I hypothesize that a key competence in debugging is learning to identify what elements of program state are important to pay attention to and that this attention, and not only domain knowledge, mediates the debugging process. This hypothesis is consistent with a model of physics reasoning and learning from the Knowledge in Pieces theoretical framework and in this research I build upon education research outside of computer science. The case study analyzes the debugging process of a student entering the sixth grade, but I document an isomorphic case from a pair of college students to show that this pattern extends beyond this age.",
"title": ""
},
{
"docid": "02a276b26400fe37804298601b16bc13",
"text": "Over the years, different meanings have been associated with the word consistency in the distributed systems community. While in the ’80s “consistency” typically meant strong consistency, later defined also as linearizability, in recent years, with the advent of highly available and scalable systems, the notion of “consistency” has been at the same time both weakened and blurred.\n In this article, we aim to fill the void in the literature by providing a structured and comprehensive overview of different consistency notions that appeared in distributed systems, and in particular storage systems research, in the last four decades. We overview more than 50 different consistency notions, ranging from linearizability to eventual and weak consistency, defining precisely many of these, in particular where the previous definitions were ambiguous. We further provide a partial order among different consistency predicates, ordering them by their semantic “strength,” which we believe will be useful in future research. Finally, we map the consistency semantics to different practical systems and research prototypes.\n The scope of this article is restricted to non-transactional semantics, that is, those that apply to single storage object operations. As such, our article complements the existing surveys done in the context of transactional, database consistency semantics.",
"title": ""
},
{
"docid": "52ca3ab904821ed9baf514127a4d10e8",
"text": "Johnson, S. (1996). Down’s syndrome screening in the UK, Lancet, 347, 906–907. Pandya, P.P., Snijders, R.J.M., Johnson, S.P., de Loudes Brizot, M., Nicolaides, K.H. (1995). Screening for fetal trisomies by maternal age and fetal nuchal translucency thickness at 10 to 14 weeks of gestation, Br. J. Obstet. Gynaecol., 102, 957–962. Schuchter, K., Wald, N.J., Hackshaw, A.K., Hafner, E., Liebhardt, E. (1998). The distribution of nuchal translucency at 10–13 weeks of pregnancy, Prenat. Diagn., 18, 281–286. Wald, N.J., George, L., Smith, D., Densem, J.W., Petterson, K. (1996). On behalf of the International Prenatal Screening Research Group. Serum screening for Down’s syndrome between 8 and 14 weeks of pregnancy, Br. J. Obstet. Gynaecol., 103, 407– 412. Wald, N.J., Hackshaw, A.K. (1997). Combining ultrasound and biochemistry in first-trimester screening for Down’s syndrome, Prenat. Diagn., 17, 821–829. Wald, N.J., Kennard, A., Hackshaw, A., McGuire, A. (1997). Antenatal screening for Down’s syndrome, J. Med. Screren, 4, 181–246. Wald, N.J., Stone, R., Cuckle, H.S., Grudzinskas, J.G., Barkai, G., Brambati, B., Teisner, B., Fuhrmann, W. (1992). First trimester concentrations of PAPP-A and placental protein 14 in Down’s syndrome, BMJ, 305, 28.",
"title": ""
},
{
"docid": "bf7679eedfe88210b70105d50ae8acf4",
"text": "Figure 1: Latent space of unsupervised VGAE model trained on Cora citation network dataset [1]. Grey lines denote citation links. Colors denote document class (not provided during training). Best viewed on screen. We introduce the variational graph autoencoder (VGAE), a framework for unsupervised learning on graph-structured data based on the variational auto-encoder (VAE) [2, 3]. This model makes use of latent variables and is capable of learning interpretable latent representations for undirected graphs (see Figure 1).",
"title": ""
},
{
"docid": "6c2317957daf4f51354114de62f660a1",
"text": "This paper proposes a framework for recognizing complex human activities in videos. Our method describes human activities in a hierarchical discriminative model that operates at three semantic levels. At the lower level, body poses are encoded in a representative but discriminative pose dictionary. At the intermediate level, encoded poses span a space where simple human actions are composed. At the highest level, our model captures temporal and spatial compositions of actions into complex human activities. Our human activity classifier simultaneously models which body parts are relevant to the action of interest as well as their appearance and composition using a discriminative approach. By formulating model learning in a max-margin framework, our approach achieves powerful multi-class discrimination while providing useful annotations at the intermediate semantic level. We show how our hierarchical compositional model provides natural handling of occlusions. To evaluate the effectiveness of our proposed framework, we introduce a new dataset of composed human activities. We provide empirical evidence that our method achieves state-of-the-art activity classification performance on several benchmark datasets.",
"title": ""
},
{
"docid": "eb2e440b20fa3a3d99f70f4b89f6c216",
"text": "The National Library of Medicine (NLM) is developing a digital chest X-ray (CXR) screening system for deployment in resource constrained communities and developing countries worldwide with a focus on early detection of tuberculosis. A critical component in the computer-aided diagnosis of digital CXRs is the automatic detection of the lung regions. In this paper, we present a nonrigid registration-driven robust lung segmentation method using image retrieval-based patient specific adaptive lung models that detects lung boundaries, surpassing state-of-the-art performance. The method consists of three main stages: 1) a content-based image retrieval approach for identifying training images (with masks) most similar to the patient CXR using a partial Radon transform and Bhattacharyya shape similarity measure, 2) creating the initial patient-specific anatomical model of lung shape using SIFT-flow for deformable registration of training masks to the patient CXR, and 3) extracting refined lung boundaries using a graph cuts optimization approach with a customized energy function. Our average accuracy of 95.4% on the public JSRT database is the highest among published results. A similar degree of accuracy of 94.1% and 91.7% on two new CXR datasets from Montgomery County, MD, USA, and India, respectively, demonstrates the robustness of our lung segmentation approach.",
"title": ""
},
{
"docid": "0b135f95bfcccf34c75959a41a0a7fe6",
"text": "Analogy is a kind of similarity in which the same system of relations holds across different objects. Analogies thus capture parallels across different situations. When such a common structure is found, then what is known about one situation can be used to infer new information about the other. This chapter describes the processes involved in analogical reasoning, reviews foundational research and recent developments in the field, and proposes new avenues of investigation.",
"title": ""
}
] |
scidocsrr
|
e6c6ef8e628b7f0d7b8433c2ea2d1b50
|
Climate of scepticism : US newspaper coverage of the science of climate change
|
[
{
"docid": "0bc7de3f7ac06aa080ec590bdaf4c3b3",
"text": "This paper demonstrates that US prestige-press coverage of global warming from 1988 to 2002 has contributed to a significant divergence of popular discourse from scientific discourse. This failed discursive translation results from an accumulation of tactical media responses and practices guided by widely accepted journalistic norms. Through content analysis of US prestige press— meaning the New York Times, the Washington Post, the Los Angeles Times, and the Wall Street Journal—this paper focuses on the norm of balanced reporting, and shows that the prestige press’s adherence to balance actually leads to biased coverage of both anthropogenic contributions to global warming and resultant action. r 2003 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "aaebd4defcc22d6b1e8e617ab7f3ec70",
"text": "In the American political process, news discourse concerning public policy issues is carefully constructed. This occurs in part because both politicians and interest groups take an increasingly proactive approach to amplify their views of what an issue is about. However, news media also play an active role in framing public policy issues. Thus, in this article, news discourse is conceived as a sociocognitive process involving all three players: sources, journalists, and audience members operating in the universe of shared culture and on the basis of socially defined roles. Framing analysis is presented as a constructivist approach to examine news discourse with the primary focus on conceptualizing news texts into empirically operationalizable dimensions—syntactical, script, thematic, and rhetorical structures—so that evidence of the news media's framing of issues in news texts may be gathered. This is considered an initial step toward analyzing the news discourse process as a whole. Finally, an extended empirical example is provided to illustrate the applications of this conceptual framework of news texts.",
"title": ""
}
] |
[
{
"docid": "1898536161383682f22126c59e185047",
"text": "E-mail foldering or e-mail classification into user predefined folders can be viewed as a text classification/categorization problem. However, it has some intrinsic properties that make it more difficult to deal with, mainly the large cardinality of the class variable (i.e. the number of folders), the different number of e-mails per class state and the fact that this is a dynamic problem, in the sense that e-mails arrive in our mail-forders following a time-line. Perhaps because of these problems, standard text-oriented classifiers such as Naive Bayes Multinomial do no obtain a good accuracy when applied to e-mail corpora. In this paper, we identify the imbalance among classes/folders as the main problem, and propose a new method based on learning and sampling probability distributions. Our experiments over a standard corpus (ENRON) with seven datasets (e-mail users) show that the results obtained by Naive Bayes Multinomial significantly improve when applying the balancing algorithm first. For the sake of completeness in our experimental study we also compare this with another standard balancing method (SMOTE) and classifiers.",
"title": ""
},
{
"docid": "106915eaac271c255aef1f1390577c64",
"text": "Parking is costly and limited in almost every major city in the world. Innovative parking systems for meeting near-term parking demand are needed. This paper proposes a novel, secure, and intelligent parking system (SmartParking) based on secured wireless network and sensor communication. From the point of users' view, SmartParking is a secure and intelligent parking service. The parking reservation is safe and privacy preserved. The parking navigation is convenient and efficient. The whole parking process will be a non-stop service. From the point of management's view, SmartParking is an intelligent parking system. The parking process can be modeled as birth-death stochastic process and the prediction of revenues can be made. Based on the prediction, new business promotion can be made, for example, on-sale prices and new parking fees. In SmartParking, new promotions can be published through wireless network. We address hardware/software architecture, implementations, and analytical models and results. The evaluation of this proposed system proves its efficiency.",
"title": ""
},
{
"docid": "41098050e76786afbb892d4cd1ffaad2",
"text": "Human grasps, especially whole-hand grasps, are difficult to animate because of the high number of degrees of freedom of the hand and the need for the hand to conform naturally to the object surface. Captured human motion data provides us with a rich source of examples of natural grasps. However, for each new object, we are faced with the problem of selecting the best grasp from the database and adapting it to that object. This paper presents a data-driven approach to grasp synthesis. We begin with a database of captured human grasps. To identify candidate grasps for a new object, we introduce a novel shape matching algorithm that matches hand shape to object shape by identifying collections of features having similar relative placements and surface normals. This step returns many grasp candidates, which are clustered and pruned by choosing the grasp best suited for the intended task. For pruning undesirable grasps, we develop an anatomically-based grasp quality measure specific to the human hand. Examples of grasp synthesis are shown for a variety of objects not present in the original database. This algorithm should be useful both as an animator tool for posing the hand and for automatic grasp synthesis in virtual environments.",
"title": ""
},
{
"docid": "24743c98daddd3bc733921c643e723b9",
"text": "In this work, inspired by two different approaches available in the literature, we present two ways of nonlinear control for the attitude of a quadrotor unmanned aerial vehicle (UAV) : the first one is based on backstepping and the second one is developed directly on the special orthogonal group, SO(3), using the Lyapunov stability theory. In order to prove the advantages of these nonlinear controllers, they will be compared with a proporcional derivative (PD) and a linear quadratic regulator (LQR) controllers, which are the typical solutions for controlling the quadrotor attitude. About the attitude estimation, a set of sensors composed by a 3-axis accelerometer, a 3-axis gyroscope and a 3-axis magnetometer will be used and several estimators based on the Kalman Filter will be studied. Once the full model is developed (made up of the quadrotor motion, actuators and sensors models) and a simulator is built, two levels of control will be implemented in a cascade control configuration: a low level control, for stabilizing or tracking attitude and altitude, and a high level control (by means of an horizontal guidance controller) for tracking a desired path in an horizontal plane. Our simulation shows that the PD controller is not very reliable working with estimators, and that the nonlinear controllers present the best performace, although the LQR controller has also a quite acceptable behaviour.",
"title": ""
},
{
"docid": "92684148cd7d2a6a21657918015343b0",
"text": "Radiative wireless power transfer (WPT) is a promising technology to provide cost-effective and real-time power supplies to wireless devices. Although radiative WPT shares many similar characteristics with the extensively studied wireless information transfer or communication, they also differ significantly in terms of design objectives, transmitter/receiver architectures and hardware constraints, and so on. In this paper, we first give an overview on the various WPT technologies, the historical development of the radiative WPT technology and the main challenges in designing contemporary radiative WPT systems. Then, we focus on the state-of-the-art communication and signal processing techniques that can be applied to tackle these challenges. Topics discussed include energy harvester modeling, energy beamforming for WPT, channel acquisition, power region characterization in multi-user WPT, waveform design with linear and non-linear energy receiver model, safety and health issues of WPT, massive multiple-input multiple-output and millimeter wave enabled WPT, wireless charging control, and wireless power and communication systems co-design. We also point out directions that are promising for future research.",
"title": ""
},
{
"docid": "82f029ebcca0216bccfdb21ab13ac593",
"text": "Presently, middleware technologies abound for the Internet-of-Things (IoT), directed at hiding the complexity of underlying technologies and easing the use and management of IoT resources. The middleware solutions of today are capable technologies, which provide much advanced services and that are built using superior architectural models, they however fail short in some important aspects: existing middleware do not properly activate the link between diverse applications with much different monitoring purposes and many disparate sensing networks that are of heterogeneous nature and geographically dispersed. Then, current middleware are unfit to provide some system-wide global arrangement (intelligence, routing, data delivery) emerging from the behaviors of the constituent nodes, rather than from the coordination of single elements, i.e. self-organization. This paper presents the SIMPLE self-organized and intelligent middleware platform. SIMPLE middleware innovates from current state-of-research exactly by exhibiting self-organization properties, a focus on data-dissemination using multi-level subscriptions processing and a tiered networking approach able to cope with many disparate, widespread and heterogeneous sensing networks (e.g. WSN). In this way, the SIMLE middleware is provided as robust zero-configuration technology, with no central dependable system, immune to failures, and able to efficiently deliver the right data at the right time, to needing applications.",
"title": ""
},
{
"docid": "b3b02767cdf765b46a26f79c26730503",
"text": "In the last decade, computational models that distinguish semantic relations have become crucial for many applications in Natural Language Processing (NLP), such as machine translation, question answering, sentiment analysis, and so on. These computational models typically distinguish semantic relations by either representing semantically related words as vector representations in the vector space, or using neural networks to classify semantic relations. In this thesis, we mainly focus on the improvement of such computational models. Specifically, the goal of this thesis is to address the tasks of distinguishing antonymy, synonymy, and hypernymy. For the task of distinguishing antonymy and synonymy, we propose two approaches. In the first approach, we focus on improving both families of word vector representations, which are distributional and distributed vector representations. Regarding the improvement of distributional vector representation, we propose a novel weighted feature for constructing word vectors by relying on distributional lexical contrast, a feature capable of differentiating between antonymy and synonymy. In terms of the improvement of distributed vector representations, we propose a neural model to learn word vectors by integrating distributional lexical contrast into the objective function of the neural model. The resulting word vectors can distinguish antonymy from synonymy and predict degrees of word similarity. In the second approach, we aim to use lexico-syntactic patterns to classify antonymy and synonymy. To do so, we propose two pattern-based neural networks to distinguish antonymy from synonymy. The lexico-syntactic patterns are induced from the syntactic parse trees and then encoded as vector representations by neural networks. As a result, the two pattern-based neural networks improve performance over prior pattern-based methods. For the tasks of distinguishing hypernymy, we propose a novel neural model to learn hierarchical embeddings for hypernymy detection and directionality. The hierarchical embeddings are learned according to two underlying aspects (i) that the similarity of hypernymy is higher than similarity of other relations, and (ii) that the distributional hierarchy is generated between hyponyms and hypernyms. The experimental results show that hierarchical embeddings significantly outperform state-of-the-art word embeddings. In order to improve word embeddings for measuring semantic similarity and relatedness, we propose two neural models to learn word denoising embeddings by filtering noise from original word embeddings without using any external resources. Two proposed neural models receive original word embeddings as inputs and learn denoising matrices to filter noise from original word embeddings. Word denoising embeddings achieve the improvement against original word embeddings over tasks of semantic similarity and relatedness. Furthermore, rather than using English, we also shift the focus on evaluating the performance of computational models to Vietnamese. To that effect, we introduce two novel datasets of (dis-)similarity and relatedness for Vietnamese. We then make use of computational models to verify the two datasets and to observe their performance in being adapted to Vietnamese. The results show that computational models exhibit similar behaviour in the two Vietnamese datasets as in the corresponding English datasets.",
"title": ""
},
{
"docid": "bf57a5fcf6db7a9b26090bd9a4b65784",
"text": "Plate osteosynthesis is still recognized as the treatment of choice for most articular fractures, many metaphyseal fractures, and certain diaphyseal fractures such as in the forearm. Since the 1960s, both the techniques and implants used for internal fixation with plates have evolved to provide for improved healing. Most recently, plating methods have focused on the principles of 'biological fixation'. These methods attempt to preserve the blood supply to improve the rate of fracture healing, decrease the need for bone grafting, and decrease the incidence of infection and re-fracture. The purpose of this article is to provide a brief overview of the history of plate osteosynthesis as it relates to the development of the latest minimally invasive surgical techniques.",
"title": ""
},
{
"docid": "54611986a25fd54539d7f5419d77e0d8",
"text": "Advanced metering infrastructure (AMI) is an important component for a smart grid system to measure, collect, store, analyze, and operate users consumption data. The need of communication and data transmission between consumers (smart meters) and utilities make AMI vulnerable to various attacks. In this paper, we focus on distributed denial of service attack in the AMI network. We introduce honeypots into the AMI network as a decoy system to detect and gather attack information. We analyze the interactions between the attackers and the defenders, and derive optimal strategies for both sides. We further prove the existence of several Bayesian-Nash equilibriums in the honeypot game. Finally, we evaluate our proposals on an AMI testbed in the smart grid, and the results show that our proposed strategy is effective in improving the efficiency of defense with the deployment of honeypots.",
"title": ""
},
{
"docid": "6f80ca376936dc6f682a3a16587d87b3",
"text": "System Dynamics is often used to explore issues that are characterised by uncertainties. This paper discusses first of all different types of uncertainties that system dynamicists need to deal with and the tools they already use to deal with these uncertainties. From this discussion it is concluded that stand-alone System Dynamics is often not sufficient to deal with uncertainties. Then, two venues for improving the capacity of System Dynamics to deal with uncertainties are discussed, in both cases, by matching System Dynamics with other method(ologie)s: first with Multi-Attribute Multiple Criteria Decision Analysis, and finally with Exploratory Modelling.",
"title": ""
},
{
"docid": "4aa7df0faff824301ec1cbbab32ab09c",
"text": "As mobile services are shifting from connection- centric communications to content-centric communications, content-centric wireless networking emerges as a promising paradigm to evolve the current network architecture. Caching popular content at the wireless edge, including base stations and user terminals, provides an effective approach to alleviate the heavy burden on backhaul links, as well as lower delays and deployment costs. In contrast to wired networks, a unique characteristic of content-centric wireless networks (CCWNs) is the mobility of mobile users. While it has rarely been considered by existing works on caching design, user mobility contains various helpful side information that can be exploited to improve caching efficiency at both BSs and user terminals. In this article, we present a general framework for mobility-aware caching in CCWNs. Key properties of user mobility patterns that are useful for content caching are first identified, and then different design methodologies for mobility-aware caching are proposed. Moreover, two design examples are provided to illustrate the proposed framework in detail, and interesting future research directions are identified.",
"title": ""
},
{
"docid": "3c89e7c5fdd2269ffb17adcaec237d6c",
"text": "Numerical simulation of quantum systems is crucial to further our understanding of natural phenomena. Many systems of key interest and importance, in areas such as superconducting materials and quantum chemistry, are thought to be described by models which we cannot solve with sufficient accuracy, neither analytically nor numerically with classical computers. Using a quantum computer to simulate such quantum systems has been viewed as a key application of quantum computation from the very beginning of the field in the 1980s. Moreover, useful results beyond the reach of classical computation are expected to be accessible with fewer than a hundred qubits, making quantum simulation potentially one of the earliest practical applications of quantum computers. In this paper we survey the theoretical and experimental development of quantum simulation using quantum computers, from the first ideas to the intense research efforts currently underway.",
"title": ""
},
{
"docid": "cbd52c9d8473a81b92fcdd740326613f",
"text": "Optimizing decisions has become a vital factor for companies. In order to be able to evaluate beforehand the impact of a decision, managers need reliable previsional systems. Though data warehouses enable analysis of past data, they are not capable of giving anticipations of future trends. What-if analysis fills this gap by enabling users to simulate and inspect the behavior of a complex system under some given hypotheses. A crucial issue in the design of what-if applications is to find an adequate formalism to conceptually express the underlying simulation model. In this paper the authors report on how, within the framework of a comprehensive design methodology, this can be accomplished by extending UML 2 with a set of stereotypes. Their proposal is centered on the use of activity diagrams enriched with object flows, aimed at expressing functional, dynamic, and static aspects in an integrated fashion. The paper is completed by examples taken from a real case study in the commercial area. DOI: 10.4018/jdwm.2009080702 IGI PUBLISHING This paper appears in the publication, International Journal of Data Warehousing and Mining, Volume 5, Issue 4 edited by David Taniar © 2009, IGI Global 701 E. Chocolate Avenue, Hershey PA 17033-1240, USA Tel: 717/533-8845; Fax 717/533-8661; URL-http://www.igi-global.com ITJ 5290 International Journal of Data Warehousing and Mining, 5(4), 24-43, October-December 2009 25 Copyright © 2009, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. The BI pyramid demonstrates that data warehouses, that have been playing a lead role within BI platforms in supporting the decision process over the last decade, are no more than the starting point for the application of more advanced techniques that aim at building a bridge to the real decision-making process. This is because data warehouses are aimed at enabling analysis of past data, but they are not capable of giving anticipations of future trends. Indeed, in order to be able to evaluate beforehand the impact of a strategic or tactical move, decision makers need reliable previsional systems. So, almost at the top of the BI pyramid, what-if analysis comes into play. What-if analysis is a data-intensive simulation whose goal is to inspect the behavior of a complex system (i.e., the enterprise business or a part of it) under some given hypotheses called scenarios. More pragmatically, what-if analysis measures how changes in a set of independent variables impact on a set of dependent variables with reference to a simulation model offering a simplified representation of the business, designed to display significant features of the business and tuned according to the historical enterprise data (Kellern et al., 1999). Example 1: A simple example of what-if query in the marketing domain is: How would my profits change if I run a 3×2 (pay 2, take 3) promotion for one week on all audio products on sale? Answering this query requires a simulation model to be built. This model, that must be capable of expressing the complex relationships between the business variables that determine the impact of promotions on product sales, is then run against the historical sale data in order to determine a reliable forecast for future sales. 
Among the killer applications for what-if analysis, it is worth mentioning profitability analysis in commerce, hazard analysis in finance, promotion analysis in marketing, and effectiveness analysis in production planning (Rizzi, 2009b). Less traditional, yet interesting applications described in the literature are urban and regional planning supported by spatial databases, index selection in relational databases, and ETL maintenance in data warehousing systems. Surprisingly, though a few commercial tools are already capable of performing forecasting and what-if analysis, very few attempts have been made so far outside the simulation community to address methodological and modeling issues in this field (Golfarelli et al., 2006). On the other hand, facing a what-if project without the support of a design methodology is very time-consuming, and does not adequately protect designers and customers against the risk of failure. [Figure 1: The business intelligence pyramid.]",
"title": ""
},
{
"docid": "6ac202a4897d400a60b72dc660ead142",
"text": "This paper proposes a simple yet highly accurate system for the recognition or unconstrained handwritten numerals. It starts with an examination of the basic characteristic loci (CL) features used along with a nearest neighbor classifier achieving a recognition rate of 90.5%. We then illustrate how the basic CL implementation can be extended and used in conjunction with a multilayer perception neural network classifier to increase the recognition rate to 98%. This proposed recognition system was tested on a totally unconstrained handwritten numeral database while training it with only 600 samples exclusive from the test set. An accuracy exceeding 98% is also expected if a larger training set is used. Lastly, to demonstrate the effectiveness of the system its performance is also compared to that of some other common recognition schemes. These systems use moment Invariants as features along with nearest neighbor classification schemes.",
"title": ""
},
{
"docid": "c2f340d9ac07783680b5dc96d1e26ae9",
"text": "Transportation plays a significant role in carbon dioxide (CO2) emissions, accounting for approximately a third of the United States’ inventory. In order to reduce CO2 emissions in the future, transportation policy makers are looking to make vehicles more efficient and increasing the use of carbon-neutral alternative fuels. In addition, CO2 emissions can be lowered by improving traffic operations, specifically through the reduction of traffic congestion. This paper examines traffic congestion and its impact on CO2 emissions using detailed energy and emission models and linking them to real-world driving patterns and traffic conditions. Using a typical traffic condition in Southern California as example, it has been found that CO2 emissions can be reduced by up to almost 20% through three different strategies: 1) congestion mitigation strategies that reduce severe congestion, allowing traffic to flow at better speeds; 2) speed management techniques that reduce excessively high free-flow speeds to more moderate conditions; and 3) shock wave suppression techniques that eliminate the acceleration/deceleration events associated with stop-and-go traffic that exists during congested conditions. Barth/Boriboonsomsin 3",
"title": ""
},
{
"docid": "e53de7a588d61f513a77573b7b27f514",
"text": "In the past, there have been dozens of studies on automatic authorship classification, and many of these studies concluded that the writing style is one of the best indicators for original authorship. From among the hundreds of features which were developed, syntactic features were best able to reflect an author's writing style. However, due to the high computational complexity for extracting and computing syntactic features, only simple variations of basic syntactic features such as function words, POS(Part of Speech) tags, and rewrite rules were considered. In this paper, we propose a new feature set of k-embedded-edge subtree patterns that holds more syntactic information than previous feature sets. We also propose a novel approach to directly mining them from a given set of syntactic trees. We show that this approach reduces the computational burden of using complex syntactic structures as the feature set. Comprehensive experiments on real-world datasets demonstrate that our approach is reliable and more accurate than previous studies.",
"title": ""
},
{
"docid": "0f699e9f14753b2cbfb7f7a3c7057f40",
"text": "There has been much recent work on training neural attention models at the sequencelevel using either reinforcement learning-style methods or by optimizing the beam. In this paper, we survey a range of classical objective functions that have been widely used to train linear models for structured prediction and apply them to neural sequence to sequence models. Our experiments show that these losses can perform surprisingly well by slightly outperforming beam search optimization in a like for like setup. We also report new state of the art results on both IWSLT’14 German-English translation as well as Gigaword abstractive summarization. On the large WMT’14 English-French task, sequence-level training achieves 41.5 BLEU which is on par with the state of the art.1",
"title": ""
},
{
"docid": "9cdddf98d24d100c752ea9d2b368bb77",
"text": "Using predictive models to identify patterns that can act as biomarkers for different neuropathoglogical conditions is becoming highly prevalent. In this paper, we consider the problem of Autism Spectrum Disorder (ASD) classification where previous work has shown that it can be beneficial to incorporate a wide variety of meta features, such as socio-cultural traits, into predictive modeling. A graph-based approach naturally suits these scenarios, where a contextual graph captures traits that characterize a population, while the specific brain activity patterns are utilized as a multivariate signal at the nodes. Graph neural networks have shown improvements in inferencing with graph-structured data. Though the underlying graph strongly dictates the overall performance, there exists no systematic way of choosing an appropriate graph in practice, thus making predictive models non-robust. To address this, we propose a bootstrapped version of graph convolutional neural networks (G-CNNs) that utilizes an ensemble of weakly trained G-CNNs, and reduce the sensitivity of models on the choice of graph construction. We demonstrate its effectiveness on the challenging Autism Brain Imaging Data Exchange (ABIDE) dataset and show that our approach improves upon recently proposed graph-based neural networks. We also show that our method remains more robust to noisy graphs.",
"title": ""
},
{
"docid": "99afae20842a366501f09b29ad78fa21",
"text": "Automated program repair recently received considerable attentions, and many techniques on this research area have been proposed. Among them, two genetic-programming-based techniques, GenProg and Par, have shown the promising results. In particular, GenProg has been used as the baseline technique to check the repair effectiveness of new techniques in much literature. Although GenProg and Par have shown their strong ability of fixing real-life bugs in nontrivial programs, to what extent GenProg and Par can benefit from genetic programming, used by them to guide the patch search process, is still unknown. \n To address the question, we present a new automated repair technique using random search, which is commonly considered much simpler than genetic programming, and implement a prototype tool called RSRepair. Experiment on 7 programs with 24 versions shipping with real-life bugs suggests that RSRepair, in most cases (23/24), outperforms GenProg in terms of both repair effectiveness (requiring fewer patch trials) and efficiency (requiring fewer test case executions), justifying the stronger strength of random search over genetic programming. According to experimental results, we suggest that every proposed technique using optimization algorithm should check its effectiveness by comparing it with random search.",
"title": ""
}
] |
scidocsrr
|
8842e07b29f651b15e2d22f202c484ec
|
Machine Reading with Background Knowledge
|
[
{
"docid": "bf6a69245dcb757cd79663c8803dc645",
"text": "Despite its substantial coverage, NomBank does not account for all withinsentence arguments and ignores extrasentential arguments altogether. These arguments, which we call implicit, are important to semantic processing, and their recovery could potentially benefit many NLP applications. We present a study of implicit arguments for a select group of frequent nominal predicates. We show that implicit arguments are pervasive for these predicates, adding 65% to the coverage of NomBank. We demonstrate the feasibility of recovering implicit arguments with a supervised classification model. Our results and analyses provide a baseline for future work on this emerging task.",
"title": ""
},
{
"docid": "5d63815adaad5d2c1b80ddd125157842",
"text": "We consider the problem of building scalable semantic parsers for Freebase, and present a new approach for learning to do partial analyses that ground as much of the input text as possible without requiring that all content words be mapped to Freebase concepts. We study this problem on two newly introduced large-scale noun phrase datasets, and present a new semantic parsing model and semi-supervised learning approach for reasoning with partial ontological support. Experiments demonstrate strong performance on two tasks: referring expression resolution and entity attribute extraction. In both cases, the partial analyses allow us to improve precision over strong baselines, while parsing many phrases that would be ignored by existing techniques.",
"title": ""
}
] |
[
{
"docid": "45879e14f7fe6fe527739d74595b46dd",
"text": "Malware is one of the most damaging security threats facing the Internet today. Despite the burgeoning literature, accurate detection of malware remains an elusive and challenging endeavor due to the increasing usage of payload encryption and sophisticated obfuscation methods. Also, the large variety of malware classes coupled with their rapid proliferation and polymorphic capabilities and imperfections of real-world data (noise, missing values, etc) continue to hinder the use of more sophisticated detection algorithms. This paper presents a novel machine learning based framework to detect known and newly emerging malware at a high precision using layer 3 and layer 4 network traffic features. The framework leverages the accuracy of supervised classification in detecting known classes with the adaptability of unsupervised learning in detecting new classes. It also introduces a tree-based feature transformation to overcome issues due to imperfections of the data and to construct more informative features for the malware detection task. We demonstrate the effectiveness of the framework using real network data from a large Internet service provider.",
"title": ""
},
{
"docid": "c8e029658bf4c298cb6e77128d19eac0",
"text": "Cloud Computing Business Framework (CCBF) is proposed to help organisations achieve good Cloud design, deployment, migration and services. While organisations adopt Cloud Computing for Web Services, technical and business challenges emerge and one of these includes the measurement of Cloud business performance. Organisational Sustainability Modelling (OSM) is a new way to measure Cloud business performance quantitatively and accurately. It combines statistical computation and 3D Visualisation to present the Return on Investment arising from the adoption of Cloud Computing by organisations. 3D visualisation simplifies the review process and is an innovative way for Return of Investment (ROI) valuation. Two detailed case studies with SAP and Vodafone have been presented, where OSM has analysed the business performance and explained how CCBF offers insights, which are relatively helpful for WS and Grid businesses. Comparisons and discussions between CCBF and other approaches related to WS are presented, where lessons learned are useful for Web Services, Cloud and Grid communities.",
"title": ""
},
{
"docid": "5e7d5a86a007efd5d31e386c862fef5c",
"text": "This systematic review examined the published scientific research on the psychosocial impact of cleft lip and palate (CLP) among children and adults. The primary objective of the review was to determine whether having CLP places an individual at greater risk of psychosocial problems. Studies that examined the psychosocial functioning of children and adults with repaired non-syndromal CLP were suitable for inclusion. The following sources were searched: Medline (January 1966-December 2003), CINAHL (January 1982-December 2003), Web of Science (January 1981-December 2003), PsycINFO (January 1887-December 2003), the reference section of relevant articles, and hand searches of relevant journals. There were 652 abstracts initially identified through database and other searches. On closer examination of these, only 117 appeared to meet the inclusion criteria. The full text of these papers was examined, with only 64 articles finally identified as suitable for inclusion in the review. Thirty of the 64 studies included a control group. The studies were longitudinal, cross-sectional, or retrospective in nature.Overall, the majority of children and adults with CLP do not appear to experience major psychosocial problems, although some specific problems may arise. For example, difficulties have been reported in relation to behavioural problems, satisfaction with facial appearance, depression, and anxiety. A few differences between cleft types have been found in relation to self-concept, satisfaction with facial appearance, depression, attachment, learning problems, and interpersonal relationships. With a few exceptions, the age of the individual with CLP does not appear to influence the occurrence or severity of psychosocial problems. However, the studies lack the uniformity and consistency required to adequately summarize the psychosocial problems resulting from CLP.",
"title": ""
},
{
"docid": "a645943a02f5d71b146afe705fb6f49f",
"text": "Along with the developments in the field of information technologies, the data in the electronic environment is increasing. Data mining methods are needed to obtain useful information for users in electronic environment. One of these methods, clustering methods, aims to group data according to common properties. This grouping is often based on the distance between the data. Clustering methods are divided into hierarchical and non-hierarchical methods according to the fragmentation technique of clusters. The success of both types of clustering methods varies according to the data set applied. In this study, both types of methods were tested on different type of data sets. Selected methods compared according to five different evaluation metrics. The results of the analysis are presented comparatively at the end of the study and which methods are more convenient for data set is explained.",
"title": ""
},
{
"docid": "827c9d65c2c3a2a39d07c9df7a21cfe2",
"text": "A worldwide movement in advanced manufacturing countries is seeking to reinvigorate (and revolutionize) the industrial and manufacturing core competencies with the use of the latest advances in information and communications technology. Visual computing plays an important role as the \"glue factor\" in complete solutions. This article positions visual computing in its intrinsic crucial role for Industrie 4.0 and provides a general, broad overview and points out specific directions and scenarios for future research.",
"title": ""
},
{
"docid": "0533a5382c58c8714f442784b5596258",
"text": "Using 2 phase-change memory (PCM) devices per synapse, a 3-layer perceptron network with 164,885 synapses is trained on a subset (5000 examples) of the MNIST database of handwritten digits using a backpropagation variant suitable for NVM+selector crossbar arrays, obtaining a training (generalization) accuracy of 82.2% (82.9%). Using a neural network (NN) simulator matched to the experimental demonstrator, extensive tolerancing is performed with respect to NVM variability, yield, and the stochasticity, linearity and asymmetry of NVM-conductance response.",
"title": ""
},
{
"docid": "3f5b4c9d6da1a6e7949169e8613e6e03",
"text": "This study set out to investigate in which type of media individuals are more likely to tell self-serving and other-oriented lies, and whether this varied according to the recipient of the lie. One hundred and fifty participants rated on a likert-point scale how likely they would tell a lie. Participants were more likely to tell self-serving lies to people not well-known to them. They were more likely to tell self-serving lies in email, followed by phone, and finally face-to-face. Participants were more likely to tell other-oriented lies to individuals they felt close to and this did not vary according to the type media. Participants were also more likely to tell harsh truths to people not well-known to them via email.",
"title": ""
},
{
"docid": "622b0d9526dfee6abe3a605fa83e92ed",
"text": "Biomedical Image Processing is a growing and demanding field. It comprises of many different types of imaging methods likes CT scans, X-Ray and MRI. These techniques allow us to identify even the smallest abnormalities in the human body. The primary goal of medical imaging is to extract meaningful and accurate information from these images with the least error possible. Out of the various types of medical imaging processes available to us, MRI is the most reliable and safe. It does not involve exposing the body to any sorts of harmful radiation. This MRI can then be processed, and the tumor can be segmented. Tumor Segmentation includes the use of several different techniques. The whole process of detecting brain tumor from an MRI can be classified into four different categories: Pre-Processing, Segmentation, Optimization and Feature Extraction. This survey involves reviewing the research by other professionals and compiling it into one paper.",
"title": ""
},
{
"docid": "c1305b1ccc199126a52c6a2b038e24d1",
"text": "This study has devoted much effort to developing an integrated model designed to predict and explain an individual’s continued use of online services based on the concepts of the expectation disconfirmation model and the theory of planned behavior. Empirical data was collected from a field survey of Cyber University System (CUS) users to verify the fit of the hypothetical model. The measurement model indicates the theoretical constructs have adequate reliability and validity while the structured equation model is illustrated as having a high model fit for empirical data. Study’s findings show that a customer’s behavioral intention towards e-service continuance is mainly determined by customer satisfaction and additionally affected by perceived usefulness and subjective norm. Generally speaking, the integrated model can fully reflect the spirit of the expectation disconfirmation model and take advantage of planned behavior theory. After consideration of the impact of systemic features, personal characteristics, and social influence on customer behavior, the integrated model had a better explanatory advantage than other EDM-based models proposed in prior research. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "451110458791809898c854991a073119",
"text": "This paper considers the problem of face detection in first attempt using haar cascade classifier from images containing simple and complex backgrounds. It is one of the best detector in terms of reliability and speed. Experiments were carried out on standard database i.e. Indian face database (IFD) and Caltech database. All images are frontal face images because side face views are harder to detect with this technique. Opencv 2.4.2 is used to implement the haar cascade classifier. We achieved 100% face detection rate on Indian database containing simple background and 93.24% detection rate on Caltech database containing complex background. Haar cascade classifier provides high accuracy even the images are highly affected by the illumination. The haar cascade classifier has shown superior performance with simple background images.",
"title": ""
},
{
"docid": "c0e2d1740bbe2c40e7acf262cb658ea2",
"text": "The quest for algorithms that enable cognitive abilities is an important part of machine learning. A common trait in many recently investigated cognitive-like tasks is that they take into account different data modalities, such as visual and textual input. In this paper we propose a novel and generally applicable form of attention mechanism that learns high-order correlations between various data modalities. We show that high-order correlations effectively direct the appropriate attention to the relevant elements in the different data modalities that are required to solve the joint task. We demonstrate the effectiveness of our high-order attention mechanism on the task of visual question answering (VQA), where we achieve state-of-the-art performance on the standard VQA dataset.",
"title": ""
},
{
"docid": "10e24047026cc4a062b08fc28468bbff",
"text": "This comparative analysis of teacher-student interaction in two different instructional settings at the elementary-school level (18.3 hr in French immersion and 14.8 hr Japanese immersion) investigates the immediate effects of explicit correction, recasts, and prompts on learner uptake and repair. The results clearly show a predominant provision of recasts over prompts and explicit correction, regardless of instructional setting, but distinctively varied student uptake and repair patterns in relation to feedback type, with the largest proportion of repair resulting from prompts in French immersion and from recasts in Japanese immersion. Based on these findings and supported by an analysis of each instructional setting’s overall communicative orientation, we introduce the counterbalance hypothesis, which states that instructional activities and interactional feedback that act as a counterbalance to a classroom’s predominant communicative orientation are likely to prove more effective than instructional activities and interactional feedback that are congruent with its predominant communicative orientation.",
"title": ""
},
{
"docid": "4b57afbcbab770a30eba8ca2f2e15c1d",
"text": "Cats are strict carnivores and in the wild rely on a diet solely based on animal tissues to meet their specific and unique nutritional requirements. Although the feeding ecology of cats in the wild has been well documented in the literature, there is no information on the precise nutrient profile to which the cat's metabolism has adapted. The present study aimed to derive the dietary nutrient profile of free-living cats. Studies reporting the feeding habits of cats in the wild were reviewed and data on the nutrient composition of the consumed prey items obtained from the literature. Fifty-five studies reported feeding strategy data of cats in the wild. After specific exclusion criteria, twenty-seven studies were used to derive thirty individual dietary nutrient profiles. The results show that feral cats are obligatory carnivores, with their daily energy intake from crude protein being 52 %, from crude fat 46 % and from N-free extract only 2 %. Minerals and trace elements are consumed in relatively high concentrations compared with recommended allowances determined using empirical methods. The calculated nutrient profile may be considered the nutrient intake to which the cat's metabolic system has adapted. The present study provides insight into the nutritive, as well as possible non-nutritive aspects of a natural diet of whole prey for cats and provides novel ways to further improve feline diets to increase health and longevity.",
"title": ""
},
{
"docid": "bfd57465a5d6f85fb55ffe13ef79f3a5",
"text": "We investigate the utility of different auxiliary objectives and training strategies within a neural sequence labeling approach to error detection in learner writing. Auxiliary costs provide the model with additional linguistic information, allowing it to learn general-purpose compositional features that can then be exploited for other objectives. Our experiments show that a joint learning approach trained with parallel labels on in-domain data improves performance over the previous best error detection system. While the resulting model has the same number of parameters, the additional objectives allow it to be optimised more efficiently and achieve better performance.",
"title": ""
},
{
"docid": "e52c1b09ba51166ed62bbec680979cfa",
"text": "When simulating \"X-ray vision\" in Augmented Reality, a critical aspect is ensuring correct perception of the occluded objects position. Naïve overlay rendering of occluded objects on top of real-world occluders can lead to a misunderstanding of the visual scene and a poor perception of the depth. We present a simple technique to enhance the perception of the spatial arrangements in the scene. An importance mask associated with occluders informs the rendering what information can be overlaid and what should be preserved. This technique is independent of scene properties such as illumination and surface properties, which may be unknown. The proposed solution is computed efficiently in a single-pass fragment shaders on the GPU.",
"title": ""
},
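As a hedged illustration of the importance-mask idea in the abstract above, the snippet below composites a hidden layer with an occluder per pixel according to an importance mask, computed offline with NumPy rather than in a GPU fragment shader; the images and the gradient-based mask are stand-in assumptions.

```python
import numpy as np

h, w = 120, 160
occluder = np.full((h, w, 3), 0.7)
occluder[:, ::20, :] = 0.2                       # fake surface detail (edges) on the occluder
hidden = np.zeros((h, w, 3))
hidden[40:80, 60:100, 0] = 1.0                   # occluded object rendered into its own layer

# Importance mask: close to 1 where occluder detail (edges) should survive, 0 elsewhere.
gray = occluder.mean(axis=2)
gy, gx = np.gradient(gray)
importance = np.clip(np.hypot(gx, gy) * 10.0, 0.0, 1.0)[..., None]

# Ghosted composite: keep important occluder pixels, reveal the hidden layer elsewhere.
reveal = np.where(hidden.any(axis=2, keepdims=True), hidden, occluder)
composite = importance * occluder + (1.0 - importance) * reveal
print(composite.shape, float(composite.min()), float(composite.max()))
```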
{
"docid": "3b5216dfbd7b12cf282311d645b10a38",
"text": "3D CAD systems are used in product design for simultaneous engineering and to improve productivity. CAD tools can substantially enhance design performance. Although 3D CAD is a widely used and highly effective tool in mechanical design, mastery of CAD skills is complex and time-consuming. The concepts of parametric–associative models and systems are powerful tools whose efficiency is proportional to the complexity of their implementation. The availability of a framework for actions that can be taken to improve CAD efficiency can therefore be highly beneficial. Today, a clear and structured approach does not exist in this way for CAD methodology deployment. The novelty of this work is therefore to propose a general strategy for utilizing the advantages of parametric CAD in the automotive industry in the form of a roadmap. The main stages of the roadmap are illustrated by means of industrial use cases. The first results of his research are discussed and suggestions for future work are given. © 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0feae39f7e557a65699f686d14f4cf0f",
"text": "This paper describes the design of a multi-gigabit fiber-optic receiver with integrated large-area photo detectors for plastic optical fiber applications. An integrated 250 μm diameter non-SML NW/P-sub photo detector is adopted to allow efficient light coupling. The theory of applying a fully-differential pre-amplifier with a single-ended photo current is also examined and a super-Gm transimpedance amplifier has been proposed to drive a C PD of 14 pF to multi-gigahertz frequency. Both differential and common-mode operations of the proposed super-Gm transimpedance amplifier have been analyzed and a differential noise analysis is performed. A digitally-controlled linear equalizer is proposed to produce a slow-rising-slope frequency response to compensate for the photo detector up to 3 GHz. The proposed POF receiver consists of an illuminated signal photo detector, a shielded dummy photo detector, a super-Gm transimpedance amplifier, a variable-gain amplifier, a linear equalizer, a post amplifier, and an output driver. A test chip is fabricated in TSMC's 65 nm low-power CMOS process, and it consumes 50 mW of DC power (excluding the output driver) from a single 1.2 V supply. A bit-error rate of less than 10-12 has been measured at a data rate of 3.125 Gbps with a 670 nm VCSEL-based electro-optical transmitter.",
"title": ""
},
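A rough, hedged back-of-envelope calculation of why a 14 pF photodiode capacitance (the value quoted above) is hard to drive to multi-gigahertz bandwidth with a plain resistive front end; the transimpedance and gain values are assumptions for illustration, not figures from the paper.

```python
import math

C_pd = 14e-12          # photodiode capacitance from the abstract (14 pF)
R_t = 1e3              # assumed 1 kOhm transimpedance for illustration
A0 = 20.0              # assumed loop gain of the front-end amplifier

f_passive = 1.0 / (2 * math.pi * R_t * C_pd)   # pole of a bare R*C front end
f_boosted = A0 / (2 * math.pi * R_t * C_pd)    # idealized shunt-feedback / gain-boosted pole

print(f"bare RC pole        : {f_passive/1e6:7.1f} MHz")
print(f"with {A0:.0f}x loop gain : {f_boosted/1e6:7.1f} MHz  (still short of ~3 GHz, hence equalization)")
```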
{
"docid": "fe62e3a9acfe5009966434aa1f39099d",
"text": "Previous studies have found a subgroup of people with autism or Asperger Syndrome who pass second-order tests of theory of mind. However, such tests have a ceiling in developmental terms corresponding to a mental age of about 6 years. It is therefore impossible to say if such individuals are intact or impaired in their theory of mind skills. We report the performance of very high functioning adults with autism or Asperger Syndrome on an adult test of theory of mind ability. The task involved inferring the mental state of a person just from the information in photographs of a person's eyes. Relative to age-matched normal controls and a clinical control group (adults with Tourette Syndrome), the group with autism and Asperger Syndrome were significantly impaired on this task. The autism and Asperger Syndrome sample was also impaired on Happé's strange stories tasks. In contrast, they were unimpaired on two control tasks: recognising gender from the eye region of the face, and recognising basic emotions from the whole face. This provides evidence for subtle mindreading deficits in very high functioning individuals on the autistic continuum.",
"title": ""
},
{
"docid": "528a22ba860fd4ad4da3773ff2b01dcd",
"text": "During the last decade it has become more widely accepted that pet ownership and animal assistance in therapy and education may have a multitude of positive effects on humans. Here, we review the evidence from 69 original studies on human-animal interactions (HAI) which met our inclusion criteria with regard to sample size, peer-review, and standard scientific research design. Among the well-documented effects of HAI in humans of different ages, with and without special medical, or mental health conditions are benefits for: social attention, social behavior, interpersonal interactions, and mood; stress-related parameters such as cortisol, heart rate, and blood pressure; self-reported fear and anxiety; and mental and physical health, especially cardiovascular diseases. Limited evidence exists for positive effects of HAI on: reduction of stress-related parameters such as epinephrine and norepinephrine; improvement of immune system functioning and pain management; increased trustworthiness of and trust toward other persons; reduced aggression; enhanced empathy and improved learning. We propose that the activation of the oxytocin system plays a key role in the majority of these reported psychological and psychophysiological effects of HAI. Oxytocin and HAI effects largely overlap, as documented by research in both, humans and animals, and first studies found that HAI affects the oxytocin system. As a common underlying mechanism, the activation of the oxytocin system does not only provide an explanation, but also allows an integrative view of the different effects of HAI.",
"title": ""
},
{
"docid": "e489bf53271cb75de82cdb5aec5196e6",
"text": "This paper presents the sensitivity optimization of a microwave biosensor dedicated to the analysis of a single living biological cell from 40 MHz to 40 GHz, directly in its culture medium. To enhance the sensor sensitivity, different capacitive gap located in the center of the biosensor, below the cell position, have been evaluated with different beads sizes. The best capacitive and conductive contrasts have been reached for a gap width of 5 μm with beads exhibiting diameters of 10 and 20 μm, due to electromagnetic field penetration in the beads. Contrasts improvement of 40 and 60 % have been achieved with standard deviations in the order of only 4% and 6% for the capacitive and conductive contrasts respectively. This sensor therefore permits to measure single living biological cells directly in their culture medium with capacitive and conductive contrasts of 0.4 fF at 5 GHz and 85 μS at 40 GHz, and associated standard deviations estimated at 7% and 14% respectively.",
"title": ""
}
] |
scidocsrr
|
276ae462415346f57751ced42b212f4c
|
The Soft Robotics Toolkit: Strategies for Overcoming Obstacles to the Wide Dissemination of Soft-Robotic Hardware
|
[
{
"docid": "36e4260c43efca5a67f99e38e5dbbed8",
"text": "The inherent compliance of soft fluidic actuators makes them attractive for use in wearable devices and soft robotics. Their flexible nature permits them to be used without traditional rotational or prismatic joints. Without these joints, however, measuring the motion of the actuators is challenging. Actuator-level sensors could improve the performance of continuum robots and robots with compliant or multi-degree-of-freedom joints. We make the reinforcing braid of a pneumatic artificial muscle (PAM or McKibben muscle) “smart” by weaving it from conductive insulated wires. These wires form a solenoid-like circuit with an inductance that more than doubles over the PAM contraction. The reinforcing and sensing fibers can be used to measure the contraction of a PAM actuator with a simple linear function of the measured inductance, whereas other proposed self-sensing techniques rely on the addition of special elastomers or transducers, the technique presented in this paper can be implemented without modifications of this kind. We present and experimentally validate two models for Smart Braid sensors based on the long solenoid approximation and the Neumann formula, respectively. We test a McKibben muscle made from a Smart Braid in quasi-static conditions with various end loads and in dynamic conditions. We also test the performance of the Smart Braid sensor alongside steel.",
"title": ""
}
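The abstract above names the long-solenoid approximation and an inductance that more than doubles over the contraction. The hedged sketch below instantiates L = μ0·N²·A/l with an assumed turn count and geometry, growing the radius under a crude constant-volume assumption as the muscle shortens; none of the numbers are taken from the paper.

```python
import math

MU0 = 4e-7 * math.pi     # permeability of free space (H/m)
N = 200                  # assumed number of conductive braid turns
L0, R0 = 0.20, 0.01      # assumed rest length (m) and rest radius (m)

def solenoid_inductance(length, radius, turns=N):
    """Long-solenoid approximation: L = mu0 * N^2 * A / l."""
    area = math.pi * radius ** 2
    return MU0 * turns ** 2 * area / length

for contraction in (0.0, 0.10, 0.20, 0.30):
    l = L0 * (1.0 - contraction)
    # crude incompressible-volume assumption: radius grows as the muscle shortens
    r = R0 * math.sqrt(L0 / l)
    print(f"{contraction:4.0%} contraction -> {solenoid_inductance(l, r) * 1e6:.1f} uH")
```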
] |
[
{
"docid": "fcda27822551a75990ac638e8920ab4d",
"text": "Images and videos becomes one of the principle means of communication these days. Validating the authenticity of the image has been the active research area for last decade. When an image or video is obtained as the evidence it can be used as probative only if it is authentic. Convolution Neural Networks (CNN) have been widely used in automatic image classification, Image Recognition and Identifying image Manipulation. CNN is efficient deep neural network that can study concurrently with the help of large datasets. Recent studies have indicated that the architectures of CNN tailored for identifying manipulated image will provide least efficiency when the image is directly fed into the network. Deep Learning is the branch of machine learning that learns the features by hierarchical representation where higher-level features are defined from lower-level concepts. In this paper, we make use of deep learning known as CNN to classify the manipulated image which is capable of automatically learning traces left by editing of the image by applying the filter that retrieves altered relationship among the pixels of the image and experiments were done in TensorFlow framework. Results showed that manipulations like median filtering, Gaussian blurring, resizing and cut and paste forgery can be detected with an average accuracy of 97%.",
"title": ""
},
{
"docid": "232d7e7986de374499c8ca580d055729",
"text": "In this paper we provide a survey of recent contributions to robust portfolio strategies from operations research and finance to the theory of portfolio selection. Our survey covers results derived not only in terms of the standard mean-variance objective, but also in terms of two of the most popular risk measures, mean-VaR and mean-CVaR developed recently. In addition, we review optimal estimation methods and Bayesian robust approaches.",
"title": ""
},
{
"docid": "09b94dbd60ec10aa992d67404f9687e9",
"text": "It is increasingly acknowledged that many threats to an organisation’s computer systems can be attributed to the behaviour of computer users. To quantify these human-based information security vulnerabilities, we are developing the Human Aspects of Information Security Questionnaire (HAIS-Q). The aim of this paper was twofold. The first aim was to outline the conceptual development of the HAIS-Q, including validity and reliability testing. The second aim was to examine the relationship between knowledge of policy and procedures, attitude towards policy and procedures and behaviour when using a work computer. Results from 500 Australian employees indicate that knowledge of policy and procedures had a stronger influence on attitude towards policy and procedure than selfreported behaviour. This finding suggests that training and education will be more effective if it outlines not only what is expected (knowledge) but also provides an understanding of why this is important (attitude). Plans for future research to further develop and test the HAIS-Q are outlined. Crown Copyright a 2014 Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "400f6485f06cf2e66afb9a9d5bd19f4d",
"text": "The performance of in-memory based data analytic frameworks such as Spark is significantly affected by how data is partitioned. This is because the partitioning effectively determines task granularity and parallelism. Moreover, different phases of a workload execution can have different optimal partitions. However, in the current implementations, the tuning knobs controlling the partitioning are either configured statically or involve a cumbersome programmatic process for affecting changes at runtime. In this paper, we propose CHOPPER, a system for automatically determining the optimal number of partitions for each phase of a workload and dynamically changing the partition scheme during workload execution. CHOPPER monitors the task execution and DAG scheduling information to determine the optimal level of parallelism. CHOPPER repartitions data as needed to ensure efficient task granularity, avoids data skew, and reduces shuffle traffic. Thus, CHOPPER allows users to write applications without having to hand-tune for optimal parallelism. Experimental results show that CHOPPER effectively improves workload performance by up to 35.2% compared to standard Spark setup.",
"title": ""
},
{
"docid": "597522575f1bc27394da2f1040e9eaa5",
"text": "Many natural language processing systems rely on machine learning models that are trained on large amounts of manually annotated text data. The lack of sufficient amounts of annotated data is, however, a common obstacle for such systems, since manual annotation of text is often expensive and time-consuming. The aim of “PAL, a tool for Pre-annotation and Active Learning” is to provide a ready-made package that can be used to simplify annotation and to reduce the amount of annotated data required to train a machine learning classifier. The package provides support for two techniques that have been shown to be successful in previous studies, namely active learning and pre-annotation. The output of the pre-annotation is provided in the annotation format of the annotation tool BRAT, but PAL is a stand-alone package that can be adapted to other formats.",
"title": ""
},
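As a hedged sketch of the two techniques PAL packages — pre-annotation from a model's predictions and active learning by least-confidence sampling — the snippet below uses scikit-learn on synthetic data; it is not PAL's code, and the BRAT output format is omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
labeled, unlabeled = np.arange(40), np.arange(40, 300)   # small seed set, large unlabeled pool

model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])

proba = model.predict_proba(X[unlabeled])
pre_annotations = proba.argmax(axis=1)                   # pre-annotation: the model's best guess
confidence = proba.max(axis=1)

# Active learning (least-confidence strategy): send the most uncertain items to the annotator.
query = unlabeled[np.argsort(confidence)[:10]]
print("suggest these indices for manual annotation:", query.tolist())
```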
{
"docid": "cd64e9677f921f6602197ba809d106f4",
"text": "The global pandemic of physical inactivity requires a multisectoral, multidisciplinary public-health response. Scaling up interventions that are capable of increasing levels of physical activity in populations across the varying cultural, geographic, social, and economic contexts worldwide is challenging, but feasible. In this paper, we review the factors that could help to achieve this. We use a mixed-methods approach to comprehensively examine these factors, drawing on the best available evidence from both evidence-to-practice and practice-to-evidence methods. Policies to support active living across society are needed, particularly outside the health-care sector, as demonstrated by some of the successful examples of scale up identified in this paper. Researchers, research funders, and practitioners and policymakers in culture, education, health, leisure, planning, and transport, and civil society as a whole, all have a role. We should embrace the challenge of taking action to a higher level, aligning physical activity and health objectives with broader social, environmental, and sustainable development goals.",
"title": ""
},
{
"docid": "92d3bb6142eafc9dc9f82ce6a766941a",
"text": "The classical Rough Set Theory (RST) always generates too many rules, making it difficult for decision makers to choose a suitable rule. In this study, we use two processes (pre process and post process) to select suitable rules and to explore the relationship among attributes. In pre process, we propose a pruning process to select suitable rules by setting up a threshold on the support object of decision rules, to thereby solve the problem of too many rules. The post process used the formal concept analysis from these suitable rules to explore the attribute relationship and the most important factors affecting decision making for choosing behaviours of personal investment portfolios. In this study, we explored the main concepts (characteristics) for the conservative portfolio: the stable job, less than 4 working years, and the gender is male; the moderate portfolio: high school education, the monthly salary between NT$30,001 (US$1000) and NT$80,000 (US$2667), the gender is male; and the aggressive portfolio: the monthly salary between NT$30,001 (US$1000) and NT$80,000 (US$2667), less than 4 working years, and a stable job. The study result successfully explored the most important factors affecting the personal investment portfolios and the suitable rules that can help decision makers. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
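A hedged toy illustration of the pre-process step described above: decision rules are kept only if their supporting-object count reaches a threshold. The rule encoding and the counts are invented for illustration, not taken from the study's data.

```python
# Toy decision rules: (condition attributes -> decision, number of supporting objects)
rules = [
    ({"job": "stable", "years": "<4", "gender": "male"}, "conservative", 42),
    ({"education": "high school", "salary": "30k-80k"}, "moderate", 35),
    ({"salary": "30k-80k", "years": "<4"}, "aggressive", 28),
    ({"education": "college", "gender": "female"}, "moderate", 3),   # weakly supported rule
]

def prune_by_support(rules, threshold):
    """Keep only rules whose support (count of matching objects) meets the threshold."""
    return [(cond, dec, sup) for cond, dec, sup in rules if sup >= threshold]

for cond, dec, sup in prune_by_support(rules, threshold=10):
    print(f"{dec:<12} support={sup:<3} if {cond}")
```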
{
"docid": "0c805b994e89c878a62f2e1066b0a8e7",
"text": "3D spatial data modeling is one of the key research problems in 3D GIS. More and more applications depend on these 3D spatial data. Mostly, these data are stored in Geo-DBMSs. However, recent Geo-DBMSs do not support 3D primitives modeling, it only able to describe a single-attribute of the third-dimension, i.e. modeling 2.5D datasets that used 2D primitives (plus a single z-coordinate) such as polygons in 3D space. This research focuses on 3D topological model based on space partition for 3D GIS, for instance, 3D polygons or tetrahedron form a solid3D object. Firstly, this report discusses formal definitions of 3D spatial objects, and then all the properties of each object primitives will be elaborated in detailed. The author also discusses methods for constructing the topological properties to support object semantics is introduced. The formal framework to describe the spatial model, database using Oracle Spatial is also given in this report. All related topological structures that forms the object features are discussed in detail. All related features are tested using real 3D spatial dataset of 3D building. Finally, the report concludes the experiment via visualization of using AutoDesk Map 3D.",
"title": ""
},
{
"docid": "4f52077553ebd94ed6ce9ff2120dfe9d",
"text": "A new type of deep neural networks (DNNs) is presented in this paper. Traditional DNNs use the multinomial logistic regression (softmax activation) at the top layer for classification. The new DNN instead uses a support vector machine (SVM) at the top layer. Two training algorithms are proposed at the frame and sequence-level to learn parameters of SVM and DNN in the maximum-margin criteria. In the frame-level training, the new model is shown to be related to the multiclass SVM with DNN features; In the sequence-level training, it is related to the structured SVM with DNN features and HMM state transition features. Its decoding process is similar to the DNN-HMM hybrid system but with frame-level posterior probabilities replaced by scores from the SVM. We term the new model deep neural support vector machine (DNSVM). We have verified its effectiveness on the TIMIT task for continuous speech recognition.",
"title": ""
},
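As a hedged sketch of the frame-level idea — replacing the softmax top layer with an SVM trained on fixed hidden-layer features — the snippet below reuses a small MLP's hidden activations as input to a linear SVM on synthetic data; the sequence-level training and HMM decoding described in the abstract are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=1000, n_features=40, n_informative=20,
                           n_classes=4, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# "DNN" feature extractor: a small MLP whose last hidden layer is reused as features.
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0).fit(Xtr, ytr)

def hidden_features(model, X):
    # Forward pass through the hidden layers only (default ReLU activations).
    h = X
    for W, b in zip(model.coefs_[:-1], model.intercepts_[:-1]):
        h = np.maximum(h @ W + b, 0.0)
    return h

svm_top = LinearSVC(C=1.0, max_iter=5000).fit(hidden_features(mlp, Xtr), ytr)
print("SVM-on-DNN-features accuracy:", svm_top.score(hidden_features(mlp, Xte), yte))
```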
{
"docid": "342c39b533e6a94edd72530ca3d57a54",
"text": "Graph-embedding along with its linearization and kernelization provides a general framework that unifies most traditional dimensionality reduction algorithms. From this framework, we propose a new manifold learning technique called discriminant locally linear embedding (DLLE), in which the local geometric properties within each class are preserved according to the locally linear embedding (LLE) criterion, and the separability between different classes is enforced by maximizing margins between point pairs on different classes. To deal with the out-of-sample problem in visual recognition with vector input, the linear version of DLLE, i.e., linearization of DLLE (DLLE/L), is directly proposed through the graph-embedding framework. Moreover, we propose its multilinear version, i.e., tensorization of DLLE, for the out-of-sample problem with high-order tensor input. Based on DLLE, a procedure for gait recognition is described. We conduct comprehensive experiments on both gait and face recognition, and observe that: 1) DLLE along its linearization and tensorization outperforms the related versions of linear discriminant analysis, and DLLE/L demonstrates greater effectiveness than the linearization of LLE; 2) algorithms based on tensor representations are generally superior to linear algorithms when dealing with intrinsically high-order data; and 3) for human gait recognition, DLLE/L generally obtains higher accuracy than state-of-the-art gait recognition algorithms on the standard University of South Florida gait database.",
"title": ""
},
{
"docid": "dc259f1208eac95817d067b9cd13fa7c",
"text": "This paper introduces a novel approach to texture synthesis based on generative adversarial networks (GAN) (Goodfellow et al., 2014). We extend the structure of the input noise distribution by constructing tensors with different types of dimensions. We call this technique Periodic Spatial GAN (PSGAN). The PSGAN has several novel abilities which surpass the current state of the art in texture synthesis. First, we can learn multiple textures from datasets of one or more complex large images. Second, we show that the image generation with PSGANs has properties of a texture manifold: we can smoothly interpolate between samples in the structured noise space and generate novel samples, which lie perceptually between the textures of the original dataset. In addition, we can also accurately learn periodical textures. We make multiple experiments which show that PSGANs can flexibly handle diverse texture and image data sources. Our method is highly scalable and it can generate output images of arbitrary large size.",
"title": ""
},
{
"docid": "67a3f92ab8c5a6379a30158bb9905276",
"text": "We present a compendium of recent and current projects that utilize crowdsourcing technologies for language studies, finding that the quality is comparable to controlled laboratory experiments, and in some cases superior. While crowdsourcing has primarily been used for annotation in recent language studies, the results here demonstrate that far richer data may be generated in a range of linguistic disciplines from semantics to psycholinguistics. For these, we report a number of successful methods for evaluating data quality in the absence of a ‘correct’ response for any given data point.",
"title": ""
},
{
"docid": "10f5ad322eeee68e57b66dd9f2bfe25b",
"text": "Irmin is an OCaml library to design purely functional data structures that can be persisted on disk and be merged and synchronized efficiently. In this paper, we focus on the merge aspect of the library and present two data structures built on top of Irmin: (i) queues and (ii) ropes that extend the corresponding purely functional data structures with a 3-way merge operation. We provide early theoretical and practical complexity results for these new data structures. Irmin is available as open-source code as part of the MirageOS project.",
"title": ""
},
{
"docid": "72d51fd4b384f4a9c3f6fe70606ab120",
"text": "Cloud Computing is a flexible, cost-effective, and proven delivery platform for providing business or consumer IT services over the Internet. However, cloud Computing presents an added level of risk because essential services are often outsourced to a third party, which makes it harder to maintain data security and privacy, support data and service availability, and demonstrate compliance. Cloud Computing leverages many technologies (SOA, virtualization, Web 2.0); it also inherits their security issues, which we discuss here, identifying the main vulnerabilities in this kind of systems and the most important threats found in the literature related to Cloud Computing and its environment as well as to identify and relate vulnerabilities and threats with possible solutions.",
"title": ""
},
{
"docid": "e5bbf88eedf547551d28a731bd4ebed7",
"text": "We conduct an empirical study to test the ability of convolutional neural networks (CNNs) to reduce the effects of nuisance transformations of the input data, such as location, scale and aspect ratio. We isolate factors by adopting a common convolutional architecture either deployed globally on the image to compute class posterior distributions, or restricted locally to compute class conditional distributions given location, scale and aspect ratios of bounding boxes determined by proposal heuristics. In theory, averaging the latter should yield inferior performance compared to proper marginalization. Yet empirical evidence suggests the converse, leading us to conclude that - at the current level of complexity of convolutional architectures and scale of the data sets used to train them - CNNs are not very effective at marginalizing nuisance variability. We also quantify the effects of context on the overall classification task and its impact on the performance of CNNs, and propose improved sampling techniques for heuristic proposal schemes that improve end-to-end performance to state-of-the-art levels. We test our hypothesis on a classification task using the ImageNet Challenge benchmark and on a wide-baseline matching task using the Oxford and Fischer's datasets.",
"title": ""
},
{
"docid": "fbcf0375db0e665c1028f8db77ccdc34",
"text": "Design fiction is an emergent field within HCI and interaction design the understanding of which ultimately relies, so we argue, of an integrative account of poetics and design praxis. In this paper we give such an account. Initially, a precise definition of design fiction is given by drawing on the theory of possible worlds found within poetics. Further, we offer a method of practicing design fiction, which relies on the equal integration of literary practice with design practice. The use of this method is demonstrated by 4 design projects from a workshop set up in collaboration with a Danish author. All of this substantiates our notion of a poetics of practicing design fiction, and through our critical examination of related work we conclude on how our approach contribute to HCI and interaction design.",
"title": ""
},
{
"docid": "058a89e44689faa0a2545b5b75fd8cb9",
"text": "cplint on SWISH is a web application that allows users to perform reasoning tasks on probabilistic logic programs. Both inference and learning systems can be performed: conditional probabilities with exact, rejection sampling and Metropolis-Hasting methods. Moreover, the system now allows hybrid programs, i.e., programs where some of the random variables are continuous. To perform inference on such programs likelihood weighting and particle filtering are used. cplint on SWISH is also able to sample goals’ arguments and to graph the results. This paper reports on advances and new features of cplint on SWISH, including the capability of drawing the binary decision diagrams created during the inference processes.",
"title": ""
},
{
"docid": "7fd7af08666f3cfad0c2dc975427c7f2",
"text": "Many network middleboxes perform deep packet inspection (DPI), a set of useful tasks which examine packet payloads. These tasks include intrusion detection (IDS), exfiltration detection, and parental filtering. However, a long-standing issue is that once packets are sent over HTTPS, middleboxes can no longer accomplish their tasks because the payloads are encrypted. Hence, one is faced with the choice of only one of two desirable properties: the functionality of middleboxes and the privacy of encryption. We propose BlindBox, the first system that simultaneously provides {\\em both} of these properties. The approach of BlindBox is to perform the deep-packet inspection {\\em directly on the encrypted traffic. BlindBox realizes this approach through a new protocol and new encryption schemes.\n We demonstrate that BlindBox enables applications such as IDS, exfiltration detection and parental filtering, and supports real rulesets from both open-source and industrial DPI systems. We implemented BlindBox and showed that it is practical for settings with long-lived HTTPS connections. Moreover, its core encryption scheme is 3-6 orders of magnitude faster than existing relevant cryptographic schemes.",
"title": ""
},
{
"docid": "468e87d87a693dfa470ab92b3085cbe6",
"text": "User feedback in the form of movie-watching history, item ratings, or product consumption is very helpful in training recommender systems. However, relatively few interactions between items and users can be observed. Instances of missing user--item entries are caused by the user not seeing the item (although the actual preference to the item could still be positive) or the user seeing the item but not liking it. Separating these two cases enables missing interactions to be modeled with finer granularity, and thus reflects user preferences more accurately. However, most previous studies on the modeling of missing instances have not fully considered the case where the user has not seen the item. Social connections are known to be helpful for modeling users' potential preferences more extensively, although a similar visibility problem exists in accurately identifying social relationships. That is, when two users are unaware of each other's existence, they have no opportunity to connect. In this paper, we propose a novel user preference model for recommender systems that considers the visibility of both items and social relationships. Furthermore, the two kinds of information are coordinated in a unified model inspired by the idea of transfer learning. Extensive experiments have been conducted on three real-world datasets in comparison with five state-of-the-art approaches. The encouraging performance of the proposed system verifies the effectiveness of social knowledge transfer and the modeling of both item and social visibilities.",
"title": ""
}
] |
scidocsrr
|
ae8582de6989590fe3f6953e1bec1a96
|
Manifold Learning by Curved Cosine Mapping
|
[
{
"docid": "7655df3f32e6cf7a5545ae2231f71e7c",
"text": "Many problems in information processing involve some form of dimensionality reduction. In this thesis, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. Theoretical analysis shows that PCA, LPP, and Linear Discriminant Analysis (LDA) can be obtained from different graph models. Central to this is a graph structure that is inferred on the data points. LPP finds a projection that respects this graph structure. We have applied our algorithms to several real world applications, e.g. face analysis and document representation.",
"title": ""
}
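A hedged sketch of the LPP computation summarized above: build a k-NN graph with heat-kernel weights, then solve the generalized eigenproblem XᵀLX a = λ XᵀDX a for the directions with the smallest eigenvalues (rows of X are samples here); the neighborhood size, kernel, and small ridge term are standard choices assumed for illustration.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                   # 200 samples in 10-D (rows = points)

# Adjacency with heat-kernel weights on a k-NN graph, symmetrized.
W = kneighbors_graph(X, n_neighbors=5, mode="distance").toarray()
W = np.exp(-W ** 2, where=W > 0, out=np.zeros_like(W))
W = np.maximum(W, W.T)

D = np.diag(W.sum(axis=1))
L = D - W                                        # graph Laplacian

# Generalized eigenproblem  X^T L X a = lambda X^T D X a.
A = X.T @ L @ X
B = X.T @ D @ X + 1e-9 * np.eye(X.shape[1])      # tiny ridge for numerical stability
evals, evecs = eigh(A, B)
projection = evecs[:, :2]                        # directions with the smallest eigenvalues
print("2-D LPP embedding shape:", (X @ projection).shape)
```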
] |
[
{
"docid": "02aed3ad7a5a4a70cfb3f9f4923e3a34",
"text": "Social media platforms such as Facebook are now a ubiquitous part of everyday life for many people. New media scholars posit that the participatory culture encouraged by social media gives rise to new forms of literacy skills that are vital to learning. However, there have been few attempts to use analytics to understand the new media literacy skills that may be embedded in an individual's participation in social media. In this paper, I collect raw activity data that was shared by an exploratory sample of Facebook users. I then utilize factor analysis and regression models to show how (a) Facebook members' online activity coalesce into distinct categories of social media behavior and (b) how these participatory behaviors correlate with and predict measures of new media literacy skills. The study demonstrates the use of analytics to understand the literacies embedded in people's social media activity. The implications speak to the potential of social learning analytics to identify and predict new media literacy skills from data streams in social media platforms.",
"title": ""
},
{
"docid": "2a26741788140152a074c1b37fd08be1",
"text": "The bag-of-words representation of text data is very popular for document classification. In the recent literature, it has been shown that properly weighting the term feature vector can improve the classification performance significantly beyond the original term-frequency based features. In this paper we demystify the success of the recent term-weighting strategies as well as provide possibly more reasonable modifications. We then propose novel term-weighting schemes that can be induced from the well-known document probabilistic models such as the Naive Bayes and the multinomial term model. Interestingly, some of the intuition-based term-weighting schemes coincide exactly with the proposed derivations. Our term-weighting schemes are tested on large-scale text classification problems/datasets where we demonstrate improved prediction performance over existing approaches.",
"title": ""
},
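A hedged sketch of inducing a supervised term weight from a multinomial Naive Bayes term model, in the spirit of the abstract above: each term is weighted by the smoothed log-ratio of its class-conditional probabilities. The toy counts and the specific log-odds form are illustrative assumptions, not the paper's exact schemes.

```python
import numpy as np

# Toy term-document counts for two classes (rows: documents, cols: vocabulary terms).
vocab = ["goal", "match", "election", "vote", "the"]
pos = np.array([[3, 2, 0, 0, 5],
                [4, 1, 0, 1, 6]])     # "sports" documents
neg = np.array([[0, 0, 4, 3, 5],
                [1, 0, 2, 4, 7]])     # "politics" documents

def class_term_prob(counts, alpha=1.0):
    totals = counts.sum(axis=0) + alpha          # Laplace smoothing
    return totals / totals.sum()

weights = np.log(class_term_prob(pos) / class_term_prob(neg))   # NB-style log-odds weight

for term, w in zip(vocab, weights):
    print(f"{term:<10} weight = {w:+.2f}")

# Weighted representation of a new document's term-frequency vector.
tf = np.array([2, 1, 0, 0, 3])
print("weighted features:", (tf * weights).round(2))
```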
{
"docid": "8ef33ec42fa7552de5487fefea903c6f",
"text": "This study investigated verbal working memory capacity in children with specific language impairment (SLI). The task employed in this study was the Competing Language Processing Task (CLPT) developed by Gaulin and Campbell (1994). A total of 40 school-age children participated in this investigation, including 20 with SLI and 20 normal language (NL) age-matched controls. Results indicated that the SLI and NL groups performed similarly in terms of true/false comprehension items, but that the children with SLI evidenced significantly poorer word recall than the NL controls, even when differences in nonverbal cognitive scores were statistically controlled. Distinct patterns of word-recall errors were observed for the SLI and NL groups, as well as different patterns of associations between CLPT word recall and performance on nonverbal cognitive and language measures. The findings are interpreted within the framework of a limited-capacity model of language processing.",
"title": ""
},
{
"docid": "ef57140e433ad175a3fae38236effa69",
"text": "For a real driver assistance system, the weather, driving speed, and background could affect the accuracy of obstacle detection. In the past, only a few studies covered all the different weather conditions and almost none of them had paid attention to the safety at vehicle lateral blind spot area. So, this paper proposes a hybrid scheme for pedestrian and vehicle detection, and develop a warning system dedicated for lateral blind spot area under different weather conditions and driving speeds. More specifically, the HOG and SVM methods are used for pedestrian detection. The image subtraction, edge detection and tire detection are applied for vehicle detection. Experimental results also show that the proposed system can efficiently detect pedestrian and vehicle under several scenarios.",
"title": ""
},
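A hedged sketch of the HOG + SVM pedestrian-detection component mentioned above, using scikit-image and scikit-learn on stand-in windows; the window size, HOG parameters, and random "images" are assumptions, and the vehicle-detection branch (image subtraction, edge and tire detection) is omitted.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def hog_features(window):
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Stand-in 128x64 grayscale windows; real training would use labeled pedestrian crops.
pedestrian_like = [rng.random((128, 64)) * np.linspace(0, 1, 64) for _ in range(20)]
background = [rng.random((128, 64)) for _ in range(20)]

X = np.array([hog_features(w) for w in pedestrian_like + background])
y = np.array([1] * 20 + [0] * 20)

clf = LinearSVC(C=0.01, max_iter=10000).fit(X, y)
print("training accuracy on toy windows:", clf.score(X, y))
```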
{
"docid": "6fa454fc02b5f52e08e6ab0de657ed6b",
"text": "Large numbers of children in the world are acquiring one language as their native language and subsequently learning another. There are also many children who are acquiring two or more languages simultaneously in early childhood as part of the natural consequences of being a member of bilingual families and communities. Because bilingualism brings about advantages to children that have an effect on their future development, understanding differences between monolinguals and bilinguals becomes a question of interest. However, on tests of vocabulary bilinguals frequently seem to perform at lower levels than monolinguals (Ben Zeev, 1977b; Doyle, Champagne, & Segalowitz, 1978). The reason for this seems to be that bilingual children have to learn two different labels for everything, which reduces the frequency of a particular word in either language (Ben Zeev, 1977b). This makes the task of acquiring, sorting, and differentiating vocabulary and meaning in two languages much more difficult when compared to the monolingual child’s task in one language (Doyle et al., 1978). Many researchers (Genesee & Nicoladis, 1995; Patterson, 1998; Pearson, Fernandez, and Oller, 1993) have raised questions about the appropriateness of using monolingual vocabulary norms to evaluate bilinguals. In the past, when comparing monolingual and bilingual performance, researchers mainly considered only one language of the bilingual (Ben Zeev, 1977b; Bialystok, 1988; Doyle et al., 1978). However, there is considerable evidence of a vocabulary overlap in the lexicon of bilingual children’s two languages, differing from child to child (Umbel, Pearson, Fernandez, and Oller, 1992). This vocabulary overlap is attributed to the child acquiring each language in different contexts resulting in some areas of complementary knowledge across the two languages (Saunders, 1982). It is crucial to examine both languages of bilingual children and account for this overlap in order to assess the size of bilinguals’ vocabulary with validity. This has been very difficult to do, since there are a few standardized measures for vocabulary knowledge in two languages concurrently and no measure are normed for bilingual preschool age children. It has been suggested that when the vocabulary scores of tests in both languages of the bilingual child are combined, their vocabulary equals or exceeds that of monolingual children (Bialystok, 1988; Doyle et al., 1978; Genesee & Nicoladis, 1995). However, this measure of Total Vocabulary (total scores achieved in language A + language B) is not sufficient for the examination of differences in vocabulary size of bilinguals and monolinguals due to the vocabulary overlap. A measure of total unique words or Conceptual Vocabulary, which is a combination of vocabulary scores in both languages considering words describing the same concept as one word, provides additional information about bilinguals’ vocabulary size with regards to knowledge of concepts. Pearson et al. (1993) conducted the only study considering both Total Vocabulary (language A + language B) and Conceptual Vocabulary (language A U language B) for bilingual children in comparison to their monolingual peers. Based on a sample of 25 simultaneous English/Spanish bilinguals and 35 monolinguals it was suggested that there exists no basis for concluding that the bilingual children were slower to develop early vocabulary than were their monolingual peers. 
There is a possibility that quite the opposite is true with regards to vocabulary comprehension when both languages are involved. There is a need for further study evaluating vocabulary size of preschool bilinguals to verify patterns identified by Pearson et al. (1993).",
"title": ""
},
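A hedged toy illustration of the two measures defined above: Total Vocabulary (language A + language B) versus Conceptual Vocabulary (the union of concepts across languages); the word lists are invented for the example.

```python
# Toy bilingual checklist: each produced word is mapped to the concept it names.
english = {"dog": "DOG", "milk": "MILK", "ball": "BALL", "shoe": "SHOE"}
spanish = {"perro": "DOG", "leche": "MILK", "agua": "WATER"}

total_vocabulary = len(english) + len(spanish)                               # language A + language B
conceptual_vocabulary = len(set(english.values()) | set(spanish.values()))   # A union B over concepts
overlap = len(set(english.values()) & set(spanish.values()))                 # concepts named in both

print(f"Total Vocabulary      : {total_vocabulary}")        # 7 words produced overall
print(f"Conceptual Vocabulary : {conceptual_vocabulary}")   # 5 distinct concepts
print(f"Cross-language overlap: {overlap}")                 # 2 translation-equivalent concepts
```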
{
"docid": "983783ec0d3ed9ac993b4e129c0e2cc6",
"text": "We propose a novel automatic target recognition (ATR) system for classification of three types of ground vehicles in the MSTAR public release database. First, each MSTAR image chip is pre-processed by extracting fine and raw feature sets, where raw features compensate for the target pose estimation error that corrupts fine image features. Then, the chips are classified by using the adaptive boosting (AdaBoost) algorithm with the radial basis function (RBF) network as the base learner. Since the RBF network is a binary classifier, we decompose our multiclass problem into a set of binary ones through the error-correcting output codes (ECOC) method, specifying a dictionary of code words for the set of three possible classes. AdaBoost combines the classification results of the RBF network for each binary problem into a code word, which is then “decoded” as one of the code words (i.e., ground-vehicle classes) in the specified dictionary. Along with classification, within the AdaBoost framework, we also conduct efficient fusion of the fine and raw image-feature vectors. The results of large-scale experiments demonstrate that our ATR scheme outperforms the state-of-the-art systems reported in the literature.",
"title": ""
},
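A hedged sketch of the ECOC decomposition of a multiclass problem into binary learners, using scikit-learn's OutputCodeClassifier with an RBF-kernel SVM as the base binary learner on synthetic data; this stands in for the paper's AdaBoosted RBF networks and MSTAR features, which are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC

# Synthetic 3-class problem standing in for the three MSTAR vehicle classes.
X, y = make_classification(n_samples=600, n_features=30, n_informative=15,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# ECOC: each class gets a code word; each code bit is learned by a binary RBF-kernel SVM,
# and predictions are "decoded" to the nearest class code word.
ecoc = OutputCodeClassifier(SVC(kernel="rbf", gamma="scale"), code_size=2.0, random_state=0)
ecoc.fit(Xtr, ytr)
print("ECOC test accuracy:", round(ecoc.score(Xte, yte), 3))
```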
{
"docid": "cae661146bc0156af25d8014cb61ef0b",
"text": "The two critical factors distinguishing inventory management in a multifirm supply-chain context from the more traditional centrally planned perspective are incentive conflicts and information asymmetries. We study the well-known order quantity/reorder point (Q r) model in a two-player context, using a framework inspired by observations during a case study. We show how traditional allocations of decision rights to supplier and buyer lead to inefficient outcomes, and we use principal-agent models to study the effects of information asymmetries about setup cost and backorder cost, respectively. We analyze two “opposite” models of contracting on inventory policies. First, we derive the buyer’s optimal menu of contracts when the supplier has private information about setup cost, and we show how consignment stock can help reduce the impact of this information asymmetry. Next, we study consignment and assume the supplier cannot observe the buyer’s backorder cost. We derive the supplier’s optimal menu of contracts on consigned stock level and show that in this case, the supplier effectively has to overcompensate the buyer for the cost of each stockout. Our theoretical analysis and the case study suggest that consignment stock helps reduce cycle stock by providing the supplier with an additional incentive to decrease batch size, but simultaneously gives the buyer an incentive to increase safety stock by exaggerating backorder costs. This framework immediately points to practical recommendations on how supply-chain incentives should be realigned to overcome existing information asymmetries.",
"title": ""
},
{
"docid": "6a128aa00edaf147df327e7736eeb4c9",
"text": "Query segmentation is essential to query processing. It aims to tokenize query words into several semantic segments and help the search engine to improve the precision of retrieval. In this paper, we present a novel unsupervised learning approach to query segmentation based on principal eigenspace similarity of queryword-frequency matrix derived from web statistics. Experimental results show that our approach could achieve superior performance of 35.8% and 17.7% in Fmeasure over the two baselines respectively, i.e. MI (Mutual Information) approach and EM optimization approach.",
"title": ""
},
{
"docid": "f7e19e14c90490e1323e47860d21ec4d",
"text": "There is great potential for genome sequencing to enhance patient care through improved diagnostic sensitivity and more precise therapeutic targeting. To maximize this potential, genomics strategies that have been developed for genetic discovery — including DNA-sequencing technologies and analysis algorithms — need to be adapted to fit clinical needs. This will require the optimization of alignment algorithms, attention to quality-coverage metrics, tailored solutions for paralogous or low-complexity areas of the genome, and the adoption of consensus standards for variant calling and interpretation. Global sharing of this more accurate genotypic and phenotypic data will accelerate the determination of causality for novel genes or variants. Thus, a deeper understanding of disease will be realized that will allow its targeting with much greater therapeutic precision.",
"title": ""
},
{
"docid": "bc6a6cf11881326360387cbed997dcf1",
"text": "The explanation of heterogeneous multivariate time series data is a central problem in many applications. The problem requires two major data mining challenges to be addressed simultaneously: Learning models that are humaninterpretable and mining of heterogeneous multivariate time series data. The intersection of these two areas is not adequately explored in the existing literature. To address this gap, we propose grammar-based decision trees and an algorithm for learning them. Grammar-based decision tree extends decision trees with a grammar framework. Logical expressions, derived from context-free grammar, are used for branching in place of simple thresholds on attributes. The added expressivity enables support for a wide range of data types while retaining the interpretability of decision trees. By choosing a grammar based on temporal logic, we show that grammar-based decision trees can be used for the interpretable classification of high-dimensional and heterogeneous time series data. In addition to classification, we show how grammar-based decision trees can also be used for categorization, which is a combination of clustering and generating interpretable explanations for each cluster. We apply grammar-based decision trees to analyze the classic Australian Sign Language dataset as well as categorize and explain near midair collisions to support the development of a prototype aircraft collision avoidance system.",
"title": ""
},
{
"docid": "667708c33b11b40515385c32ee4d99b2",
"text": "This review paper explores considerations for ultimate CMOS transistor scaling. Transistor architectures such as extremely thin silicon-on-insulator and FinFET (and related architectures such as TriGate, Omega-FET, Pi-Gate), as well as nanowire device architectures, are compared and contrasted. Key technology challenges (such as advanced gate stacks, mobility, resistance, and capacitance) shared by all of the architectures will be discussed in relation to recent research results.",
"title": ""
},
{
"docid": "9966de2dc11d909febf8a7120107d38f",
"text": "The problem of Automatic Fingerprint Pattern Classification (AFPC) has been studied by many fingerprint biometric practitioners. It is an important concept because, in instances where a relatively large database is being queried for the purposes of fingerprint matching, it serves to reduce the duration of the query. The fingerprint classes discussed in this document are the Central Twins (CT), Tented Arch (TA), Left Loop (LL), Right Loop (RL) and the Plain Arch (PA). The classification rules employed in this problem involve the use of the coordinate geometry of the detected singular points. Using a confusion matrix to evaluate the performance of the fingerprint classifier, a classification accuracy of 83.5% is obtained on the five-class problem. This performance evaluation is done by making use of fingerprint images from one of the databases of the year 2002 version of the Fingerprint Verification Competition (FVC2002).",
"title": ""
},
{
"docid": "893a8c073b8bd935fbea419c0f3e0b17",
"text": "Computing as a service model in cloud has encouraged High Performance Computing to reach out to wider scientific and industrial community. Many small and medium scale HPC users are exploring Infrastructure cloud as a possible platform to run their applications. However, there are gaps between the characteristic traits of an HPC application and existing cloud scheduling algorithms. In this paper, we propose an HPC-aware scheduler and implement it atop Open Stack scheduler. In particular, we introduce topology awareness and consideration for homogeneity while allocating VMs. We demonstrate the benefits of these techniques by evaluating them on a cloud setup on Open Cirrus test-bed.",
"title": ""
},
{
"docid": "b2e62194ce1eb63e0d13659a546db84b",
"text": "The rapid advance of mobile computing technology and wireless networking, there is a significant increase of mobile subscriptions. This drives a strong demand for mobile cloud applications and services for mobile device users. This brings out a great business and research opportunity in mobile cloud computing (MCC). This paper first discusses the market trend and related business driving forces and opportunities. Then it presents an overview of MCC in terms of its concepts, distinct features, research scope and motivations, as well as advantages and benefits. Moreover, it discusses its opportunities, issues and challenges. Furthermore, the paper highlights a research roadmap for MCC.",
"title": ""
},
{
"docid": "3ed5812763fbef60754534ca50e16de7",
"text": "Assessing the therapeutic noninferiority of one medical treatment compared with another is often based on the difference in response rates from a matched binary pairs design. This paper develops a new exact unconditional test for noninferiority that is more powerful than available alternatives. There are two new elements presented in this paper. First, we introduce the likelihood ratio statistic as an alternative to the previously proposed score statistic of Nam (Biometrics 1997; 53:1422-1430). Second, we eliminate the nuisance parameter by estimation followed by maximization as an alternative to the partial maximization of Berger and Boos (Am. Stat. Assoc. 1994; 89:1012-1016) or traditional full maximization. Based on an extensive numerical study, we recommend tests based on the score statistic, the nuisance parameter being controlled by estimation followed by maximization.",
"title": ""
},
{
"docid": "48fbfd8185181edda9d7333e377dbd37",
"text": "This paper proposes the novel Pose Guided Person Generation Network (PG) that allows to synthesize person images in arbitrary poses, based on an image of that person and a novel pose. Our generation framework PG utilizes the pose information explicitly and consists of two key stages: pose integration and image refinement. In the first stage the condition image and the target pose are fed into a U-Net-like network to generate an initial but coarse image of the person with the target pose. The second stage then refines the initial and blurry result by training a U-Net-like generator in an adversarial way. Extensive experimental results on both 128×64 re-identification images and 256×256 fashion photos show that our model generates high-quality person images with convincing details.",
"title": ""
},
{
"docid": "59a8fb8f04e73be3bd56a146a700f15f",
"text": "OBJECTIVE\nWe created a system using a triad of change management, electronic surveillance, and algorithms to detect sepsis and deliver highly sensitive and specific decision support to the point of care using a mobile application. The investigators hypothesized that this system would result in a reduction in sepsis mortality.\n\n\nMETHODS\nA before-and-after model was used to study the impact of the interventions on sepsis-related mortality. All patients admitted to the study units were screened per the Institute for Healthcare Improvement Surviving Sepsis Guidelines using real-time electronic surveillance. Sepsis surveillance algorithms that adjusted clinical parameters based on comorbid medical conditions were deployed for improved sensitivity and specificity. Nurses received mobile alerts for all positive sepsis screenings as well as severe sepsis and shock alerts over a period of 10 months. Advice was given for early goal-directed therapy. Sepsis mortality during a control period from January 1, 2011 to September 30, 2013 was used as baseline for comparison.\n\n\nRESULTS\nThe primary outcome, sepsis mortality, decreased by 53% (P = 0.03; 95% CI, 1.06-5.25). The 30-day readmission rate reduced from 19.08% during the control period to 13.21% during the study period (P = 0.05; 95% CI, 0.97-2.52). No significant change in length of hospital stay was noted. The system had observed sensitivity of 95% and specificity of 82% for detecting sepsis compared to gold-standard physician chart review.\n\n\nCONCLUSION\nA program consisting of change management and electronic surveillance with highly sensitive and specific decision support delivered to the point of care resulted in significant reduction in deaths from sepsis.",
"title": ""
},
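A hedged, illustrative sketch of the kind of SIRS-style screening rule such electronic surveillance systems automate (per the Surviving Sepsis guidance cited above); the thresholds are the textbook SIRS criteria, the paper's comorbidity adjustments and alerting stack are not reproduced, and this is illustration only, not clinical guidance.

```python
def sirs_flags(temp_c, heart_rate, resp_rate, wbc_k_per_uL):
    """Return which SIRS criteria a patient currently meets."""
    return {
        "temperature": temp_c > 38.0 or temp_c < 36.0,
        "heart_rate": heart_rate > 90,
        "resp_rate": resp_rate > 20,
        "wbc": wbc_k_per_uL > 12.0 or wbc_k_per_uL < 4.0,
    }

def screen_for_sepsis(vitals, suspected_infection):
    flags = sirs_flags(**vitals)
    positive = sum(flags.values()) >= 2 and suspected_infection
    return positive, flags

alert, why = screen_for_sepsis(
    {"temp_c": 38.6, "heart_rate": 112, "resp_rate": 24, "wbc_k_per_uL": 13.5},
    suspected_infection=True,
)
print("sepsis screen positive:", alert, "| criteria met:", [k for k, v in why.items() if v])
```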
{
"docid": "1d9e03eb11328f96eaee1f70dcf2a539",
"text": "Crossbar architecture has been widely adopted in neural network accelerators due to the efficient implementations on vector-matrix multiplication operations. However, in the case of convolutional neural networks (CNNs), the efficiency is compromised dramatically because of the large amounts of data reuse. Although some mapping methods have been designed to achieve a balance between the execution throughput and resource overhead, the resource consumption cost is still huge while maintaining the throughput. Network pruning is a promising and widely studied method to shrink the model size, whereas prior work for CNNs compression rarely considered the crossbar architecture and the corresponding mapping method and cannot be directly utilized by crossbar-based neural network accelerators. This paper proposes a crossbar-aware pruning framework based on a formulated $L_{0}$ -norm constrained optimization problem. Specifically, we design an $L_{0}$ -norm constrained gradient descent with relaxant probabilistic projection to solve this problem. Two types of sparsity are successfully achieved: 1) intuitive crossbar-grain sparsity and 2) column-grain sparsity with output recombination, based on which we further propose an input feature maps reorder method to improve the model accuracy. We evaluate our crossbar-aware pruning framework on the median-scale CIFAR10 data set and the large-scale ImageNet data set with VGG and ResNet models. Our method is able to reduce the crossbar overhead by 44%–72% with insignificant accuracy degradation. This paper significantly reduce the resource overhead and the related energy cost and provides a new co-design solution for mapping CNNs onto various crossbar devices with much better efficiency.",
"title": ""
},
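A hedged sketch of crossbar/column-grain sparsity: the weight matrix is tiled into crossbar-sized blocks and the weakest columns in each tile are zeroed by magnitude. This simple heuristic stands in for the paper's L0-constrained optimization, and the tile size and keep ratio are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))          # layer weights to be mapped onto crossbars
xbar_rows, xbar_cols = 64, 64            # assumed crossbar tile size
keep_ratio = 0.5                         # fraction of columns kept per tile

pruned = W.copy()
for r in range(0, W.shape[0], xbar_rows):
    for c in range(0, W.shape[1], xbar_cols):
        tile = pruned[r:r + xbar_rows, c:c + xbar_cols]   # view into the pruned matrix
        col_norms = np.linalg.norm(tile, axis=0)
        k = int(keep_ratio * tile.shape[1])
        drop = np.argsort(col_norms)[:tile.shape[1] - k]  # weakest columns in this tile
        tile[:, drop] = 0.0                               # column-grain sparsity

print("overall weight sparsity:", round(float((pruned == 0).mean()), 3))
```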
{
"docid": "2d5dba872d7cd78a9e2d57a494a189ea",
"text": "In this chapter, we give an overview of what ontologies are and how they can be used. We discuss the impact of the expressiveness, the number of domain elements, the community size, the conceptual dynamics, and other variables on the feasibility of an ontology project. Then, we break down the general promise of ontologies of facilitating the exchange and usage of knowledge to six distinct technical advancements that ontologies actually provide, and discuss how this should influence design choices in ontology projects. Finally, we summarize the main challenges of ontology management in real-world applications, and explain which expectations from practitioners can be met as",
"title": ""
},
{
"docid": "addda27c4c11c7160c2c451d3799d97f",
"text": "The prediction of salient areas in images has been traditionally addressed with hand-crafted features based on neuroscience principles. This paper, however, addresses the problem with a completely data-driven approach by training a convolutional neural network (convnet). The learning process is formulated as a minimization of a loss function that measures the Euclidean distance of the predicted saliency map with the provided ground truth. The recent publication of large datasets of saliency prediction has provided enough data to train end-to-end architectures that are both fast and accurate. Two designs are proposed: a shallow convnet trained from scratch, and a another deeper solution whose first three layers are adapted from another network trained for classification. To the authors' knowledge, these are the first end-to-end CNNs trained and tested for the purpose of saliency prediction.",
"title": ""
}
] |
scidocsrr
|
5fd474cd64c5c8561fb698de4dfdf611
|
Ambidextrous leadership , ambidextrous employee , and the interaction between ambidextrous leadership and employee innovative performance
|
[
{
"docid": "e3cf5d85d6a9077536dfbb3aaab38c41",
"text": "Organizational ambidexterity has emerged as a new research paradigm in organization theory, yet several issues fundamental to this debate remain controversial. We explore four central tensions here: Should organizations achieve ambidexterity through differentiation or through integration? Does ambidexterity occur at the individual or organizational level? Must organizations take a static or dynamic perspective on ambidexterity? Finally, can ambidexterity arise internally, or do firms have to externalize some processes? We provide an overview of the seven articles included in this special issue and suggest several avenues for future research.",
"title": ""
}
] |
[
{
"docid": "e146526fbd2561d1dac33ab82470efae",
"text": "Using daily returns of the S&P500 stocks from 2001 to 2011, we perform a backtesting study of the portfolio optimization strategy based on the extreme risk index (ERI). This method uses multivariate extreme value theory to minimize the probability of large portfolio losses. With more than 400 stocks to choose from, our study applies extreme value techniques in portfolio management on a large scale. We compare the performance of this strategy with the Markowitz approach and investigate how the ERI method can be applied most effectively. Our results show that the annualized return of the ERI strategy is particularly high for assets with heavy tails. The comparison also includes maximal drawdown, transaction costs, portfolio concentration, and asset diversity in the portfolio. In addition to that we study the impact of an alternative tail index estimator.",
"title": ""
},
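The extreme-value machinery referenced above rests on estimating how heavy the loss tails are. A standard building block is the Hill estimator of the tail index; the snippet below is a generic sketch of that estimator on simulated heavy-tailed losses, not the paper's ERI optimizer or its alternative tail-index estimator.

```python
import numpy as np

def hill_tail_index(losses, k):
    """Hill estimator of the tail index alpha from the k largest losses.

    losses : positive loss observations; smaller alpha means a heavier tail.
    """
    x = np.sort(np.asarray(losses, dtype=float))       # ascending order statistics
    top = x[-k:]                                        # the k largest values
    threshold = x[-k - 1]                               # X_(n-k)
    gamma = np.mean(np.log(top) - np.log(threshold))    # estimates 1/alpha
    return 1.0 / gamma

rng = np.random.default_rng(0)
sample = rng.pareto(a=3.0, size=5000) + 1.0             # Pareto tail with alpha ~ 3
print(hill_tail_index(sample, k=200))
```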
{
"docid": "d716725f2a5d28667a0746b31669bbb7",
"text": "This work observes that a large fraction of the computations performed by Deep Neural Networks (DNNs) are intrinsically ineffectual as they involve a multiplication where one of the inputs is zero. This observation motivates Cnvlutin (CNV), a value-based approach to hardware acceleration that eliminates most of these ineffectual operations, improving performance and energy over a state-of-the-art accelerator with no accuracy loss. CNV uses hierarchical data-parallel units, allowing groups of lanes to proceed mostly independently enabling them to skip over the ineffectual computations. A co-designed data storage format encodes the computation elimination decisions taking them off the critical path while avoiding control divergence in the data parallel units. Combined, the units and the data storage format result in a data-parallel architecture that maintains wide, aligned accesses to its memory hierarchy and that keeps its data lanes busy. By loosening the ineffectual computation identification criterion, CNV enables further performance and energy efficiency improvements, and more so if a loss in accuracy is acceptable. Experimental measurements over a set of state-of-the-art DNNs for image classification show that CNV improves performance over a state-of-the-art accelerator from 1.24× to 1.55× and by 1.37× on average without any loss in accuracy by removing zero-valued operand multiplications alone. While CNV incurs an area overhead of 4.49%, it improves overall EDP (Energy Delay Product) and ED2P (Energy Delay Squared Product) on average by 1.47× and 2.01×, respectively. The average performance improvements increase to 1.52× without any loss in accuracy with a broader ineffectual identification policy. Further improvements are demonstrated with a loss in accuracy.",
"title": ""
},
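A toy software analogue of the value-based skipping described above; the hardware of course does this with parallel lanes and a co-designed storage format, but the arithmetic effect is just this:

```python
def dot_skip_zeros(activations, weights):
    """Accumulate only the products whose activation operand is non-zero.

    With ReLU feature maps a large share of activations are exactly zero, so
    their multiplications are ineffectual and can be skipped without changing
    the result.
    """
    total = 0.0
    skipped = 0
    for a, w in zip(activations, weights):
        if a == 0.0:
            skipped += 1          # an accelerator would simply not dispatch this lane
            continue
        total += a * w
    return total, skipped

acts = [0.0, 1.5, 0.0, 0.0, 2.0, 0.0, 0.25, 0.0]
wts  = [0.3, -1.0, 0.7, 0.1, 0.5, -0.2, 4.0, 0.9]
print(dot_skip_zeros(acts, wts))   # same dot product, 5 multiplications skipped
```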
{
"docid": "72a1798a864b4514d954e1e9b6089ad8",
"text": "Clustering image pixels is an important image segmentation technique. While a large amount of clustering algorithms have been published and some of them generate impressive clustering results, their performance often depends heavily on user-specified parameters. This may be a problem in the practical tasks of data clustering and image segmentation. In order to remove the dependence of clustering results on user-specified parameters, we investigate the characteristics of existing clustering algorithms and present a parameter-free algorithm based on the DSets (dominant sets) and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithms. First, we apply histogram equalization to the pairwise similarity matrix of input data and make DSets clustering results independent of user-specified parameters. Then, we extend the clusters from DSets with DBSCAN, where the input parameters are determined based on the clusters from DSets automatically. By merging the merits of DSets and DBSCAN, our algorithm is able to generate the clusters of arbitrary shapes without any parameter input. In both the data clustering and image segmentation experiments, our parameter-free algorithm performs better than or comparably with other algorithms with careful parameter tuning.",
"title": ""
},
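A small sketch of two ingredients mentioned above: rank-based histogram equalization of a pairwise (dis)similarity matrix, followed by DBSCAN on the equalized distances. The median-based eps heuristic is an invented stand-in for the paper's DSets-driven parameter selection, and the toy data is synthetic.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import pairwise_distances

def hist_equalize(mat):
    """Rank-based histogram equalization: map matrix entries onto a uniform [0, 1] spread."""
    flat = mat.ravel()
    ranks = flat.argsort().argsort().astype(float)
    return (ranks / (flat.size - 1)).reshape(mat.shape)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)),      # two well-separated blobs
               rng.normal(5, 0.3, (50, 2))])

eq_dist = hist_equalize(pairwise_distances(X))   # equalized pairwise distances
eps = np.median(eq_dist)                         # data-driven radius instead of a user parameter
labels = DBSCAN(eps=eps, min_samples=5, metric="precomputed").fit_predict(eq_dist)
print("clusters found:", len(set(labels) - {-1}))
```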
{
"docid": "ee20233660c2caa4a24dbfb512172277",
"text": "Any projection of a 3D scene into a wide-angle image unavoidably results in distortion. Current projection methods either bend straight lines in the scene, or locally distort the shapes of scene objects. We present a method that minimizes this distortion by adapting the projection to content in the scene, such as salient scene regions and lines, in order to preserve their shape. Our optimization technique computes a spatially-varying projection that respects user-specified constraints while minimizing a set of energy terms that measure wide-angle image distortion. We demonstrate the effectiveness of our approach by showing results on a variety of wide-angle photographs, as well as comparisons to standard projections.",
"title": ""
},
{
"docid": "e7c816d150e0829c926a9c26d62449ab",
"text": "In this paper, we consider multihop wireless mesh networks, where each router node is equipped with multiple radio interfaces, and multiple channels are available for communication. We address the problem of assigning channels to communication links in the network with the objective of minimizing the overall network interference. Since the number of radios on any node can be less than the number of available channels, the channel assignment must obey the constraint that the number of different channels assigned to the links incident on any node is at most the number of radio interfaces on that node. The above optimization problem is known to be NP-hard. We design centralized and distributed algorithms for the above channel assignment problem. To evaluate the quality of the solutions obtained by our algorithms, we develop a semidefinite program and a linear program formulation of our optimization problem to obtain lower bounds on overall network interference. Empirical evaluations on randomly generated network graphs show that our algorithms perform close to the above established lower bounds, with the difference diminishing rapidly with increase in number of radios. Also, ns-2 simulations, as well as experimental studies on testbed, demonstrate the performance potential of our channel assignment algorithms in 802.11-based multiradio mesh networks.",
"title": ""
},
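To make the interface constraint in the passage concrete, here is a greedy toy assignment that respects a per-node radio budget while picking the least-interfering channel for each link. It is a simplified stand-in; the interference count and the fallback step below are crude approximations, not the paper's centralized or distributed algorithms.

```python
def greedy_channel_assignment(links, radios, n_channels):
    """Assign one channel per link; each node may touch at most radios[node] distinct channels.

    Interference is counted, very crudely, as the number of already-assigned links
    that share a node with this link and use the same channel.
    """
    assignment = {}
    used = {n: set() for n in radios}

    def interference(link, ch):
        u, v = link
        return sum(1 for l, c in assignment.items() if c == ch and set(l) & {u, v})

    for u, v in links:
        candidates = []
        for ch in range(n_channels):
            ok_u = ch in used[u] or len(used[u]) < radios[u]
            ok_v = ch in used[v] or len(used[v]) < radios[v]
            if ok_u and ok_v:
                candidates.append((interference((u, v), ch), ch))
        if candidates:
            ch = min(candidates)[1]
        else:
            # Both endpoints saturated with disjoint channel sets: reuse the least
            # interfering channel of either endpoint (a crude stand-in for a proper
            # channel-merge repair step).
            ch = min(used[u] | used[v], key=lambda c: interference((u, v), c))
        assignment[(u, v)] = ch
        used[u].add(ch)
        used[v].add(ch)
    return assignment

links = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "c"), ("b", "d")]
radios = {"a": 2, "b": 2, "c": 2, "d": 1}
print(greedy_channel_assignment(links, radios, n_channels=3))
```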
{
"docid": "21472ce2bf66d84a8fce106832e0fe97",
"text": "Every time you go to one of the top 100 book/music e-commerce sites, you will come into contact with personalisation systems that attempt to judge your interests to increase sales. There are 3 methods for making these personalised recommendations: Content-based filtering, Collaborative filtering and a hybrid of the two. Understanding each of these methods will give you insight as to how your personal information is used on the Internet, and remove some of the mystery associated with the systems. This will allow you understand how these systems work and how they could be improved, so as to make an informed decision as to whether this is a good thing.",
"title": ""
},
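Since the passage stays at the conceptual level, here is a tiny item-based collaborative filtering sketch: cosine similarity between item columns, then similarity-weighted scoring of unseen items. The rating matrix and numbers are made up; content-based filtering would instead compare item attribute vectors against a user profile.

```python
import numpy as np

def item_similarity(ratings):
    """Cosine similarity between the item columns of a user-item rating matrix."""
    norms = np.linalg.norm(ratings, axis=0, keepdims=True)
    norms[norms == 0] = 1.0
    normalized = ratings / norms
    return normalized.T @ normalized

def recommend(ratings, user, top_n=2):
    """Score unseen items by similarity-weighted sums of the user's own ratings."""
    sim = item_similarity(ratings)
    scores = sim @ ratings[user]
    scores[ratings[user] > 0] = -np.inf        # do not re-recommend items already rated
    return np.argsort(scores)[::-1][:top_n]

# toy user-item matrix: rows = users, cols = items, 0 = unrated
R = np.array([[5, 4, 0, 1, 0],
              [4, 5, 1, 0, 0],
              [0, 1, 5, 4, 5],
              [1, 0, 4, 5, 4]], dtype=float)
print(recommend(R, user=0))
```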
{
"docid": "d558db90f72342eae413ed7937e9120f",
"text": "Latent Dirichlet Allocation (LDA) models trained without stopword removal often produce topics with high posterior probabilities on uninformative words, obscuring the underlying corpus content. Even when canonical stopwords are manually removed, uninformative words common in that corpus will still dominate the most probable words in a topic. In this work, we first show how the standard topic quality measures of coherence and pointwise mutual information act counter-intuitively in the presence of common but irrelevant words, making it difficult to even quantitatively identify situations in which topics may be dominated by stopwords. We propose an additional topic quality metric that targets the stopword problem, and show that it, unlike the standard measures, correctly correlates with human judgements of quality. We also propose a simple-to-implement strategy for generating topics that are evaluated to be of much higher quality by both human assessment and our new metric. This approach, a collection of informative priors easily introduced into most LDA-style inference methods, automatically promotes terms with domain relevance and demotes domain-specific stop words. We demonstrate this approach’s effectiveness in three very different domains: Department of Labor accident reports, online health forum posts, and NIPS abstracts. Overall we find that current practices thought to solve this problem do not do so adequately, and that our proposal offers a substantial improvement for those interested in interpreting their topics as objects in their own right.",
"title": ""
},
{
"docid": "33073b54a55db722c363fe05b9c4242c",
"text": "We propose a new class of distributions called the Lomax generator with two extra positive parameters to generalize any continuous baseline distribution. Some special models such as the Lomax-normal, Lomax–Weibull, Lomax-log-logistic and Lomax–Pareto distributions are discussed. Some mathematical properties of the new generator including ordinary and incomplete moments, quantile and generating functions, mean and median deviations, distribution of the order statistics and some entropy measures are presented. We discuss the estimation of the model parameters by maximum likelihood. We propose a minification process based on the marginal Lomax-exponential distribution. We define a logLomax–Weibull regression model for censored data. The importance of the new generator is illustrated by means of three real data sets. 2014 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "2af36afd2440a4940873fef1703aab3f",
"text": "In recent years researchers have found that alternations in arterial or venular tree of the retinal vasculature are associated with several public health problems such as diabetic retinopathy which is also the leading cause of blindness in the world. A prerequisite for automated assessment of subtle changes in arteries and veins, is to accurately separate those vessels from each other. This is a difficult task due to high similarity between arteries and veins in addition to variation of color and non-uniform illumination inter and intra retinal images. In this paper a novel structural and automated method is presented for artery/vein classification of blood vessels in retinal images. The proposed method consists of three main steps. In the first step, several image enhancement techniques are employed to improve the images. Then a specific feature extraction process is applied to separate major arteries from veins. Indeed, vessels are divided to smaller segments and feature extraction and vessel classification are applied to each small vessel segment instead of each vessel point. Finally, a post processing step is added to improve the results obtained from the previous step using structural characteristics of the retinal vascular network. In the last stage, vessel features at intersection and bifurcation points are processed for detection of arterial and venular sub trees. Ultimately vessel labels are revised by publishing the dominant label through each identified connected tree of arteries or veins. Evaluation of the proposed approach against two different datasets of retinal images including DRIVE database demonstrates the good performance and robustness of the method. The proposed method may be used for determination of arteriolar to venular diameter ratio in retinal images. Also the proposed method potentially allows for further investigation of labels of thinner arteries and veins which might be found by tracing them back to the major vessels.",
"title": ""
},
{
"docid": "35a298d5ec169832c3faf2e30d95e1a4",
"text": "© 2 0 0 1 m a s s a c h u s e t t s i n s t i t u t e o f t e c h n o l o g y, c a m b r i d g e , m a 0 2 1 3 9 u s a — w w w. a i. m i t. e d u",
"title": ""
},
{
"docid": "c58aaa7e1b197a1ee95fb343b0de8664",
"text": "Natural language understanding (NLU) is an important module of spoken dialogue systems. One of the difficulties when it comes to adapting NLU to new domains is the high cost of constructing new training data for each domain. To reduce this cost, we propose a zero-shot learning of NLU that takes into account the sequential structures of sentences together with general question types across different domains. Experimental results show that our methods achieve higher accuracy than baseline methods in two completely different domains (insurance and sightseeing).",
"title": ""
},
{
"docid": "e106afaefd5e61f4a5787a7ae0c92934",
"text": "Novelty detection is concerned with recognising inputs that differ in some way from those that are usually seen. It is a useful technique in cases where an important class of data is under-represented in the training set. This means that the performance of the network will be poor for those classes. In some circumstances, such as medical data and fault detection, it is often precisely the class that is under-represented in the data, the disease or potential fault, that the network should detect. In novelty detection systems the network is trained only on the negative examples where that class is not present, and then detects inputs that do not fits into the model that it has acquired, that it, members of the novel class. This paper reviews the literature on novelty detection in neural networks and other machine learning techniques, as well as providing brief overviews of the related topics of statistical outlier detection and novelty detection in biological organisms.",
"title": ""
},
{
"docid": "d879e53880baeb2da303179195731b03",
"text": "Semantic search has been one of the motivations of the semantic Web since it was envisioned. We propose a model for the exploitation of ontology-based knowledge bases to improve search over large document repositories. In our view of information retrieval on the semantic Web, a search engine returns documents rather than, or in addition to, exact values in response to user queries. For this purpose, our approach includes an ontology-based scheme for the semiautomatic annotation of documents and a retrieval system. The retrieval model is based on an adaptation of the classic vector-space model, including an annotation weighting algorithm, and a ranking algorithm. Semantic search is combined with conventional keyword-based retrieval to achieve tolerance to knowledge base incompleteness. Experiments are shown where our approach is tested on corpora of significant scale, showing clear improvements with respect to keyword-based search",
"title": ""
},
{
"docid": "534312b8aa312c871d127aa1e3c019d9",
"text": "Seekers of information in libraries either go through a librarian intermediary or they help themselves. When they go through librarians they must develop their questions through four levels of need, referred to here as the visceral, conscious, formalized, and compromised needs. In his pre-search interview with an information-seeker the reference librarian attempts to help him arrive at an understanding of his \"compromised\" need by determining: (1) the subject of his interest; (2) his motivation; (3) his personal characteristics; (4) the relationship of the inquiry to file organization; and (5) anticipated answers. The author contends that research is needed into the techniques of conducting this negotiation between the user and the reference librarian.",
"title": ""
},
{
"docid": "97e5f2e774b58f7533242114e5e06159",
"text": "We address the problem of phase retrieval, which is frequently encountered in optical imaging. The measured quantity is the magnitude of the Fourier spectrum of a function (in optics, the function is also referred to as an object). The goal is to recover the object based on the magnitude measurements. In doing so, the standard assumptions are that the object is compactly supported and positive. In this paper, we consider objects that admit a sparse representation in some orthonormal basis. We develop a variant of the Fienup algorithm to incorporate the condition of sparsity and to successively estimate and refine the phase starting from the magnitude measurements. We show that the proposed iterative algorithm possesses Cauchy convergence properties. As far as the modality is concerned, we work with measurements obtained using a frequency-domain optical-coherence tomography experimental setup. The experimental results on real measured data show that the proposed technique exhibits good reconstruction performance even with fewer coefficients taken into account for reconstruction. It also suppresses the autocorrelation artifacts to a significant extent since it estimates the phase accurately.",
"title": ""
},
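For intuition, the sketch below alternates the two projections that Fienup-style methods are built on, with a hard sparsity-plus-positivity step in the object domain. It is a generic 1-D toy (recovery is only up to the usual shift/flip ambiguities and may need random restarts), not the exact frequency-domain OCT variant described in the passage.

```python
import numpy as np

def sparse_fienup(magnitude, sparsity, n_iter=200, seed=0):
    """Error-reduction iterations with a hard sparsity constraint.

    magnitude : measured Fourier magnitudes |F(x)|
    sparsity  : number of object-domain coefficients kept each iteration
    """
    rng = np.random.default_rng(seed)
    phase = np.exp(1j * rng.uniform(0, 2 * np.pi, magnitude.shape))
    F = magnitude * phase
    for _ in range(n_iter):
        x = np.fft.ifft(F).real                          # object-domain estimate
        keep = np.argsort(np.abs(x))[-sparsity:]         # sparsity projection
        xs = np.zeros_like(x)
        xs[keep] = np.clip(x[keep], 0, None)             # keep positivity as well
        F_est = np.fft.fft(xs)
        F = magnitude * np.exp(1j * np.angle(F_est))     # magnitude projection
    return xs

# ground truth: a sparse, positive 1-D signal (recovery may be a shifted/flipped copy)
truth = np.zeros(128)
truth[[10, 40, 41, 90]] = [1.0, 2.0, 1.5, 0.7]
recovered = sparse_fienup(np.abs(np.fft.fft(truth)), sparsity=4)
print(np.round(recovered[recovered > 0], 2))
```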
{
"docid": "19b041beb43aadfbde514dc5bb7f7da5",
"text": "The European Train Control System (ETCS) is the leading signaling system for train command and control. In the future, ETCS may be delivered over long-term evolution (LTE) networks. Thus, LTE performance offered to ETCS must be analyzed and confronted with the railway safety requirements. It is especially important to ensure the integrity of the ETCS data, i.e., to protect ETCS data against loss and corruption. In this article, various retransmission mechanisms are considered for providing end-to-end ETCS data integrity in LTE. These mechanisms are validated in simulations, which model worst-case conditions regarding train locations, traffic load, and base-station density. The simulation results show that ETCS data integrity requirements can be fulfilled even under these unfavorable conditions with the proper LTE mechanisms.",
"title": ""
},
{
"docid": "7ced14fb638a63042d405f4ad6f65a4d",
"text": "We present <italic>smart drill-down</italic>, an operator for interactively exploring a relational table to discover and summarize “interesting” groups of tuples. Each group of tuples is described by a <italic>rule</italic> . For instance, the rule <inline-formula><tex-math notation=\"LaTeX\">$(a, b, \\star, 1000)$</tex-math><alternatives> <inline-graphic xlink:href=\"joglekar-ieq1-2685998.gif\"/></alternatives></inline-formula> tells us that there are 1,000 tuples with value <inline-formula><tex-math notation=\"LaTeX\">$a$</tex-math><alternatives> <inline-graphic xlink:href=\"joglekar-ieq2-2685998.gif\"/></alternatives></inline-formula> in the first column and <inline-formula><tex-math notation=\"LaTeX\">$b$</tex-math><alternatives> <inline-graphic xlink:href=\"joglekar-ieq3-2685998.gif\"/></alternatives></inline-formula> in the second column (and any value in the third column). Smart drill-down presents an analyst with a list of rules that together describe interesting aspects of the table. The analyst can tailor the definition of interesting, and can interactively apply smart drill-down on an existing rule to explore that part of the table. We demonstrate that the underlying optimization problems are <sc>NP-Hard</sc>, and describe an algorithm for finding the approximately optimal list of rules to display when the user uses a smart drill-down, and a dynamic sampling scheme for efficiently interacting with large tables. Finally, we perform experiments on real datasets on our experimental prototype to demonstrate the usefulness of smart drill-down and study the performance of our algorithms.",
"title": ""
},
{
"docid": "574259df6c01fd0c46160b3f8548e4e7",
"text": "Hashtag has emerged as a widely used concept of popular culture and campaigns, but its implications on people’s privacy have not been investigated so far. In this paper, we present the first systematic analysis of privacy issues induced by hashtags. We concentrate in particular on location, which is recognized as one of the key privacy concerns in the Internet era. By relying on a random forest model, we show that we can infer a user’s precise location from hashtags with accuracy of 70% to 76%, depending on the city. To remedy this situation, we introduce a system called Tagvisor that systematically suggests alternative hashtags if the user-selected ones constitute a threat to location privacy. Tagvisor realizes this by means of three conceptually different obfuscation techniques and a semantics-based metric for measuring the consequent utility loss. Our findings show that obfuscating as little as two hashtags already provides a near-optimal trade-off between privacy and utility in our dataset. This in particular renders Tagvisor highly time-efficient, and thus, practical in real-world settings.",
"title": ""
},
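A bare-bones sketch of the inference side described above: bag-of-hashtags features feeding a random forest that predicts a coarse location label. The posts, labels and feature choice are invented for illustration; the system in the passage additionally relies on semantics-aware features and far richer data.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split

# Hypothetical toy data: hashtag strings per post and a coarse location label.
posts = ["#centralpark #nyc #sunday", "#brooklynbridge #nyc",
         "#goldengate #sf #fog", "#missiondistrict #sf #tacos",
         "#nyc #pizza #manhattan", "#sf #baytobreakers"] * 20
labels = ["nyc", "nyc", "sf", "sf", "nyc", "sf"] * 20

X = CountVectorizer(token_pattern=r"#\w+").fit_transform(posts)   # bag of hashtags
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```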
{
"docid": "1123b7c561945627289923a0ad9df53e",
"text": "Concluding his provocative 1989 essay delineating how Charlotte Perkins Gilman’s “The Yellow Wallpaper” functions as a Gothic allegory, Greg Johnson describes Gilman’s achievement as yet awaiting its “due recognition” and her compelling short story as being “[s]till under-read, still haunting the margins of the American literary canon” (530). Working from the premise that Gilman’s tale “adroitly and at times parodically employs Gothic conventions to present an allegory of literary imagination unbinding the social, domestic, and psychological confinements of a nineteenth-century woman writer,” Johnson provides a fairly satisfactory general overview of “The Yellow Wallpaper” as a Gothic production (522). Despite the disputable claim that Gilman’s story functions, in part, as a Gothic parody, he correctly identifies and aptly elucidates several of the most familiar Gothic themes at work in this study— specifically “confinement and rebellion, forbidden desire and ‘irrational’ fear”—alongside such traditional Gothic elements as “the distraught heroine, the forbidding mansion, and the powerfully repressive male antagonist” (522). Johnson ultimately overlooks,",
"title": ""
},
{
"docid": "2b8296f8760e826046cd039c58026f83",
"text": "This study provided a descriptive and quantitative comparative analysis of data from an assessment protocol for adolescents referred clinically for gender identity disorder (n = 192; 105 boys, 87 girls) or transvestic fetishism (n = 137, all boys). The protocol included information on demographics, behavior problems, and psychosexual measures. Gender identity disorder and transvestic fetishism youth had high rates of general behavior problems and poor peer relations. On the psychosexual measures, gender identity disorder patients had considerably greater cross-gender behavior and gender dysphoria than did transvestic fetishism youth and other control youth. Male gender identity disorder patients classified as having a nonhomosexual sexual orientation (in relation to birth sex) reported more indicators of transvestic fetishism than did male gender identity disorder patients classified as having a homosexual sexual orientation (in relation to birth sex). The percentage of transvestic fetishism youth and male gender identity disorder patients with a nonhomosexual sexual orientation self-reported similar degrees of behaviors pertaining to transvestic fetishism. Last, male and female gender identity disorder patients with a homosexual sexual orientation had more recalled cross-gender behavior during childhood and more concurrent cross-gender behavior and gender dysphoria than did patients with a nonhomosexual sexual orientation. The authors discuss the clinical utility of their assessment protocol.",
"title": ""
}
] |
scidocsrr
|
159e579f88219b6d44608230382acebc
|
A critical assessment of imbalanced class distribution problem: The case of predicting freshmen student attrition
|
[
{
"docid": "1ac4ac9b112c2554db37de2070d7c2df",
"text": "This paper studies empirically the effect of sampling and threshold-moving in training cost-sensitive neural networks. Both oversampling and undersampling are considered. These techniques modify the distribution of the training data such that the costs of the examples are conveyed explicitly by the appearances of the examples. Threshold-moving tries to move the output threshold toward inexpensive classes such that examples with higher costs become harder to be misclassified. Moreover, hard-ensemble and soft-ensemble, i.e., the combination of above techniques via hard or soft voting schemes, are also tested. Twenty-one UCl data sets with three types of cost matrices and a real-world cost-sensitive data set are used in the empirical study. The results suggest that cost-sensitive learning with multiclass tasks is more difficult than with two-class tasks, and a higher degree of class imbalance may increase the difficulty. It also reveals that almost all the techniques are effective on two-class tasks, while most are ineffective and even may cause negative effect on multiclass tasks. Overall, threshold-moving and soft-ensemble are relatively good choices in training cost-sensitive neural networks. The empirical study also suggests that some methods that have been believed to be effective in addressing the class imbalance problem may, in fact, only be effective on learning with imbalanced two-class data sets.",
"title": ""
},
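The threshold-moving idea above can be stated in a few lines: instead of predicting the most probable class, predict the class with the lowest expected cost under a user-supplied cost matrix. The probabilities and costs below are illustrative only, not values from the study.

```python
import numpy as np

def cost_sensitive_predict(probabilities, cost_matrix):
    """Pick the class with the lowest expected misclassification cost.

    probabilities : (n_samples, n_classes) posterior estimates from any model
    cost_matrix   : cost[i, j] = cost of predicting class j when the truth is i
    """
    expected_cost = probabilities @ cost_matrix        # (n_samples, n_classes)
    return expected_cost.argmin(axis=1)

probs = np.array([[0.85, 0.15],       # model is fairly sure this is class 0
                  [0.60, 0.40],
                  [0.95, 0.05]])
costs = np.array([[0.0, 1.0],         # false positive costs 1
                  [5.0, 0.0]])        # false negative costs 5
print(cost_sensitive_predict(probs, costs))   # the 0.60/0.40 case flips to class 1
```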
{
"docid": "4eda5bc4f8fa55ae55c69f4233858fc7",
"text": "In this paper, we set out to compare several techniques that can be used in the analysis of imbalanced credit scoring data sets. In a credit scoring context, imbalanced data sets frequently occur as the number of defaulting loans in a portfolio is usually much lower than the number of observations that do not default. As well as using traditional classification techniques such as logistic regression, neural networks and decision trees, this paper will also explore the suitability of gradient boosting, least square support vector machines and random forests for loan default prediction. Five real-world credit scoring data sets are used to build classifiers and test their performance. In our experiments, we progressively increase class imbalance in each of these data sets by randomly undersampling the minority class of defaulters, so as to identify to what extent the predictive power of the respective techniques is adversely affected. The performance criterion chosen to measure this effect is the area under the receiver operating characteristic curve (AUC); Friedman’s statistic and Nemenyi post hoc tests are used to test for significance of AUC differences between techniques. The results from this empirical study indicate that the random forest and gradient boosting classifiers perform very well in a credit scoring context and are able to cope comparatively well with pronounced class imbalances in these data sets. We also found that, when faced with a large class imbalance, the C4.5 decision tree algorithm, quadratic discriminant analysis and k-nearest neighbours perform significantly worse than the best performing classifiers. 2011 Elsevier Ltd.",
"title": ""
}
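Below is a small, self-contained sketch of the experimental recipe described above: build an imbalanced data set, randomly undersample the majority class, and compare AUC on an untouched test split. The classifier, sizes and imbalance level are arbitrary stand-ins for the techniques and portfolios benchmarked in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced toy "credit scoring" data: roughly 5% defaulters (class 1).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

def undersample(X, y, ratio=1.0, seed=0):
    """Randomly drop majority-class rows until majority/minority = ratio."""
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    keep_maj = rng.choice(majority, size=int(ratio * len(minority)), replace=False)
    idx = np.concatenate([minority, keep_maj])
    return X[idx], y[idx]

X_bal, y_bal = undersample(X_tr, y_tr)
for name, (Xf, yf) in {"original": (X_tr, y_tr), "undersampled": (X_bal, y_bal)}.items():
    model = LogisticRegression(max_iter=1000).fit(Xf, yf)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name:>12s} AUC = {auc:.3f}")
```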
] |
[
{
"docid": "ba93902813caa2fc8cddfbaa5f8b4917",
"text": "This paper proposes a technique to utilize the power of chatterbots to serve as interactive Support systems to enterprise applications which aim to address a huge audience. The need for support systems arises due to inability of computer illiterate audience to utilize the services offered by an enterprise application. Setting up customer support centers works well for small-medium sized businesses but for mass applications (here E-Governance Systems) the audience counts almost all a country has as its population, Setting up support center that can afford such load is irrelevant. This paper proposes a solution by using AIML based chatterbots to implement Artificial Support Entity (ASE) to such Applications.",
"title": ""
},
{
"docid": "803190e1d9f58a233b573cb842ff4204",
"text": "We designed and tested attractors for computer security dialogs: user-interface modifications used to draw users' attention to the most important information for making decisions. Some of these modifications were purely visual, while others temporarily inhibited potentially-dangerous behaviors to redirect users' attention to salient information. We conducted three between-subjects experiments to test the effectiveness of the attractors.\n In the first two experiments, we sent participants to perform a task on what appeared to be a third-party site that required installation of a browser plugin. We presented them with what appeared to be an installation dialog from their operating system. Participants who saw dialogs that employed inhibitive attractors were significantly less likely than those in the control group to ignore clues that installing this software might be harmful.\n In the third experiment, we attempted to habituate participants to dialogs that they knew were part of the experiment. We used attractors to highlight a field that was of no value during habituation trials and contained critical information after the habituation period. Participants exposed to inhibitive attractors were two to three times more likely to make an informed decision than those in the control condition.",
"title": ""
},
{
"docid": "bd3374fefa94fbb11d344d651c0f55bc",
"text": "Extensive study has been conducted in the detection of license plate for the applications in intelligent transportation system (ITS). However, these results are all based on images acquired at a resolution of 640 times 480. In this paper, a new method is proposed to extract license plate from the surveillance video which is shot at lower resolution (320 times 240) as well as degraded by video compression. Morphological operations of bottom-hat and morphology gradient are utilized to detect the LP candidates, and effective schemes are applied to select the correct one. The average rates of correct extraction and false alarms are 96.62% and 1.77%, respectively, based on the experiments using more than four hours of video. The experimental results demonstrate the effectiveness and robustness of the proposed method",
"title": ""
},
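OpenCV exposes the two morphological cues mentioned above directly; the sketch below applies a bottom-hat transform plus a threshold-and-close step to form candidate plate regions. The kernel sizes, area cut-off and the synthetic fallback frame are guesses for illustration, not the paper's tuned pipeline.

```python
import cv2
import numpy as np

# Hypothetical frame path; a real 320x240 surveillance frame is assumed.
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
if frame is None:                                  # fall back to a synthetic test image
    frame = np.full((240, 320), 120, np.uint8)
    cv2.putText(frame, "AB 123 CD", (60, 130), cv2.FONT_HERSHEY_SIMPLEX, 1, 10, 2)

# Bottom-hat emphasises dark details (plate characters) on a brighter background.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (13, 5))
bottom_hat = cv2.morphologyEx(frame, cv2.MORPH_BLACKHAT, kernel)
# Morphological gradient is the second cue named in the abstract; a fuller
# pipeline would fuse it with the bottom-hat response.
gradient = cv2.morphologyEx(frame, cv2.MORPH_GRADIENT, kernel)

# Threshold and close the bottom-hat response to form candidate plate blobs.
_, mask = cv2.threshold(bottom_hat, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE,
                        cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3)))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 200]
print(candidates)
```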
{
"docid": "fd5f48aebc8fba354137dadb445846bc",
"text": "BACKGROUND\nThe syntheses of multiple qualitative studies can pull together data across different contexts, generate new theoretical or conceptual models, identify research gaps, and provide evidence for the development, implementation and evaluation of health interventions. This study aims to develop a framework for reporting the synthesis of qualitative health research.\n\n\nMETHODS\nWe conducted a comprehensive search for guidance and reviews relevant to the synthesis of qualitative research, methodology papers, and published syntheses of qualitative health research in MEDLINE, Embase, CINAHL and relevant organisational websites to May 2011. Initial items were generated inductively from guides to synthesizing qualitative health research. The preliminary checklist was piloted against forty published syntheses of qualitative research, purposively selected to capture a range of year of publication, methods and methodologies, and health topics. We removed items that were duplicated, impractical to assess, and rephrased items for clarity.\n\n\nRESULTS\nThe Enhancing transparency in reporting the synthesis of qualitative research (ENTREQ) statement consists of 21 items grouped into five main domains: introduction, methods and methodology, literature search and selection, appraisal, and synthesis of findings.\n\n\nCONCLUSIONS\nThe ENTREQ statement can help researchers to report the stages most commonly associated with the synthesis of qualitative health research: searching and selecting qualitative research, quality appraisal, and methods for synthesising qualitative findings. The synthesis of qualitative research is an expanding and evolving methodological area and we would value feedback from all stakeholders for the continued development and extension of the ENTREQ statement.",
"title": ""
},
{
"docid": "5d8aaba4da6c6aebf08d241484451ea8",
"text": "The lack of a friendly and flexible operational model of landside operations motivated the creation of a new simulation model adaptable to various airport configurations for estimating the time behavior of passenger and baggage flows, the elements’ capacities and the delays in a generic airport terminal. The validation of the model has been conducted by comparison with the results of previous research about the average behavior of the future Athens airport. In the mean time the proposed model provided interesting dynamical results about both passenger and baggage movements in the system.",
"title": ""
},
{
"docid": "59718c2e471dfaf0fb7463a89312813a",
"text": "Many large Internet websites are accessed by users anonymously, without requiring registration or logging-in. However, to provide personalized service these sites build anonymous, yet persistent, user models based on repeated user visits. Cookies, issued when a web browser first visits a site, are typically employed to anonymously associate a website visit with a distinct user (web browser). However, users may reset cookies, making such association short-lived and noisy. In this paper we propose a solution to the cookie churn problem: a novel algorithm for grouping similar cookies into clusters that are more persistent than individual cookies. Such clustering could potentially allow more robust estimation of the number of unique visitors of the site over a certain long time period, and also better user modeling which is key to plenty of web applications such as advertising and recommender systems.\n We present a novel method to cluster browser cookies into groups that are likely to belong to the same browser based on a statistical model of browser visitation patterns. We address each step of the clustering as a binary classification problem estimating the probability that two different subsets of cookies belong to the same browser. We observe that our clustering problem is a generalized interval graph coloring problem, and propose a greedy heuristic algorithm for solving it. The scalability of this method allows us to cluster hundreds of millions of browser cookies and provides significant improvements over baselines such as constrained K-means.",
"title": ""
},
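The interval-graph view in the passage can be made concrete in a few lines: cookies whose activity spans overlap in time cannot belong to the same browser, and a greedy sweep over start times groups the rest. Real cookie chaining would score candidate clusters with a visitation-pattern model rather than taking any free slot; the spans below are invented.

```python
def cluster_cookie_intervals(intervals):
    """Greedy coloring of cookie activity intervals into browser-like clusters.

    intervals : list of (cookie_id, start, end) activity spans.
    Sorting by start time and reusing any cluster whose last interval has ended
    yields a minimum number of clusters for interval graphs.
    """
    clusters = []                     # cluster id -> end time of its last interval
    assignment = {}
    for cookie, start, end in sorted(intervals, key=lambda t: t[1]):
        free = [c for c, last_end in enumerate(clusters) if last_end <= start]
        if free:
            chosen = min(free, key=lambda c: clusters[c])   # earliest-freed cluster
            clusters[chosen] = end
        else:
            chosen = len(clusters)                          # open a new cluster
            clusters.append(end)
        assignment[cookie] = chosen
    return assignment

spans = [("c1", 0, 5), ("c2", 1, 3), ("c3", 6, 9), ("c4", 4, 8), ("c5", 10, 12)]
print(cluster_cookie_intervals(spans))
```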
{
"docid": "b7d96b6334c1aab6d7496731aaea820e",
"text": "Dialogue intent analysis plays an important role for dialogue systems. In this paper,we present a deep hierarchical LSTM model to classify the intent of a dialogue utterance. The model is able to recognize and classify user’s dialogue intent in an efficient way. Moreover, we introduce a memory module to the hierarchical LSTM model, so that our model can utilize more context information to perform classification. We evaluate the two proposed models on a real-world conversational dataset from a Chinese famous e-commerce service. The experimental results show that our proposed model outperforms the baselines.",
"title": ""
},
{
"docid": "4d85bf20a514de0181fb33815d833c55",
"text": "STATEMENT OF PROBLEM\nDespite the increasing demand for a digital workflow in the fabrication of indirect restorations, information on the accuracy of the resulting definitive casts is limited.\n\n\nPURPOSE\nThe purpose of this in vitro study was to compare the accuracy of definitive casts produced with digital scans and conventional impressions.\n\n\nMATERIAL AND METHODS\nChamfer preparations were made on the maxillary right canine and second molar of a typodont. Subsequently, 9 conventional impressions were made to produce 9 gypsum casts, and 9 digital scans were made to produce stereolithography additive (SLA) casts from 2 manufacturers: 9 Dreve SLA casts and 9 Scanbiz SLA casts. All casts were then scanned 9 times with an extraoral scanner to produce the reference data set. Trueness was evaluated by superimposing the data sets obtained by scanning the casts with the reference data set. Precision was evaluated by analyzing the deviations among repeated scans. The root mean square (RMS) and percentage of points aligned within the nominal values (±50 μm) of the 3-dimensional analysis were calculated by the software.\n\n\nRESULTS\nGypsum had the best alignment (within 50 μm) with the reference data set (median 95.3%, IQR 16.7) and the least RMS (median 25.8 μm, IQR 14.6), followed by Dreve and Scanbiz. Differences in RMS were observed between gypsum and the SLA casts (P<.001). Within 50 μm, gypsum was superior to Scanbiz (P<.001). Gypsum casts exhibited the highest precision, showing the best alignment (within 50 μm) and the least RMS, followed by Scanbiz and Dreve.\n\n\nCONCLUSIONS\nThis study found that gypsum casts had higher accuracy than SLA casts. Within 50 μm, gypsum casts were better than Scanbiz SLA casts, while gypsum casts and Dreve SLA casts had similar trueness. Significant differences were found among the investigated SLA casts used in the digital workflow.",
"title": ""
},
{
"docid": "be58092e19830b87b5ad73eaf87a528c",
"text": "Moving object detection and tracking (D&T) are important initial steps in object recognition, context analysis and indexing processes for visual surveillance systems. It is a big challenge for researchers to make a decision on which D&T algorithm is more suitable for which situation and/or environment and to determine how accurately object D&T (real-time or non-real-time) is made. There is a variety of object D&T algorithms (i.e. methods) and publications on their performance comparison and evaluation via performance metrics. This paper provides a systematic review of these algorithms and performance measures and assesses their effectiveness via metrics.",
"title": ""
},
{
"docid": "307d9742739cbd2ade98c3d3c5d25887",
"text": "In this paper, we present a smart US imaging system (SMUS) based on an android-OS smartphone, which can provide maximally optimized efficacy in terms of weight and size in point-of-care diagnostic applications. The proposed SMUS consists of the smartphone (Galaxy S5 LTE-A, Samsung., Korea) and a 16-channel probe system. The probe system contains analog and digital front-ends, which conducts beamforming and mid-processing procedures. Otherwise, the smartphone performs the back-end processing including envelope detection, log compression, 2D image filtering, digital scan conversion, and image display with custom-made graphical user interface (GUI). Note that the probe system and smartphone are interconnected by the USB 3.0 protocol. As a result, the developed SMUS can provide real-time B-mode image with the sufficient frame rate (i.e., 58 fps), battery run-time for point-of-care diagnosis (i.e., 54 min), and 35.0°C of transducer surface temperature during B-mode imaging, which satisfies the temperature standards for the safety and effectiveness of medical electrical equipment, IEC 60601-1 (i.e., 43°C).",
"title": ""
},
{
"docid": "bd131d4f68ac8ef3ad2a1226f026322d",
"text": "Keywords: Vehicle fuel economy Eco-driving Human–machine interface Autonomous vehicle Driving simulator analysis a b s t r a c t Motor vehicle powered by regular gasoline is one of major sources of pollutants for local and global environment. The current study developed and validated a new fuel-economy optimization system (FEOS), which receives input from vehicle variables and environment variables (e.g., headway spacing) as input, mathematically computes the optimal acceler-ation/deceleration value with Lagrange multipliers method, and sends the optimal values to drivers via a human-machine interface (HMI) or automatic control systems of autonomous vehicles. FEOS can be used in both free-flow and car-following traffic conditions. An experimental study was conducted to evaluate FEOS. It was found that without sacrificing driver safety, drivers with the aid of FEOS consumed significant less fuel than those without FEOS in all acceleration conditions (22–31% overall gas saving) and the majority of deceleration conditions (12–26% overall gas saving). Compared to relative expensive vehicle engineering system design and improvement, FEOS provides a feasible way to minimize fuel consumptions considering human factors. Applications of the optimal model in the design of both HMI for vehicles with human drivers and autonomous vehicles were discussed. A number of alternatives have been put forward to improve fuel economy of motor vehicles and recently driving behaviors and energy efficient technologies have been seen to offer considerable potential for reducing fuel consumption. Additional while the exploitation of energy efficient technologies may take time to implement and be costly in terms of continuously having to satisfy consumer demands for safety, comfort, space and adequate acceleration and performance encouraging changes in driving behavior can be accomplished relatively quickly. One method to help drivers form appropriate driving behaviors is via the in-vehicle human–machine interface (HMI). For example, van der Voort et al. (2001) develop a fuel-efficiency support tool to present visual advice on optimal gear shifting to maximize fuel economy. Appropriate vehicle pedal operations, however, may contribute more than manual shifting operations to fuel economy (Brundell-Freij and Ericsson, 2005). Further pedal operations are applied for both manual-transmission and automatic-transmission vehicles with human drivers as well as autonomous vehicles, while gear shifting operations are only used for manual-transmission ones. Fuel consumption models have been developed to quantify the relationship between fuel consumption and vehicle characteristics , traffic or road conditions but these, in general, are only able to provide approximate fuel consumption estimates. As the model accuracy …",
"title": ""
},
{
"docid": "2b2c30fa2dc19ef7c16cf951a3805242",
"text": "A standard approach to estimating online click-based metrics of a ranking function is to run it in a controlled experiment on live users. While reliable and popular in practice, configuring and running an online experiment is cumbersome and time-intensive. In this work, inspired by recent successes of offline evaluation techniques for recommender systems, we study an alternative that uses historical search log to reliably predict online click-based metrics of a \\emph{new} ranking function, without actually running it on live users. To tackle novel challenges encountered in Web search, variations of the basic techniques are proposed. The first is to take advantage of diversified behavior of a search engine over a long period of time to simulate randomized data collection, so that our approach can be used at very low cost. The second is to replace exact matching (of recommended items in previous work) by \\emph{fuzzy} matching (of search result pages) to increase data efficiency, via a better trade-off of bias and variance. Extensive experimental results based on large-scale real search data from a major commercial search engine in the US market demonstrate our approach is promising and has potential for wide use in Web search.",
"title": ""
},
{
"docid": "953997d170fa1a4aafe643c328802a30",
"text": "Recently we have developed a new algorithm, PROVEAN (<u>Pro</u>tein <u>V</u>ariation <u>E</u>ffect <u>An</u>alyzer), for predicting the functional effect of protein sequence variations, including single amino acid substitutions and small insertions and deletions [2]. The prediction is based on the change, caused by a given variation, in the similarity of the query sequence to a set of its related protein sequences. For this prediction, the algorithm is required to compute a semi-global pairwise sequence alignment score between the query sequence and each of the related sequences. Using dynamic programming, it takes O(n · m) time to compute alignment score between the query sequence Q of length n and a related sequence S of length m. Thus given l different variations in Q, in a naive way it would take O(l · n · m) time to compute the alignment scores between each of the variant query sequences and S. In this paper, we present a new approach to efficiently compute the pairwise alignment scores for l variations, which takes O((n + l) · m) time when the length of variations is bounded by a constant. In this approach, we further utilize the solutions of overlapping subproblems, which are already used by dynamic programming approach. Our algorithm has been used to build a new database for precomputed prediction scores for all possible single amino acid substitutions, single amino acid insertions, and up to 10 amino acids deletions in about 91K human proteins (including isoforms), where l becomes very large, that is, l = O(n). The PROVEAN source code and web server are available at http://provean.jcvi.org.",
"title": ""
},
{
"docid": "21fb04bbdf23094a5967661787d1f2de",
"text": "We present a practical, stratified autocalibration algorithm with theoretical guarantees of global optimality. Given a projective reconstruction, the first stage of the algorithm upgrades it to affine by estimating the position of the plane at infinity. The plane at infinity is computed by globally minimizing a least squares formulation of the modulus constraints. In the second stage, the algorithm upgrades this affine reconstruction to a metric one by globally minimizing the infinite homography relation to compute the dual image of the absolute conic (DIAC). The positive semidefiniteness of the DIAC is explicitly enforced as part of the optimization process, rather than as a post-processing step. For each stage, we construct and minimize tight convex relaxations of the highly non-convex objective functions in a branch and bound optimization framework. We exploit the problem structure to restrict the search space for the DIAC and the plane at infinity to a small, fixed number of branching dimensions, independent of the number of views. Experimental evidence of the accuracy, speed and scalability of our algorithm is presented on synthetic and real data. MATLAB code for the implementation is made available to the community.",
"title": ""
},
{
"docid": "18c30c601e5f52d5117c04c85f95105b",
"text": "Crohn's disease is a relapsing systemic inflammatory disease, mainly affecting the gastrointestinal tract with extraintestinal manifestations and associated immune disorders. Genome wide association studies identified susceptibility loci that--triggered by environmental factors--result in a disturbed innate (ie, disturbed intestinal barrier, Paneth cell dysfunction, endoplasmic reticulum stress, defective unfolded protein response and autophagy, impaired recognition of microbes by pattern recognition receptors, such as nucleotide binding domain and Toll like receptors on dendritic cells and macrophages) and adaptive (ie, imbalance of effector and regulatory T cells and cytokines, migration and retention of leukocytes) immune response towards a diminished diversity of commensal microbiota. We discuss the epidemiology, immunobiology, amd natural history of Crohn's disease; describe new treatment goals and risk stratification of patients; and provide an evidence based rational approach to diagnosis (ie, work-up algorithm, new imaging methods [ie, enhanced endoscopy, ultrasound, MRI and CT] and biomarkers), management, evolving therapeutic targets (ie, integrins, chemokine receptors, cell-based and stem-cell-based therapies), prevention, and surveillance.",
"title": ""
},
{
"docid": "786540fad61e862657b778eb57fe1b24",
"text": "OBJECTIVE\nTo compare pharmacokinetics (PK) and pharmacodynamics (PD) of insulin glargine in type 2 diabetes mellitus (T2DM) after evening versus morning administration.\n\n\nRESEARCH DESIGN AND METHODS\nTen T2DM insulin-treated persons were studied during 24-h euglycemic glucose clamp, after glargine injection (0.4 units/kg s.c.), either in the evening (2200 h) or the morning (1000 h).\n\n\nRESULTS\nThe 24-h glucose infusion rate area under the curve (AUC0-24h) was similar in the evening and morning studies (1,058 ± 571 and 995 ± 691 mg/kg × 24 h, P = 0.503), but the first 12 h (AUC0-12h) was lower with evening versus morning glargine (357 ± 244 vs. 593 ± 374 mg/kg × 12 h, P = 0.004), whereas the opposite occurred for the second 12 h (AUC12-24h 700 ± 396 vs. 403 ± 343 mg/kg × 24 h, P = 0.002). The glucose infusion rate differences were totally accounted for by different rates of endogenous glucose production, not utilization. Plasma insulin and C-peptide levels did not differ in evening versus morning studies. Plasma glucagon levels (AUC0-24h 1,533 ± 656 vs. 1,120 ± 344 ng/L/h, P = 0.027) and lipolysis (free fatty acid AUC0-24h 7.5 ± 1.6 vs. 8.9 ± 1.9 mmol/L/h, P = 0.005; β-OH-butyrate AUC0-24h 6.8 ± 4.7 vs. 17.0 ± 11.9 mmol/L/h, P = 0.005; glycerol, P < 0.020) were overall more suppressed after evening versus morning glargine administration.\n\n\nCONCLUSIONS\nThe PD of insulin glargine differs depending on time of administration. With morning administration insulin activity is greater in the first 0-12 h, while with evening administration the activity is greater in the 12-24 h period following dosing. However, glargine PK and plasma C-peptide levels were similar, as well as glargine PD when analyzed by 24-h clock time independent of the time of administration. Thus, the results reflect the impact of circadian changes in insulin sensitivity in T2DM (lower in the night-early morning vs. afternoon hours) rather than glargine per se.",
"title": ""
},
{
"docid": "7fece61e99d0b461b04bcf0dfa81639d",
"text": "The rapid advancement of robotics technology in recent years has pushed the development of a distinctive field of robotic applications, namely robotic exoskeletons. Because of the aging population, more people are suffering from neurological disorders such as stroke, central nervous system disorder, and spinal cord injury. As manual therapy seems to be physically demanding for both the patient and therapist, robotic exoskeletons have been developed to increase the efficiency of rehabilitation therapy. Robotic exoskeletons are capable of providing more intensive patient training, better quantitative feedback, and improved functional outcomes for patients compared to manual therapy. This review emphasizes treadmill-based and over-ground exoskeletons for rehabilitation. Analyses of their mechanical designs, actuation systems, and integrated control strategies are given priority because the interactions between these components are crucial for the optimal performance of the rehabilitation robot. The review also discusses the limitations of current exoskeletons and technical challenges faced in exoskeleton development. A general perspective of the future development of more effective robot exoskeletons, specifically real-time biological synergy-based exoskeletons, could help promote brain plasticity among neurologically impaired patients and allow them to regain normal walking ability.",
"title": ""
},
{
"docid": "0e02a468a65909b93d3876f30a247ab1",
"text": "Implant therapy can lead to peri-implantitis, and none of the methods used to treat this inflammatory response have been predictably effective. It is nearly impossible to treat infected surfaces such as TiUnite (a titanium oxide layer) that promote osteoinduction, but finding an effective way to do so is essential. Experiments were conducted to determine the optimum irradiation power for stripping away the contaminated titanium oxide layer with Er:YAG laser irradiation, the degree of implant heating as a result of Er:YAG laser irradiation, and whether osseointegration was possible after Er:YAG laser microexplosions were used to strip a layer from the surface of implants placed in beagle dogs. The Er:YAG laser was effective at removing an even layer of titanium oxide, and the use of water spray limited heating of the irradiated implant, thus protecting the surrounding bone tissue from heat damage.",
"title": ""
},
{
"docid": "d2e25c512717399fdace99bf640c8843",
"text": "Credit card is one of the electronic payment mode and the fraud is committing of use of credit card in a fraudulent way either using credit or debit card. The purpose can be solved by purchasing the accessories without paying and giving unauthorized way of payment from account. In this paper, we are proposing the algorithm with the combined approach of hidden markov model and gentic algorithm of the data mining techniques and get the better result with respective of the individual approaches. Furthermore, there performance in term of precision, recall, F-measure has also increased comparative to the the state-of-the-art papers included in",
"title": ""
}
] |
scidocsrr
|
9b46d8b998dcaec1f2d5cebb6b5ff4bb
|
Light scattering from human hair fibers
|
[
{
"docid": "7f66cfc591970b3e90c54223cf8cf160",
"text": "A reflection and refraction model for anisotropic surfaces is introduced. The anisotropy is simulated by small cylinders (added or subtracted) distributed on the anisotropic surface. Different levels of anisotropy are achieved by varying the distance between each cylinder and/or rising the cylinders more or less from the surface. Multidirectional anisotropy is modelled by orienting groups of cylinders in different direction. The intensity of the reflected light is computed by determining the visible and illuminated portion of the cylinders, taking self-blocking into account. We present two techniques to compute this in practice. In one the intensity is computed by sampling the surface of the cylinders. The other is an analytic solution. In the case of the diffuse component, the solution is exact. In the case of the specular component, an approximation is developed using a Chebyshev polynomial approximation of the specular term, and integrating the polynomial.This model can be implemented easily within most rendering system, given a suitable mechanism to define and alter surface tangents. The effectiveness of the model and the visual importance of anisotropy are illustrated with some pictures.",
"title": ""
}
] |
[
{
"docid": "b9538c45fc55caff8b423f6ecc1fe416",
"text": " Summary. The Probabilistic I/O Automaton model of [31] is used as the basis for a formal presentation and proof of the randomized consensus algorithm of Aspnes and Herlihy. The algorithm guarantees termination within expected polynomial time. The Aspnes-Herlihy algorithm is a rather complex algorithm. Processes move through a succession of asynchronous rounds, attempting to agree at each round. At each round, the agreement attempt involves a distributed random walk. The algorithm is hard to analyze because of its use of nontrivial results of probability theory (specifically, random walk theory which is based on infinitely many coin flips rather than on finitely many coin flips), because of its complex setting, including asynchrony and both nondeterministic and probabilistic choice, and because of the interplay among several different sub-protocols. We formalize the Aspnes-Herlihy algorithm using probabilistic I/O automata. In doing so, we decompose it formally into three subprotocols: one to carry out the agreement attempts, one to conduct the random walks, and one to implement a shared counter needed by the random walks. Properties of all three subprotocols are proved separately, and combined using general results about automaton composition. It turns out that most of the work involves proving non-probabilistic properties (invariants, simulation mappings, non-probabilistic progress properties, etc.). The probabilistic reasoning is isolated to a few small sections of the proof. The task of carrying out this proof has led us to develop several general proof techniques for probabilistic I/O automata. These include ways to combine expectations for different complexity measures, to compose expected complexity properties, to convert probabilistic claims to deterministic claims, to use abstraction mappings to prove probabilistic properties, and to apply random walk theory in a distributed computational setting. We apply all of these techniques to analyze the expected complexity of the algorithm.",
"title": ""
},
{
"docid": "e066f0670583195b9ad2f3c888af1dd2",
"text": "Deep learning has received much attention as of the most powerful approaches for multimodal representation learning in recent years. An ideal model for multimodal data can reason about missing modalities using the available ones, and usually provides more information when multiple modalities are being considered. All the previous deep models contain separate modality-specific networks and find a shared representation on top of those networks. Therefore, they only consider high level interactions between modalities to find a joint representation for them. In this paper, we propose a multimodal deep learning framework (MDLCW) that exploits the cross weights between representation of modalities, and try to gradually learn interactions of the modalities in a deep network manner (from low to high level interactions). Moreover, we theoretically show that considering these interactions provide more intra-modality information, and introduce a multi-stage pre-training method that is based on the properties of multi-modal data. In the proposed framework, as opposed to the existing deep methods for multi-modal data, we try to reconstruct the representation of each modality at a given level, with representation of other modalities in the previous layer. Extensive experimental results show that the proposed model outperforms state-of-the-art information retrieval methods for both image and text queries on the PASCAL-sentence and SUN-Attribute databases.",
"title": ""
},
{
"docid": "9422f8c85859aca10e7d2a673b0377ba",
"text": "Many adolescents are experiencing a reduction in sleep as a consequence of a variety of behavioral factors (e.g., academic workload, social and employment opportunities), even though scientific evidence suggests that the biological need for sleep increases during maturation. Consequently, the ability to effectively interact with peers while learning and processing novel information may be diminished in many sleepdeprived adolescents. Furthermore, sleep deprivation may account for reductions in cognitive efficiency in many children and adolescents with special education needs. In response to recognition of this potential problem by parents, educators, and scientists, some school districts have implemented delayed bus schedules and school start times to allow for increased sleep duration for high school students, in an effort to increase academic performance and decrease behavioral problems. The long-term effects of this change are yet to be determined; however, preliminary studies suggest that the short-term impact on learning and behavior has been beneficial. Thus, many parents, teachers, and scientists are supporting further consideration of this information to formulate policies that may maximize learning and developmental opportunities for children. Although changing school start times may be an effective method to combat sleep deprivation in most adolescents, some adolescents experience sleep deprivation and consequent diminished daytime performance because of common underlying sleep disorders (e.g., asthma or sleep apnea). In such cases, surgical, pharmaceutical, or respiratory therapy, or a combination of the three, interventions are required to restore normal sleep and daytime performance.",
"title": ""
},
{
"docid": "18df6df67ced4564b3873d487a25f2d9",
"text": "The past few years have seen a dramatic increase in the performance of recognition systems thanks to the introduction of deep networks for representation learning. However, the mathematical reasons for this success remain elusive. A key issue is that the neural network training problem is nonconvex, hence optimization algorithms may not return a global minima. This paper provides sufficient conditions to guarantee that local minima are globally optimal and that a local descent strategy can reach a global minima from any initialization. Our conditions require both the network output and the regularization to be positively homogeneous functions of the network parameters, with the regularization being designed to control the network size. Our results apply to networks with one hidden layer, where size is measured by the number of neurons in the hidden layer, and multiple deep subnetworks connected in parallel, where size is measured by the number of subnetworks.",
"title": ""
},
{
"docid": "d5d96493b34cfbdf135776e930ec5979",
"text": "We propose an approach for the static analysis of probabilistic programs that sense, manipulate, and control based on uncertain data. Examples include programs used in risk analysis, medical decision making and cyber-physical systems. Correctness properties of such programs take the form of queries that seek the probabilities of assertions over program variables. We present a static analysis approach that provides guaranteed interval bounds on the values (assertion probabilities) of such queries. First, we observe that for probabilistic programs, it is possible to conclude facts about the behavior of the entire program by choosing a finite, adequate set of its paths. We provide strategies for choosing such a set of paths and verifying its adequacy. The queries are evaluated over each path by a combination of symbolic execution and probabilistic volume-bound computations. Each path yields interval bounds that can be summed up with a \"coverage\" bound to yield an interval that encloses the probability of assertion for the program as a whole. We demonstrate promising results on a suite of benchmarks from many different sources including robotic manipulators and medical decision making programs.",
"title": ""
},
{
"docid": "fcef7ce729a08a5b8c6ed1d0f2d53633",
"text": "Community question-answering (CQA) systems, such as Yahoo! Answers or Stack Overflow, belong to a prominent group of successful and popular Web 2.0 applications, which are used every day by millions of users to find an answer on complex, subjective, or context-dependent questions. In order to obtain answers effectively, CQA systems should optimally harness collective intelligence of the whole online community, which will be impossible without appropriate collaboration support provided by information technologies. Therefore, CQA became an interesting and promising subject of research in computer science and now we can gather the results of 10 years of research. Nevertheless, in spite of the increasing number of publications emerging each year, so far the research on CQA systems has missed a comprehensive state-of-the-art survey. We attempt to fill this gap by a review of 265 articles published between 2005 and 2014, which were selected from major conferences and journals. According to this evaluation, at first we propose a framework that defines descriptive attributes of CQA approaches. Second, we introduce a classification of all approaches with respect to problems they are aimed to solve. The classification is consequently employed in a review of a significant number of representative approaches, which are described by means of attributes from the descriptive framework. As a part of the survey, we also depict the current trends as well as highlight the areas that require further attention from the research community.",
"title": ""
},
{
"docid": "85e5eb2818b46f7dc571600486aa10d6",
"text": "Electronic commerce is an increasingly popular business model with a wide range of tools available to firms. An application that is becoming more common is the use of self-service technologies (SSTs), such as telephone banking, automated hotel checkout, and online investment trading, whereby customers produce services for themselves without assistance from firm employees. Widespread introduction of SSTs is apparent across industries, yet relatively little is known about why customers decide to try SSTs and why some SSTs are more widely accepted than others. In this research, the authors explore key factors that influence the initial SST trial decision, specifically focusing on actual behavior in situations in which the consumer has a choice among delivery modes. The authors show that the consumer readiness variables of role clarity, motivation, and ability are key mediators between established adoption constructs (innovation characteristics and individual differences) and the likelihood of trial.",
"title": ""
},
{
"docid": "545bd32c5c64eed3b780768e1862168a",
"text": "This position paper discusses AI challenges in the area of real–time strategy games and presents a research agenda aimed at improving AI performance in these popular multi– player computer games. RTS Games and AI Research Real–time strategy (RTS) games such as Blizzard Entertainment’s Starcraft(tm) and Warcraft(tm) series form a large and growing part of the multi–billion dollar computer games industry. In these games several players fight over resources, which are scattered over a terrain, by first setting up economies, building armies, and ultimately trying to eliminate all enemy units and buildings. The current AI performance in commercial RTS games is poor. The main reasons why the AI performance in RTS games is lagging behind developments in related areas such as classic board games are the following: • RTS games feature hundreds or even thousands of interacting objects, imperfect information, and fast–paced micro–actions. By contrast, World–class game AI systems mostly exist for turn–based perfect information games in which the majority of moves have global consequences and human planning abilities therefore can be outsmarted by mere enumeration. • Video games companies create titles under severe time constraints and do not have the resources and incentive (yet) to engage in AI research. • Multi–player games often do not require World–class AI performance in order to be commercially successful as long as there are enough human players interested in playing the game on–line. • RTS games are complex which means that it is not easy to set up an RTS game infrastructure for conducting AI experiments. Closed commercial RTS game software without AI interfaces does not help, either. The result is a lack of AI competition in this area which in the classic games sector is one of the most important driving forces of AI research. Copyright c © 2004, American Association for Artificial Intelligence (www.aaai.org). All rights reserved. To get a feeling for the vast complexity of RTS games, imagine to play chess on a 512×512 board with hundreds of slow simultaneously moving pieces, player views restricted to small areas around their own pieces, and the ability to gather resources and create new material. While human players sometimes struggle with micro– managing all their objects, it is the incremental nature of the actions that allows them to outperform any existing RTS game AI. The difference to classic abstract games like chess and Othello in this respect is striking: many moves in these games have immediate global effects. This makes it hard for human players to consider deep variations with all their consequences. On the other hand, computers programs conducting full–width searches with selective extensions excel in complex combinatorial situations. A notable exception is the game of go in which — like in RTS games — moves often have only incremental effects and today’s best computer programs are still easily defeated by amateurs (Müller 2002). It is in these domains where the human abilities to abstract, generalize, reason, learn, and plan shine and the current commercial RTS AI systems — which do not reason nor adapt — fail. 
Other arguments in favor of AI research in RTS games are: • (RTS) games constitute well–defined environments to conduct experiments in and offer straight–forward objective ways of measuring performance, • RTS games can be tailored to focus on specific aspects such as how to win local fights, how to scout effectively, how to build, attack, and defend a town, etc., • Strong game AI will likely make a difference in future commercial games because graphics improvements are beginning to saturate. Furthermore, smarter robot enemies and allies definitely add to the game experience as they are available 24 hours a day and do not get tired. • The current state of RTS game AI is so bad that there are a lot of low–hanging fruits waiting to be picked. Examples include research on smart game interfaces that alleviate human players from tedious tasks such as manually concentrating fire in combat. Game AI can also help in the development of RTS games — for instance by providing tools for unit balancing. • Finally, progress in RTS game AI is also of interest for the military which uses battle simulations in training programs (Herz & Macedonia 2002) and also pursues research into autonomous weapon systems.",
"title": ""
},
{
"docid": "a3afea380667f2f088f37ae9127fb05a",
"text": "This paper presents a new distributed approach to detecting DDoS (distributed denial of services) flooding attacks at the traffic-flow level The new defense system is suitable for efficient implementation over the core networks operated by Internet service providers (ISPs). At the early stage of a DDoS attack, some traffic fluctuations are detectable at Internet routers or at the gateways of edge networks. We develop a distributed change-point detection (DCD) architecture using change aggregation trees (CAT). The idea is to detect abrupt traffic changes across multiple network domains at the earliest time. Early detection of DDoS attacks minimizes the floe cling damages to the victim systems serviced by the provider. The system is built over attack-transit routers, which work together cooperatively. Each ISP domain has a CAT server to aggregate the flooding alerts reported by the routers. CAT domain servers collaborate among themselves to make the final decision. To resolve policy conflicts at different ISP domains, a new secure infrastructure protocol (SIP) is developed to establish mutual trust or consensus. We simulated the DCD system up to 16 network domains on the Cyber Defense Technology Experimental Research (DETER) testbed, a 220-node PC cluster for Internet emulation experiments at the University of Southern California (USC) Information Science Institute. Experimental results show that four network domains are sufficient to yield a 98 percent detection accuracy with only 1 percent false-positive alarms. Based on a 2006 Internet report on autonomous system (AS) domain distribution, we prove that this DDoS defense system can scale well to cover 84 AS domains. This security coverage is wide enough to safeguard most ISP core networks from real-life DDoS flooding attacks.",
"title": ""
},
{
"docid": "672fa729e41d20bdd396f9de4ead36b3",
"text": "Data that encompasses relationships is represented by a graph of interconnected nodes. Social network analysis is the study of such graphs which examines questions related to structures and patterns that can lead to the understanding of the data and predicting the trends of social networks. Static analysis, where the time of interaction is not considered (i.e., the network is frozen in time), misses the opportunity to capture the evolutionary patterns in dynamic networks. Specifically, detecting the community evolutions, the community structures that changes in time, provides insight into the underlying behaviour of the network. Recently, a number of researchers have started focusing on identifying critical events that characterize the evolution of communities in dynamic scenarios. In this paper, we present a framework for modeling and detecting community evolution in social networks, where a series of significant events is defined for each community. A community matching algorithm is also proposed to efficiently identify and track similar communities over time. We also define the concept of meta community which is a series of similar communities captured in different timeframes and detected by our matching algorithm. We illustrate the capabilities and potential of our framework by applying it to two real datasets. Furthermore, the events detected by the framework is supplemented by extraction and investigation of the topics discovered for each community. c © 2011 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "6b7bc505296093ded055e96bb344b42a",
"text": "Cellular network operators are always seeking to increase the area of coverage of their networks, open up new markets and provide services to potential customers in remote rural areas. However, increased energy consumption, operator energy cost and the potential environmental impact of increased greenhouse gas emissions and the exhaustion of non-renewable energy resources (fossil fuel) pose major challenges to cellular network operators. The specific power supply needs for rural base stations (BSs) such as cost-effectiveness, efficiency, sustainability and reliability can be satisfied by taking advantage of the technological advances in renewable energy. This study investigates the possibility of decreasing both operational expenditure (OPEX) and greenhouse gas emissions with guaranteed sustainability and reliability for rural BSs using a solar photovoltaic/diesel generator hybrid power system. Three key aspects have been investigated: (i) energy yield, (ii) economic factors and (iii) greenhouse gas emissions. The results showed major benefits for mobile operators in terms of both environmental conservation and OPEX reduction, with an average annual OPEX savings of 43% to 47% based on the characteristics of solar radiation exposure in Malaysia. Finally, the paper compares the feasibility of using the proposed approach in a four-season country and compares the results against results obtained in Malaysia, which is a country with a tropical climate.",
"title": ""
},
{
"docid": "41e04cbe2ca692cb65f2909a11a4eb5b",
"text": "Bitcoin’s core innovation is its solution to double-spending, called Nakamoto consensus. This mechanism provides a probabilistic guarantee that transactions will not be reversed once they are sufficiently deep in the blockchain, assuming an attacker controls a bounded fraction of mining power in the network. We show, however, that when miners are rational this guarantee can be undermined by a whale attack in which an attacker issues an off-theblockchain whale transaction with an anomalously large transaction fee in an effort to convince miners to fork the current chain. We carry out a game-theoretic analysis and simulation of this attack, and show conditions under which it yields an expected positive payoff for the attacker.",
"title": ""
},
{
"docid": "b08f67bc9b84088f8298b35e50d0b9c5",
"text": "This review examines different nutritional guidelines, some case studies, and provides insights and discrepancies, in the regulatory framework of Food Safety Management of some of the world's economies. There are thousands of fermented foods and beverages, although the intention was not to review them but check their traditional and cultural value, and if they are still lacking to be classed as a category on different national food guides. For understanding the inconsistencies in claims of concerning fermented foods among various regulatory systems, each legal system should be considered unique. Fermented foods and beverages have long been a part of the human diet, and with further supplementation of probiotic microbes, in some cases, they offer nutritional and health attributes worthy of recommendation of regular consumption. Despite the impact of fermented foods and beverages on gastro-intestinal wellbeing and diseases, their many health benefits or recommended consumption has not been widely translated to global inclusion in world food guidelines. In general, the approach of the legal systems is broadly consistent and their structures may be presented under different formats. African traditional fermented products are briefly mentioned enhancing some recorded adverse effects. Knowing the general benefits of traditional and supplemented fermented foods, they should be a daily item on most national food guides.",
"title": ""
},
{
"docid": "ba7b51dc253da1a17aaf12becb1abfed",
"text": "This papers aims to design a new approach in order to increase the performance of the decision making in model-based fault diagnosis when signature vectors of various faults are identical or closed. The proposed approach consists on taking into account the knowledge issued from the reliability analysis and the model-based fault diagnosis. The decision making, formalised as a bayesian network, is established with a priori knowledge on the dynamic component degradation through Markov chains. The effectiveness and performances of the technique are illustrated on a heating water process corrupted by faults. Copyright © 2006 IFAC",
"title": ""
},
{
"docid": "00357ea4ef85efe5cd2080e064ddcd06",
"text": "The cumulative match curve (CMC) is used as a measure of 1: m identification system performance. It judges the ranking capabilities of an identification system. The receiver operating characteristic curve (ROC curve) of a verification system, on the other hand, expresses the quality of a 1:1 matcher. The ROC plots the false accept rate (FAR) of a 1:1 matcher versus the false reject rate (FRR) of the matcher. We show that the CMC is also related to the FAR and FRR of a 1:1 matcher, i.e., the matcher that is used to rank the candidates by sorting the scores. This has as a consequence that when a 1:1 matcher is used for identification, that is, for sorting match scores from high to low, the CMC does not offer any additional information beyond the FAR and FRR curves. The CMC is just another way of displaying the data and can be computed from the FAR and FRR.",
"title": ""
},
{
"docid": "18f8d1fef840c1a4441b5949d6b97d9e",
"text": "Geospatial web service of agricultural information has a wide variety of consumers. An operational agricultural service will receive considerable requests and process a huge amount of datasets each day. To ensure the service quality, many strategies have to be taken during developing and deploying agricultural information services. This paper presents a set of methods to build robust geospatial web service for agricultural information extraction and sharing. The service is designed to serve the public and handle heavy-load requests for a long-lasting term with least maintenance. We have developed a web service to validate our approach. The service is used to serve more than 10 TB data product of agricultural drought. The performance is tested. The result shows that the service has an excellent response time and the use of system resources is stable. We have plugged the service into an operational system for global drought monitoring. The statistics and feedbacks show our approach is feasible and efficient in operational web systems.",
"title": ""
},
{
"docid": "b205efe2ce90ec2ee3a394dd01202b60",
"text": "Recurrent Neural Networks (RNNs) is a sub type of neural networks that use feedback connections. Several types of RNN models are used in predicting financial time series. This study was conducted to develop models to predict daily stock prices of selected listed companies of Colombo Stock Exchange (CSE) based on Recurrent Neural Network (RNN) Approach and to measure the accuracy of the models developed and identify the shortcomings of the models if present. Feedforward, Simple Recurrent Neural Network (SRNN), Gated Recurrent Unit (GRU) and Long Short Term Memory (LSTM) architectures were employed in building models. Closing, High and Low prices of past two days were selected as input variables for each company. Feedforward networks produce the highest and lowest forecasting errors. The forecasting accuracy of the best feedforward networks is approximately 99%. SRNN and LSTM networks generally produce lower errors compared with feedforward networks but in some occasions, the error is higher than feed forward networks. Compared to other two networks, GRU networks are having comparatively higher forecasting errors.",
"title": ""
},
{
"docid": "29549f0cb8b45d6b39e58c9a9237431f",
"text": "Over the past 5 years, the advent of echocardiographic screening for rheumatic heart disease (RHD) has revealed a higher RHD burden than previously thought. In light of this global experience, the development of new international echocardiographic guidelines that address the full spectrum of the rheumatic disease process is opportune. Systematic differences in the reporting of and diagnostic approach to RHD exist, reflecting differences in local experience and disease patterns. The World Heart Federation echocardiographic criteria for RHD have, therefore, been developed and are formulated on the basis of the best available evidence. Three categories are defined on the basis of assessment by 2D, continuous-wave, and color-Doppler echocardiography: 'definite RHD', 'borderline RHD', and 'normal'. Four subcategories of 'definite RHD' and three subcategories of 'borderline RHD' exist, to reflect the various disease patterns. The morphological features of RHD and the criteria for pathological mitral and aortic regurgitation are also defined. The criteria are modified for those aged over 20 years on the basis of the available evidence. The standardized criteria aim to permit rapid and consistent identification of individuals with RHD without a clear history of acute rheumatic fever and hence allow enrollment into secondary prophylaxis programs. However, important unanswered questions remain about the importance of subclinical disease (borderline or definite RHD on echocardiography without a clinical pathological murmur), and about the practicalities of implementing screening programs. These standardized criteria will help enable new studies to be designed to evaluate the role of echocardiographic screening in RHD control.",
"title": ""
}
] |
scidocsrr
|
d733e2c69669815f4740c8c59aa23382
|
Median Filtering Forensics Based on Convolutional Neural Networks
|
[
{
"docid": "27ad413fa5833094fb2e557308fa761d",
"text": "A common practice to gain invariant features in object recognition models is to aggregate multiple low-level features over a small neighborhood. However, the differences between those models makes a comparison of the properties of different aggregation functions hard. Our aim is to gain insight into different functions by directly comparing them on a fixed architecture for several common object recognition tasks. Empirical results show that a maximum pooling operation significantly outperforms subsampling operations. Despite their shift-invariant properties, overlapping pooling windows are no significant improvement over non-overlapping pooling windows. By applying this knowledge, we achieve state-of-the-art error rates of 4.57% on the NORB normalized-uniform dataset and 5.6% on the NORB jittered-cluttered dataset.",
"title": ""
},
{
"docid": "38d7565371e8faede8ed06e6623bb40a",
"text": "Exposing the processing history of a digital image is an important problem for forensic analyzers and steganalyzers. As the median filter is a popular nonlinear denoising operator, the blind forensics of median filtering is particularly interesting. This paper proposes a novel approach for detecting median filtering in digital images, which can 1) accurately detect median filtering in arbitrary images, even reliably detect median filtering in low-resolution and JPEG compressed images; and 2) reliably detect tampering when part of a median-filtered image is inserted into a nonmedian-filtered image, or vice versa. The effectiveness of the proposed approach is exhaustively evaluated in five different image databases.",
"title": ""
},
{
"docid": "2a56702663e6e52a40052a5f9b79a243",
"text": "Many successful models for scene or object recognition transform low-level descriptors (such as Gabor filter responses, or SIFT descriptors) into richer representations of intermediate complexity. This process can often be broken down into two steps: (1) a coding step, which performs a pointwise transformation of the descriptors into a representation better adapted to the task, and (2) a pooling step, which summarizes the coded features over larger neighborhoods. Several combinations of coding and pooling schemes have been proposed in the literature. The goal of this paper is threefold. We seek to establish the relative importance of each step of mid-level feature extraction through a comprehensive cross evaluation of several types of coding modules (hard and soft vector quantization, sparse coding) and pooling schemes (by taking the average, or the maximum), which obtains state-of-the-art performance or better on several recognition benchmarks. We show how to improve the best performing coding scheme by learning a supervised discriminative dictionary for sparse coding. We provide theoretical and empirical insight into the remarkable performance of max pooling. By teasing apart components shared by modern mid-level feature extractors, our approach aims to facilitate the design of better recognition architectures.",
"title": ""
}
] |
[
{
"docid": "c5dfef21843d2cc1893ec1dc88787050",
"text": "Automatic synthesis of faces from visual attributes is an important problem in computer vision and has wide applications in law enforcement and entertainment. With the advent of deep generative convolutional neural networks (CNNs), attempts have been made to synthesize face images from attributes and text descriptions. In this paper, we take a different approach, where we formulate the original problem as a stage-wise learning problem. We first synthesize the facial sketch corresponding to the visual attributes and then we reconstruct the face image based on the synthesized sketch. The proposed Attribute2Sketch2Face framework, which is based on a combination of deep Conditional Variational Autoencoder (CVAE) and Generative Adversarial Networks (GANs), consists of three stages: (1) Synthesis of facial sketch from attributes using a CVAE architecture, (2) Enhancement of coarse sketches to produce sharper sketches using a GANbased framework, and (3) Synthesis of face from sketch using another GAN-based network. Extensive experiments and comparison with recent methods are performed to verify the effectiveness of the proposed attributebased three stage face synthesis method.",
"title": ""
},
{
"docid": "226882264b7582aeb1769ab49952fe37",
"text": "A novel method aimed at reducing radar cross section (RCS) under incident waves with both x- and y-polarizations, with the radiation characteristics of the antenna preserved, is presented and investigated. The goal is accomplished by the implementation of the polarization conversion metamaterial (PCM) and the principle of passive cancellation. As a test case, a microstrip patch antenna is simulated and experimentally measured to demonstrate the proposed strategy for dramatic radar cross section reduction (RCSR). Results exhibit that in-band RCSR is as much as 16 dB compared to the reference antenna. In addition, the PCM has a contribution to a maximum RCSR value of 14 dB out of the operating band. With significant RCSR and unobvious effect on the radiation performance of the antenna, the proposed method has a wide application for the design of other antennas with a requirement of RCS control.",
"title": ""
},
{
"docid": "cabc169975aac3d22df5811992ca38a8",
"text": "A literature review was conducted, using the computerized “Online Mendelian Inheritance in Man” (OMIM) and PubMed, to identify inborn errors of metabolism (IEM) in which psychosis may be a predominant feature or the initial presenting symptom. Different combinations of the following keywords were searched using OMIM: “psychosis”, “schizophrenia”, or “hallucinations” and “metabolic”, “inborn error of metabolism”, “inborn errors of metabolism”, “biochemical genetics”, or “metabolic genetics”. The OMIM search generated 126 OMIM entries, 40 of which were well known IEM. After removing IEM lacking evidence in PubMed for an association with psychosis, 29 OMIM entries were identified. Several of these IEM are treatable. They involve different small organelles (lysosomes, peroxisomes, mitochondria), iron or copper accumulation, as well as defects in other met-abolic pathways (e.g., defects leading to hyperammonemia or homocystinemia). A clinical checklist summarizing the key features of these conditions and a guide to clinical approach are provided. The genes corresponding to each of these con-ditions were identified. Whole exome data from 2545 adult cases with schizophrenia and 2545 unrelated controls, accessed via the Database of Genotypes and Phenotypes (dbGaP), were analyzed for rare functional variants in these genes. The odds ratio of having a rare functional variant in cases versus controls was calculated for each gene. Eight genes are significantly associated with schizophrenia (p < 0.05, OR >1) using an unselected group of adult patients with schizophrenia. Increased awareness of clinical clues for these IEM will optimize referrals and timely metabolic interventions.",
"title": ""
},
{
"docid": "33b8012ae66f07c9de158f4c514c4e99",
"text": "Many mathematicians have a dismissive attitude towards paradoxes. This is unfortunate, because many paradoxes are rich in content, having connections with serious mathematical ideas as well as having pedagogical value in teaching elementary logical reasoning. An excellent example is the so-called “surprise examination paradox” (described below), which is an argument that seems at first to be too silly to deserve much attention. However, it has inspired an amazing variety of philosophical and mathematical investigations that have in turn uncovered links to Gödel’s incompleteness theorems, game theory, and several other logical paradoxes (e.g., the liar paradox and the sorites paradox). Unfortunately, most mathematicians are unaware of this because most of the literature has been published in philosophy journals.",
"title": ""
},
{
"docid": "c14763b69b668ec8a999467e2a03ca73",
"text": "Item Response Theory is based on the application of related mathematical models to testing data. Because it is generally regarded as superior to classical test theory, it is the preferred method for developing scales, especially when optimal decisions are demanded, as in so-called high-stakes tests. The term item is generic: covering all kinds of informative item. They might be multiple choice questions that have incorrect and correct responses, but are also commonly statements on questionnaires that allow respondents to indicate level of agreement (a rating or Likert scale), or patient symptoms scored as present/absent, or diagnostic information in complex systems. IRT is based on the idea that the probability of a correct/keyed response to an item is a mathematical function of person and item parameters. The person parameter is construed as (usually) a single latent trait or dimension. Examples include general intelligence or the strength of an attitude.",
"title": ""
},
{
"docid": "04b9ced45b041360234256159cb41d95",
"text": "Because stochastic gradient descent (SGD) has shown promise optimizing neural networks with millions of parameters and few if any alternatives are known to exist, it has moved to the heart of leading approaches to reinforcement learning (RL). For that reason, the recent result from OpenAI showing that a particular kind of evolution strategy (ES) can rival the performance of SGD-based deep RL methods with large neural networks provoked surprise. This result is difficult to interpret in part because of the lingering ambiguity on how ES actually relates to SGD. The aim of this paper is to significantly reduce this ambiguity through a series of MNIST-based experiments designed to uncover their relationship. As a simple supervised problem without domain noise (unlike in most RL), MNIST makes it possible (1) to measure the correlation between gradients computed by ES and SGD and (2) then to develop an SGD-based proxy that accurately predicts the performance of different ES population sizes. These innovations give a new level of insight into the real capabilities of ES, and lead also to some unconventional means for applying ES to supervised problems that shed further light on its differences from SGD. Incorporating these lessons, the paper concludes by demonstrating that ES can achieve 99% accuracy on MNIST, a number higher than any previously published result for any evolutionary method. While not by any means suggesting that ES should substitute for SGD in supervised learning, the suite of experiments herein enables more informed decisions on the application of ES within RL and other paradigms.",
"title": ""
},
{
"docid": "58df7e4be17aa19eccdfa61c26d7873b",
"text": "Temperature-related studies were conducted on Drosophila suzukii Matsumura (Diptera: Drosophilidae: Drosophilini). From 10-28°C, temperature had a significant impact on blueberries, Vaccinium corymbosum L. (Ericales: Ericaceae), and cherries, Prunus avium (L.) L. 1755 (Rosales: Rosaceae), important commercial hosts of D. suzukii. Temperature had a significant influence on D. suzukii developmental period, survival, and fecundity, with decreasing developmental periods as temperatures increased to 28°C. At 30°C, the highest temperature tested, development periods increased, indicating that above this temperature the developmental extremes for the species were approached. D. suzukii reared on blueberries had lower fecundity than reared on cherries at all temperatures where reproduction occurred. The highest net reproductive rate (R(o)) and intrinsic rate of population increase (r(m)) were recorded on cherries at 22°C and was 195.1 and 0.22, respectively. Estimations using linear and nonlinear fit for the minimum, optimal, and maximum temperatures where development can take place were respectively, 7.2, 28.1, and 42.1°C. The r(m) values were minimal, optimal, and maximal at 13.4, 21.0, and 29.3°C, respectively. Our laboratory cultures of D. suzukii displayed high rates of infection for Wolbachia spp. (Rickettsiales: Rickettsiaceae), and this infection may have impacted fecundity found in this study. A temperature-dependent matrix population estimation model using fecundity and survival data were run to determine whether these data could predict D. suzukii pressure based on environmental conditions. The model was applied to compare the 2011 and 2012 crop seasons in an important cherry production region. Population estimates using the model explained different risk levels during the key cherry harvest period between these seasons.",
"title": ""
},
{
"docid": "687ecd0aadd3c29d641827f7c43e91cd",
"text": "When different stakeholders share a common resource, such as the case in spectrum sharing, security and enforcement become critical considerations that affect the welfare of all stakeholders. Recent advances in radio spectrum access technologies, such as cognitive radios, have made spectrum sharing a viable option for significantly improving spectrum utilization efficiency. However, those technologies have also contributed to exacerbating the difficult problems of security and enforcement. In this paper, we review some of the critical security and privacy threats that impact spectrum sharing. We propose a taxonomy for classifying the various threats, and describe representative examples for each threat category. We also discuss threat countermeasures and enforcement techniques, which are discussed in the context of two different approaches: ex ante (preventive) and ex post (punitive) enforcement.",
"title": ""
},
{
"docid": "c03a0bd78edcb7ebde0321ca7479853d",
"text": "The evolution of speech can be studied independently of the evolution of language, with the advantage that most aspects of speech acoustics, physiology and neural control are shared with animals, and thus open to empirical investigation. At least two changes were necessary prerequisites for modern human speech abilities: (1) modification of vocal tract morphology, and (2) development of vocal imitative ability. Despite an extensive literature, attempts to pinpoint the timing of these changes using fossil data have proven inconclusive. However, recent comparative data from nonhuman primates have shed light on the ancestral use of formants (a crucial cue in human speech) to identify individuals and gauge body size. Second, comparative analysis of the diverse vertebrates that have evolved vocal imitation (humans, cetaceans, seals and birds) provides several distinct, testable hypotheses about the adaptive function of vocal mimicry. These developments suggest that, for understanding the evolution of speech, comparative analysis of living species provides a viable alternative to fossil data. However, the neural basis for vocal mimicry and for mimesis in general remains unknown.",
"title": ""
},
{
"docid": "5785ad50b61fb6287eeab5f43b3cbf66",
"text": "The design of new HEVC extensions comes with the need for careful analysis of internal HEVC codec decisions. Several bitstream analyzers have evolved for this purpose and provide a visualization of encoder decisions as seen from a decoder viewpoint. None of the existing solutions is able to provide actual insight into the encoder and its RDO decision process. With one exception, all solutions are closed source and make adaption of their code to specific implementation needs impossible. Overall, development with the HM code base remains a time-consuming task. Here, we present the HEVC Analyzer for Rapid Prototyping (HARP), which directly addresses the above issues and is freely available under www.lms.lnt.de/HARP.",
"title": ""
},
{
"docid": "88bb56e36c493ed2ac723acbc6090f2b",
"text": "In this paper, we propose a generic point cloud encoder that provides a unified framework for compressing different attributes of point samples corresponding to 3D objects with an arbitrary topology. In the proposed scheme, the coding process is led by an iterative octree cell subdivision of the object space. At each level of subdivision, the positions of point samples are approximated by the geometry centers of all tree-front cells, whereas normals and colors are approximated by their statistical average within each of the tree-front cells. With this framework, we employ attribute-dependent encoding techniques to exploit the different characteristics of various attributes. All of these have led to a significant improvement in the rate-distortion (R-D) performance and a computational advantage over the state of the art. Furthermore, given sufficient levels of octree expansion, normal space partitioning, and resolution of color quantization, the proposed point cloud encoder can be potentially used for lossless coding of 3D point clouds.",
"title": ""
},
{
"docid": "666137f1b598a25269357d6926c0b421",
"text": "representation techniques. T he World Wide Web is possible because a set of widely established standards guarantees interoperability at various levels. Until now, the Web has been designed for direct human processing, but the next-generation Web, which Tim Berners-Lee and others call the “Semantic Web,” aims at machine-processible information.1 The Semantic Web will enable intelligent services—such as information brokers, search agents, and information filters—which offer greater functionality and interoperability than current stand-alone services. The Semantic Web will only be possible once further levels of interoperability have been established. Standards must be defined not only for the syntactic form of documents, but also for their semantic content. Notable among recent W3C standardization efforts are XML/XML schema and RDF/RDF schema, which facilitate semantic interoperability. In this article, we explain the role of ontologies in the architecture of the Semantic Web. We then briefly summarize key elements of XML and RDF, showing why using XML as a tool for semantic interoperability will be ineffective in the long run. We argue that a further representation and inference layer is needed on top of the Web’s current layers, and to establish such a layer, we propose a general method for encoding ontology representation languages into RDF/RDF schema. We illustrate the extension method by applying it to Ontology Interchange Language (OIL), an ontology representation and inference language.2",
"title": ""
},
{
"docid": "9b40db1e69a3ad1cc2a1289791e82ae1",
"text": "As a nascent area of study, gamification has attracted the interest of researchers in several fields, but such researchers have scarcely focused on creating a theoretical foundation for gamification research. Gamification involves using gamelike features in non-game contexts to motivate users and improve performance outcomes. As a boundary-spanning subject by nature, gamification has drawn the interest of scholars from diverse communities, such as information systems, education, marketing, computer science, and business administration. To establish a theoretical foundation, we need to clearly define and explain gamification in comparison with similar concepts and areas of research. Likewise, we need to define the scope of the domain and develop a research agenda that explicitly considers theory’s important role. In this review paper, we set forth the pre-theoretical structures necessary for theory building in this area. Accordingly, we engaged an interdisciplinary group of discussants to evaluate and select the most relevant theories for gamification. Moreover, we developed exemplary research questions to help create a research agenda for gamification. We conclude that using a multi-theoretical perspective in creating a research agenda should help and encourage IS researchers to take a lead role in this promising and emerging area.",
"title": ""
},
{
"docid": "af48f00757d8e95d92facca57cd9d13c",
"text": "Remaining useful life (RUL) prediction allows for predictive maintenance of machinery, thus reducing costly unscheduled maintenance. Therefore, RUL prediction of machinery appears to be a hot issue attracting more and more attention as well as being of great challenge. This paper proposes a model-based method for predicting RUL of machinery. The method includes two modules, i.e., indicator construction and RUL prediction. In the first module, a new health indicator named weighted minimum quantization error is constructed, which fuses mutual information from multiple features and properly correlates to the degradation processes of machinery. In the second module, model parameters are initialized using the maximum-likelihood estimation algorithm and RUL is predicted using a particle filtering-based algorithm. The proposed method is demonstrated using vibration signals from accelerated degradation tests of rolling element bearings. The prediction result identifies the effectiveness of the proposed method in predicting RUL of machinery.",
"title": ""
},
{
"docid": "37a6d22411148cde4be4cb5a4dfe8bde",
"text": "When you write papers, how many times do you want to make some citations at a place but you are not sure which papers to cite? Do you wish to have a recommendation system which can recommend a small number of good candidates for every place that you want to make some citations? In this paper, we present our initiative of building a context-aware citation recommendation system. High quality citation recommendation is challenging: not only should the citations recommended be relevant to the paper under composition, but also should match the local contexts of the places citations are made. Moreover, it is far from trivial to model how the topic of the whole paper and the contexts of the citation places should affect the selection and ranking of citations. To tackle the problem, we develop a context-aware approach. The core idea is to design a novel non-parametric probabilistic model which can measure the context-based relevance between a citation context and a document. Our approach can recommend citations for a context effectively. Moreover, it can recommend a set of citations for a paper with high quality. We implement a prototype system in CiteSeerX. An extensive empirical evaluation in the CiteSeerX digital library against many baselines demonstrates the effectiveness and the scalability of our approach.",
"title": ""
},
{
"docid": "dbcfb877dae759f9ad1e451998d8df38",
"text": "Detection and tracking of humans in video streams is important for many applications. We present an approach to automatically detect and track multiple, possibly partially occluded humans in a walking or standing pose from a single camera, which may be stationary or moving. A human body is represented as an assembly of body parts. Part detectors are learned by boosting a number of weak classifiers which are based on edgelet features. Responses of part detectors are combined to form a joint likelihood model that includes an analysis of possible occlusions. The combined detection responses and the part detection responses provide the observations used for tracking. Trajectory initialization and termination are both automatic and rely on the confidences computed from the detection responses. An object is tracked by data association and meanshift methods. Our system can track humans with both inter-object and scene occlusions with static or non-static backgrounds. Evaluation results on a number of images and videos and comparisons with some previous methods are given.",
"title": ""
},
{
"docid": "b8a684360c407cdc9d89183985a7f187",
"text": "In an increasingly digital world, identifying signs of online extremism sits at the top of the priority list for counter-extremist agencies. Researchers and governments are investing in the creation of advanced information technologies to identify and counter extremism through intelligent large-scale analysis of online data. However, to the best of our knowledge, these technologies are neither based on, nor do they take advantage of, the existing theories and studies of radicalisation. In this paper we propose a computational approach for detecting and predicting the radicalisation influence a user is exposed to, grounded on the notion of 'roots of radicalisation' from social science models. This approach has been applied to analyse and compare the radicalisation level of 112 pro-ISIS vs.112 \"general\" Twitter users. Our results show the effectiveness of our proposed algorithms in detecting and predicting radicalisation influence, obtaining up to 0.9 F-1 measure for detection and between 0.7 and 0.8 precision for prediction. While this is an initial attempt towards the effective combination of social and computational perspectives, more work is needed to bridge these disciplines, and to build on their strengths to target the problem of online radicalisation.",
"title": ""
},
{
"docid": "20d186b7db540be57492daa805b51b31",
"text": "Printability, the capability of a 3D printer to closely reproduce a 3D model, is a complex decision involving several geometrical attributes like local thickness, shape of the thin regions and their surroundings, and topology with respect to thin regions. We present a method for assessment of 3D shape printability which efficiently and effectively computes such attributes. Our method uses a simple and efficient voxel-based representation and associated computations. Using tools from multi-scale morphology and geodesic analysis, we propose several new metrics for various printability problems. We illustrate our method with results taken from a real-life application.",
"title": ""
},
{
"docid": "b6d655df161d6c47675e9cb17173a521",
"text": "Nigeria is considered as one of the many countries in sub-Saharan Africa with a weak economy and gross deficiencies in technology and engineering. Available data from international monitoring and regulatory organizations show that technology is pivotal to determining the economic strengths of nations all over the world. Education is critical to technology acquisition, development, dissemination and adaptation. Thus, this paper seeks to critically assess and discuss issues and challenges facing technological advancement in Nigeria, particularly in the education sector, and also proffers solutions to resuscitate the Nigerian education system towards achieving national technological and economic sustainability such that Nigeria can compete favourably with other technologicallydriven economies of the world in the not-too-distant future. Keywords—Economically weak countries, education, globalization and competition, technological advancement.",
"title": ""
}
] |
scidocsrr
|
56db6ecda070b849e7b21a92bc018262
|
Learning Causal Graphs with Small Interventions
|
[
{
"docid": "959ba9c0929e36a8ef4a22a455ed947a",
"text": "The discovery of causal relationships between a set of observed variables is a fundamental problem in science. For continuous-valued data linear acyclic causal models with additive noise are often used because these models are well understood and there are well-known methods to fit them to data. In reality, of course, many causal relationships are more or less nonlinear, raising some doubts as to the applicability and usefulness of purely linear methods. In this contribution we show that the basic linear framework can be generalized to nonlinear models. In this extended framework, nonlinearities in the data-generating process are in fact a blessing rather than a curse, as they typically provide information on the underlying causal system and allow more aspects of the true data-generating mechanisms to be identified. In addition to theoretical results we show simulations and some simple real data experiments illustrating the identification power provided by nonlinearities.",
"title": ""
},
{
"docid": "b9c74367d813c8b821505bfea2c5946e",
"text": "This paper presents correct algorithms for answering the following two questions; (i) Does there exist a causal explanation con sistent with a set of background knowledge which explains all of the observed indepen dence facts in a sample? (ii) Given that there is such a causal explanation what are the causal relationships common to every such",
"title": ""
}
] |
[
{
"docid": "d65a047b3f381ca5039d75fd6330b514",
"text": "This paper presents an enhanced algorithm for matching laser scan maps using histogram correlations. The histogram representation effectively summarizes a map's salient features such that pairs of maps can be matched efficiently without any prior guess as to their alignment. The histogram matching algorithm has been enhanced in order to work well in outdoor unstructured environments by using entropy metrics, weighted histograms and proper thresholding of quality metrics. Thus our large-scale scan-matching SLAM implementation has a vastly improved ability to close large loops in real-time even when odometry is not available. Our experimental results have demonstrated a successful mapping of the largest area ever mapped to date using only a single laser scanner. We also demonstrate our ability to solve the lost robot problem by localizing a robot to a previously built map without any prior initialization.",
"title": ""
},
{
"docid": "487d1c9aa22c605d619414ecce3661bd",
"text": "Formation of dental caries is caused by the colonization and accumulation of oral microorganisms and extracellular polysaccharides that are synthesized from sucrose by glucosyltransferase of Streptococcus mutans. The production of glucosyltransferase from oral microorganisms was attempted, and it was found that Streptococcus mutans produced highest activity of the enzyme. Ethanolic extracts of propolis (EEP) were examined whether EEP inhibit the enzyme activity and growth of the bacteria or not. All EEP from various regions in Brazil inhibited both glucosyltransferase activity and growth of S. mutans, but one of the propolis from Rio Grande do Sul (RS2) demonstrated the highest inhibition of the enzyme activity and growth of the bacteria. It was also found that propolis (RS2) contained the highest concentrations of pinocembrin and galangin.",
"title": ""
},
{
"docid": "e162fcb6b897e941cd26558f4ed16cd5",
"text": "In this paper, we propose a novel real-valued time-delay neural network (RVTDNN) suitable for dynamic modeling of the baseband nonlinear behaviors of third-generation (3G) base-station power amplifiers (PA). Parameters (weights and biases) of the proposed model are identified using the back-propagation algorithm, which is applied to the input and output waveforms of the PA recorded under real operation conditions. Time- and frequency-domain simulation of a 90-W LDMOS PA output using this novel neural-network model exhibit a good agreement between the RVTDNN behavioral model's predicted results and measured ones along with a good generality. Moreover, dynamic AM/AM and AM/PM characteristics obtained using the proposed model demonstrated that the RVTDNN can track and account for the memory effects of the PAs well. These characteristics also point out that the small-signal response of the LDMOS PA is more affected by the memory effects than the PAs large-signal response when it is driven by 3G signals. This RVTDNN model requires a significantly reduced complexity and shorter processing time in the analysis and training procedures, when driven with complex modulated and highly varying envelope signals such as 3G signals, than previously published neural-network-based PA models.",
"title": ""
},
{
"docid": "1f355bd6b46e16c025ba72aa9250c61d",
"text": "Whole-cell biosensors have several advantages for the detection of biological substances and have proven to be useful analytical tools. However, several hurdles have limited whole-cell biosensor application in the clinic, primarily their unreliable operation in complex media and low signal-to-noise ratio. We report that bacterial biosensors with genetically encoded digital amplifying genetic switches can detect clinically relevant biomarkers in human urine and serum. These bactosensors perform signal digitization and amplification, multiplexed signal processing with the use of Boolean logic gates, and data storage. In addition, we provide a framework with which to quantify whole-cell biosensor robustness in clinical samples together with a method for easily reprogramming the sensor module for distinct medical detection agendas. Last, we demonstrate that bactosensors can be used to detect pathological glycosuria in urine from diabetic patients. These next-generation whole-cell biosensors with improved computing and amplification capacity could meet clinical requirements and should enable new approaches for medical diagnosis.",
"title": ""
},
{
"docid": "278ec426c504828f1f13e1cf1ce50e39",
"text": "Information retrieval, IR, is the science of extracting information from documents. It can be viewed in a number of ways: logical, probabilistic and vector space models are some of the most important. In this book, the author, one of the leading researchers in the area, shows how these three views can be combined in one mathematical framework, the very one used to formulate the general principles of quantum mechanics. Using this framework, van Rijsbergen presents a new theory for the foundations of IR, in particular a new theory of measurement. He shows how a document can be represented as a vector in Hilbert space, and the document’s relevance by an Hermitian operator. All the usual quantum-mechanical notions, such as uncertainty, superposition and observable, have their IR-theoretic analogues. But the approach is more than just analogy: the standard theorems can be applied to address problems in IR, such as pseudo-relevance feedback, relevance feedback and ostensive retrieval. The relation with quantum computing is also examined. To help keep the book self-contained, appendices with background material on physics and mathematics are included, and each chapter ends with some suggestions for further reading. This is an important book for all those working in IR, AI and natural language processing.",
"title": ""
},
{
"docid": "268e8d3b755d7579c2cbdee466622270",
"text": "This research is an attempt to illustrate the variables that are mentioned in the literature to deal with the unexpected future risks that are increasingly threatening the success of the large program. The research is a qualitative conceptualization using secondary data collection from the literature review and by criticizing it reaching a structural validation of the system dynamic simple model of how to increase the level of the stock of the unknown unknowns or the complexity chaotic knowledge for better risk management and creativity in achieving a competitive edge. The unknow-unknowns are still representing a black box and are under the control of the god act. This is a try only to concurrent and foreword adaptation with the unknown future. The manager can use this model to conceptualize the internal and external variables that can be linked to the business being objectives. By using this model the manager can minimized the side effects of the productivity and efficiency",
"title": ""
},
{
"docid": "d733f07d3b022ad8a7020c05292bcddd",
"text": "In Chapter 9 we discussed quality management models with examples of in-process metrics and reports. The models cover both the front-end design and coding activities and the back-end testing phases of development. The focus of the in-process data and reports, however, are geared toward the design review and code inspection data, although testing data is included. This chapter provides a more detailed discussion of the in-process metrics from the testing perspective. 1 These metrics have been used in the IBM Rochester software development laboratory for some years with continual evolution and improvement, so there is ample implementation experience with them. This is important because although there are numerous metrics for software testing, and new ones being proposed frequently, relatively few are supported by sufficient experiences of industry implementation to demonstrate their usefulness. For each metric, we discuss its purpose, data, interpretation , and use, and provide a graphic example based on real-life data. Then we discuss in-process quality management vis-à-vis these metrics and revisit the metrics 271 1. This chapter is a modified version of a white paper written for the IBM corporate-wide Software Test Community Leaders (STCL) group, which was published as \" In-process Metrics for Software Testing, \" in",
"title": ""
},
{
"docid": "36874bcbbea1563542265cf2c6261ede",
"text": "Given the tremendous growth of online videos, video thumbnail, as the common visualization form of video content, is becoming increasingly important to influence user's browsing and searching experience. However, conventional methods for video thumbnail selection often fail to produce satisfying results as they ignore the side semantic information (e.g., title, description, and query) associated with the video. As a result, the selected thumbnail cannot always represent video semantics and the click-through rate is adversely affected even when the retrieved videos are relevant. In this paper, we have developed a multi-task deep visual-semantic embedding model, which can automatically select query-dependent video thumbnails according to both visual and side information. Different from most existing methods, the proposed approach employs the deep visual-semantic embedding model to directly compute the similarity between the query and video thumbnails by mapping them into a common latent semantic space, where even unseen query-thumbnail pairs can be correctly matched. In particular, we train the embedding model by exploring the large-scale and freely accessible click-through video and image data, as well as employing a multi-task learning strategy to holistically exploit the query-thumbnail relevance from these two highly related datasets. Finally, a thumbnail is selected by fusing both the representative and query relevance scores. The evaluations on 1,000 query-thumbnail dataset labeled by 191 workers in Amazon Mechanical Turk have demonstrated the effectiveness of our proposed method.",
"title": ""
},
{
"docid": "888e8f68486c08ffe538c46ba76de85c",
"text": "Neural ranking models for information retrieval (IR) use shallow or deep neural networks to rank search results in response to a query. Traditional learning to rank models employ machine learning techniques over hand-crafted IR features. By contrast, neural models learn representations of language from raw text that can bridge the gap between query and document vocabulary. Unlike classical IR models, these new machine learning based approaches are data-hungry, requiring large scale training data before they can be deployed. This tutorial introduces basic concepts and intuitions behind neural IR models, and places them in the context of traditional retrieval models. We begin by introducing fundamental concepts of IR and different neural and non-neural approaches to learning vector representations of text. We then review shallow neural IR methods that employ pre-trained neural term embeddings without learning the IR task end-to-end. We introduce deep neural networks next, discussing popular deep architectures. Finally, we review the current DNN models for information retrieval. We conclude with a discussion on potential future directions for neural IR.",
"title": ""
},
{
"docid": "06f6ffa9c1c82570b564e1cd0f719950",
"text": "Widespread use of biometric architectures implies the need to secure highly sensitive data to respect the privacy rights of the users. In this paper, we discuss the following question: To what extent can biometric designs be characterized as Privacy Enhancing Technologies? The terms of privacy and security for biometric schemes are defined, while current regulations for the protection of biometric information are presented. Additionally, we analyze and compare cryptographic techniques for secure biometric designs. Finally, we introduce a privacy-preserving approach for biometric authentication in mobile electronic financial applications. Our model utilizes the mechanism of pseudonymous biometric identities for secure user registration and authentication. We discuss how the privacy requirements for the processing of biometric data can be met in our scenario. This work attempts to contribute to the development of privacy-by-design biometric technologies.",
"title": ""
},
{
"docid": "d245fbc12d9a7d36751e3b75d9eb0e62",
"text": "What makes for an explanation of \"black box\" AI systems such as Deep Nets? We reviewed the pertinent literatures on explanation and derived key ideas. This set the stage for our empirical inquiries, which include conceptual cognitive modeling, the analysis of a corpus of cases of \"naturalistic explanation\" of computational systems, computational cognitive modeling, and the development of measures for performance evaluation. The purpose of our work is to contribute to the program of research on “Explainable AI.” In this report we focus on our initial synthetic modeling activities and the development of measures for the evaluation of explainability in human-machine work systems. INTRODUCTION The importance of explanation in AI has been emphasized in the popular press, with considerable discussion of the explainability of Deep Nets and Machine Learning systems (e.g., Kuang, 2017). For such “black box” systems, there is a need to explain how they work so that users and decision makers can develop appropriate trust and reliance. As an example, referencing Figure 1, a Deep Net that we created was trained to recognize types of tools. Figure 1. Some examples of Deep Net classification. Outlining the axe and overlaying bird silhouettes on it resulted in a confident misclassification. While a fuzzy hammer is correctly classified, an embossed rendering is classified as a saw. Deep Nets can classify with high hit rates for images that fall within the variation of their training sets, but are nonetheless easily spoofed using instances that humans find easy to classify. Furthermore, Deep Nets have to provide some classification for an input. Thus, a Volkswagen might be classified as a tulip by a Deep Net trained to recognize types of flowers. So, if Deep Nets do not actually possess human-semantic concepts (e.g., that axes have things that humans call \"blades\"), what do the Deep Nets actually \"see\"? And more directly, how can users be enabled to develop appropriate trust and reliance on these AI systems? Articles in the popular press highlight the successes of Deep Nets (e.g., the discovery of planetary systems in Hubble Telescope data; Temming 2018), and promise diverse applications \"... the recognition of faces, handwriting, speech... navigation and control of autonomous vehicles... it seems that neural networks are being used everywhere\" (Lucky, 2018, p. 24). And yet \"models are more complex and less interpretable than ever... Justifying [their] decisions will only become more crucial\" (Biran and Cotton, 2017, p. 4). Indeed, a proposed regulation before the European Union (Goodman and Flaxman, 2016) asserts that users have the \"right to an explanation.” What form must an explanation for Deep Nets take? This is a challenge in the DARPA \"Explainable AI\" (XAI) Program: To develop AI systems that can engage users in a process in which the mechanisms and \"decisions\" of the AI are explained. Our tasks on the Program are to: (1). Integrate philosophical studies and psychological research in order to identify consensus points, key concepts and key variables of explanatory reasoning, (2). Develop and validate measures of explanation goodness, explanation satisfaction, mental models and human-XAI performance, (3) Develop and evaluate a computational model of how people understand computational devices, and C op yr ig ht 2 01 8 by H um an F ac to rs a nd E rg on om ic s So ci et y. D O I 1 0. 
11 77 /1 54 19 31 21 86 21 04 7 Proceedings of the Human Factors and Ergonomics Society 2018 Annual Meeting 197",
"title": ""
},
{
"docid": "87e2d691570403ae36e0a9a87099ad71",
"text": "Audiovisual translation is one of several overlapping umbrella terms that include ‘media translation’, ‘multimedia translation’, ‘multimodal translation’ and ‘screen translation’. These different terms all set out to cover the interlingual transfer of verbal language when it is transmitted and accessed both visually and acoustically, usually, but not necessarily, through some kind of electronic device. Theatrical plays and opera, for example, are clearly audiovisual yet, until recently, audiences required no technological devices to access their translations; actors and singers simply acted and sang the translated versions. Nowadays, however, opera is frequently performed in the original language with surtitles in the target language projected on to the stage. Furthermore, electronic librettos placed on the back of each seat containing translations are now becoming widely available. However, to date most research in audiovisual translation has been dedicated to the field of screen translation, which, while being both audiovisual and multimedial in nature, is specifically understood to refer to the translation of films and other products for cinema, TV, video and DVD. After the introduction of the first talking pictures in the 1920s a solution needed to be found to allow films to circulate despite language barriers. How to translate film dialogues and make movie-going accessible to speakers of all languages was to become a major concern for both North American and European film directors. Today, of course, screens are no longer restricted to cinema theatres alone. Television screens, computer screens and a series of devices such as DVD players, video game consoles, GPS navigation devices and mobile phones are also able to send out audiovisual products to be translated into scores of languages. Hence, strictly speaking, screen translation includes translations for any electronic appliance with a screen; however, for the purposes of this chapter, the term will be used mainly to refer to translations for the most popular products, namely for cinema, TV, video and DVD, and videogames. The two most widespread modalities adopted for translating products for the screen are dubbing and subtitling.1 Dubbing is a process which uses the acoustic channel for translational purposes, while subtitling is visual and involves a written translation that is superimposed on to the",
"title": ""
},
{
"docid": "f8dd52b08b71042b49be42dd46ca44e3",
"text": "Friendships are dynamic. Previous studies have converged to suggest that social interactions, in both online and offline social networks, are diagnostic reflections of friendship relations (also called social ties). However, most existing approaches consider a social tie as either a binary relation, or a fixed value (named tie strength). In this paper, we investigate the dynamics of dyadic friend relationships through online social interactions, in terms of a variety of aspects, such as reciprocity, temporality, and contextuality. In turn, we propose a model to predict repliers and retweeters given a particular tweet posted at a certain time in a microblog-based social network. More specifically, we have devised a learning-to-rank approach to train a ranker that considers elaborate user-level and tweet-level features (like sentiment, self-disclosure, and responsiveness) to address these dynamics. In the prediction phase, a tweet posted by a user is deemed a query and the predicted repliers/retweeters are retrieved using the learned ranker. We have collected a large dataset containing 73.3 million dyadic relationships with their interactions (replies and retweets). Extensive experimental results based on this dataset show that by incorporating the dynamics of friendship relations, our approach significantly outperforms state-of-the-art models in terms of multiple evaluation metrics, such as MAP, NDCG and Topmost Accuracy. In particular, the advantage of our model is even more promising in predicting the exact sequence of repliers/retweeters considering their orders. Furthermore, the proposed approach provides emerging implications for many high-value applications in online social networks.",
"title": ""
},
{
"docid": "01f8616cafa72c473e33f149faff044a",
"text": "We show that the e-commerce domain can provide all the right ingredients for successful data mining and claim that it is a killer domain for data mining. We describe an integrated architecture, based on our experience at Blue Martini Software, for supporting this integration. The architecture can dramatically reduce the pre-processing, cleaning, and data understanding effort often documented to take 80% of the time in knowledge discovery projects. We emphasize the need for data collection at the application server layer (not the web server) in order to support logging of data and metadata that is essential to the discovery process. We describe the data transformation bridges required from the transaction processing systems and customer event streams (e.g., clickstreams) to the data warehouse. We detail the mining workbench, which needs to provide multiple views of the data through reporting, data mining algorithms, visualization, and OLAP. We conclude with a set of challenges.",
"title": ""
},
{
"docid": "d0b287d0bd41dedbbfa3357653389e9c",
"text": "Credit scoring model have been developed by banks and researchers to improve the process of assessing credit worthiness during the credit evaluation process. The objective of credit scoring models is to assign credit risk to either a ‘‘good risk’’ group that is likely to repay financial obligation or a ‘‘bad risk’’ group who has high possibility of defaulting on the financial obligation. Construction of credit scoring models requires data mining techniques. Using historical data on payments, demographic characteristics and statistical techniques, credit scoring models can help identify the important demographic characteristics related to credit risk and provide a score for each customer. This paper illustrates using data mining to improve assessment of credit worthiness using credit scoring models. Due to privacy concerns and unavailability of real financial data from banks this study applies the credit scoring techniques using data of payment history of members from a recreational club. The club has been facing a problem of rising number in defaulters in their monthly club subscription payments. The management would like to have a model which they can deploy to identify potential defaulters. The classification performance of credit scorecard model, logistic regression model and decision tree model were compared. The classification error rates for credit scorecard model, logistic regression and decision tree were 27.9%, 28.8% and 28.1%, respectively. Although no model outperforms the other, scorecards are relatively much easier to deploy in practical applications. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
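To make the model comparison in the record above concrete, here is a minimal sketch that contrasts the classification error of logistic regression and a decision tree on a synthetic, imbalanced data set. The club's real payment-history data is not public, so the generated features, class balance, and tree depth are all assumptions for illustration; a scorecard model would typically be built on top of such features by binning them and assigning points per bin.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for member payment histories: 12 features, roughly 30% "bad risk" cases.
X, y = make_classification(n_samples=2000, n_features=12, weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("decision tree", DecisionTreeClassifier(max_depth=4, random_state=0))]:
    error = 1.0 - clf.fit(X_tr, y_tr).score(X_te, y_te)   # classification error on held-out data
    print(f"{name}: classification error = {error:.3f}")
```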
{
"docid": "8920b9fbfe010af17e664c0b62c8e0a2",
"text": "The field of machine learning is an interesting and relatively new area of research in artificial intelligence. In this paper, a special type of reinforcement learning, Q-Learning, was applied to the popular mobile game Flappy Bird. The QLearning algorithm was tested on two different environments. The original version and a simplified version. The maximum score achieved on the original version and simplified version were 169 and 28,851, respectively. The trade-off between runtime and accuracy was investigated. Using appropriate settings, the Q-Learning algorithm was proven to be successful with a relatively quick convergence time.",
"title": ""
},
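The core of the approach in the record above is the one-step Q-learning backup; a minimal sketch follows. The discretised game state (e.g., horizontal and vertical distance to the next pipe), the two-action set, and the hyper-parameter values are assumptions for illustration, not the settings used in the paper.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.7, 0.95, 0.05     # learning rate, discount, exploration (illustrative)
ACTIONS = (0, 1)                            # 0 = do nothing, 1 = flap
Q = defaultdict(float)                      # Q[(state, action)] -> estimated return

def choose_action(state):
    """Epsilon-greedy action selection over the current Q estimates."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning backup toward reward + gamma * max_a' Q(s', a')."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```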
{
"docid": "20b00a2cc472dfec851f4aea42578a9e",
"text": "The self-regulatory strength model maintains that all acts of self-regulation, self-control, and choice result in a state of fatigue called ego-depletion. Self-determination theory differentiates between autonomous regulation and controlled regulation. Because making decisions represents one instance of self-regulation, the authors also differentiate between autonomous choice and controlled choice. Three experiments support the hypothesis that whereas conditions representing controlled choice would be egodepleting, conditions that represented autonomous choice would not. In Experiment 3, the authors found significant mediation by perceived self-determination of the relation between the choice condition (autonomous vs. controlled) and ego-depletion as measured by performance.",
"title": ""
},
{
"docid": "39fa66b86ca91c54a2d2020f04ecc7ba",
"text": "We use mechanical translation of a coded aperture for code division multiple access compression of video. We discuss the compressed video's temporal resolution and present experimental results for reconstructions of > 10 frames of temporal data per coded snapshot.",
"title": ""
},
{
"docid": "3d53c601d921ea849f6ea3a3a194dae7",
"text": "The popularity of caffeine as a psychoactive drug is due to its stimulant properties, which depend on its ability to reduce adenosine transmission in the brain. Adenosine A1 and A2A receptors are expressed in the basal ganglia, a group of structures involved in various aspects of motor control. Caffeine acts as an antagonist to both types of receptors. Increasing evidence indicates that the psychomotor stimulant effect of caffeine is generated by affecting a particular group of projection neurons located in the striatum, the main receiving area of the basal ganglia. These cells express high levels of adenosine A2A receptors, which are involved in various intracellular processes, including the expression of immediate early genes and regulation of the dopamine- and cyclic AMP-regulated 32-kDa phosphoprotein DARPP-32. The present review focuses on the effects of caffeine on striatal signal transduction and on their involvement in caffeine-mediated motor stimulation.",
"title": ""
},
{
"docid": "c302699cb7dec9f813117bfe62d3b5fb",
"text": "Pipe networks constitute the means of transporting fluids widely used nowadays. Increasing the operational reliability of these systems is crucial to minimize the risk of leaks, which can cause serious pollution problems to the environment and have disastrous consequences if the leak occurs near residential areas. Considering the importance in developing efficient systems for detecting leaks in pipelines, this work aims to detect the characteristic frequencies (predominant) in case of leakage and no leakage. The methodology consisted of capturing the experimental data through a microphone installed inside the pipeline and coupled to a data acquisition card and a computer. The Fast Fourier Transform (FFT) was used as the mathematical approach to the signal analysis from the microphone, generating a frequency response (spectrum) which reveals the characteristic frequencies for each operating situation. The tests were carried out using distinct sizes of leaks, situations without leaks and cases with blows in the pipe caused by metal instruments. From the leakage tests, characteristic peaks were found in the FFT frequency spectrum using the signal generated by the microphone. Such peaks were not observed in situations with no leaks. Therewith, it was realized that it was possible to distinguish, through spectral analysis, an event of leakage from an event without leakage.",
"title": ""
}
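As a sketch of the spectral-analysis step described above, the following function windows a microphone recording, takes its FFT, and returns the strongest spectral peaks. The sampling rate, the Hann window, and the synthetic 3 kHz "leak" tone are assumptions for the example, not values from the experiment.

```python
import numpy as np

def dominant_frequencies(signal, fs, n_peaks=5):
    """Return the n_peaks strongest (frequency in Hz, magnitude) pairs of a recording."""
    signal = np.asarray(signal, dtype=float) - np.mean(signal)        # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))  # windowed magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    idx = np.argsort(spectrum)[::-1][:n_peaks]
    return sorted(zip(freqs[idx], spectrum[idx]), key=lambda p: -p[1])

# Example: one second of a synthetic 3 kHz tone plus noise, sampled at 48 kHz.
fs = 48_000
t = np.arange(fs) / fs
print(dominant_frequencies(np.sin(2 * np.pi * 3000 * t) + 0.1 * np.random.randn(fs), fs))
```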
] |
scidocsrr
|
4d9a080d7303d0cccc6378aca2e3dcbf
|
NormAD - Normalized Approximate Descent based supervised learning rule for spiking neurons
|
[
{
"docid": "72839a67032eba63246dd2bdf5799f75",
"text": "We use a supervised multi-spike learning algorithm for spiking neural networks (SNNs) with temporal encoding to simulate the learning mechanism of biological neurons in which the SNN output spike trains are encoded by firing times. We first analyze why existing gradient-descent-based learning methods for SNNs have difficulty in achieving multi-spike learning. We then propose a new multi-spike learning method for SNNs based on gradient descent that solves the problems of error function construction and interference among multiple output spikes during learning. The method could be widely applied to single spiking neurons to learn desired output spike trains and to multilayer SNNs to solve classification problems. By overcoming learning interference among multiple spikes, our method has high learning accuracy when there are a relatively large number of output spikes in need of learning. We also develop an output encoding strategy with respect to multiple spikes for classification problems. This effectively improves the classification accuracy of multi-spike learning compared to that of single-spike learning.",
"title": ""
},
{
"docid": "af3a87d82c1f11a8a111ed4276020161",
"text": "In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.",
"title": ""
}
] |
[
{
"docid": "7784c8c4592acd98f47481fd417d8837",
"text": "Mastering Structured Data on the Semantic Web explains the practical aspects and the theory behind the Semantic Web and how structured data, such as HTML5 Microdata and JSON-LD annotations, can be used to improve your site’s performance on next-generation Search Engine Result Pages and be displayed on Google Knowledge Panels. You will learn how to represent data in a machineinterpretable form, using the Resource Description Framework (RDF), the cornerstone of the Semantic Web. You will see how to store and manipulate RDF data in ways that benefit Big Data applications, such as the Google Knowledge Graph, Wikidata, or Facebook’s Social Graph. The book also covers the most important tools for manipulating RDF data, including, but not limited to, Protégé, TopBraid Composer, Sindice, Apache Marmotta, Callimachus, and Tabulator. You will learn to use the Apache Jena and Sesame APIs for rapid Semantic Web application development. Mastering Structured Data on the Semantic Web demonstrates how to create and interlink five-star Linked Open Data to reach a wider audience, encourage data reuse, and provide content that can be automatically processed with full certainty. The book is for web developers and search engine optimization (SEO) experts who want to learn state-of-the-art SEO methods. The book will also benefit researchers interested in automatic knowledge discovery.",
"title": ""
},
{
"docid": "32ef354fff832d438e02ce5800f0909f",
"text": "In this fast life, everyone is in hurry to reach their destinations. In this case waiting for the buses is not reliable. People who rely on the public transport their major concern is to know the real time location of the bus for which they are waiting for and the time it will take to reach their bus stop. This information helps people in making better travelling decisions. This paper gives the major challenges in the public transport system and discuses various approaches to intelligently manage it. Current position of the bus is acquired by integrating GPS device on the bus and coordinates of the bus are sent by either GPRS service provided by GSM networks or SMS or RFID. GPS device is enabled on the tracking device and this information is sent to centralized control unit or directly at the bus stops using RF receivers. This system is further integrated with the historical average speeds of each segment. This is done to improve the accuracy by including the factors like volume of traffic, crossings in each segment, day and time of day. People can track information using LEDs at bus stops, SMS, web application or Android application. GPS coordinates of the bus when sent to the centralized server where various arrival time estimation algorithms are applied using historical speed patterns.",
"title": ""
},
{
"docid": "c91578cf52a01e23bd8229d02d2d9a07",
"text": "This paper explores the effectiveness of machine learning techniques in detecting firms that issue fraudulent financial statements (FFS) and deals with the identification of factors associated to FFS. To this end, a number of experiments have been conducted using representative learning algorithms, which were trained using a data set of 164 fraud and non-fraud Greek firms in the recent period 2001-2002. The decision of which particular method to choose is a complicated problem. A good alternative to choosing only one method is to create a hybrid forecasting system incorporating a number of possible solution methods as components (an ensemble of classifiers). For this purpose, we have implemented a hybrid decision support system that combines the representative algorithms using a stacking variant methodology and achieves better performance than any examined simple and ensemble method. To sum up, this study indicates that the investigation of financial information can be used in the identification of FFS and underline the importance of financial ratios. Keywords—Machine learning, stacking, classifier.",
"title": ""
},
{
"docid": "1cbac59380ee798a621d58a6de35361f",
"text": "With the fast development of modern power semiconductors in the last years, the development of current measurement technologies has to adapt to this evolution. The challenge for the power electronic engineer is to provide a current sensor with a high bandwidth and a high immunity against external interferences. Rogowski current transducers are popular for monitoring transient currents in power electronic applications without interferences caused by external magnetic fields. But the trend of even higher current and voltage gradients generates a dilemma regarding the Rogowski current transducer technology. On the one hand, a high current gradient requires a current sensor with a high bandwidth. On the other hand, high voltage gradients forces to use a shielding around the Rogowski coil in order to protect the measurement signal from a capacitive displacement current caused by an unavoidable capacitive coupling to the setup, which reduces the bandwidth substantially. This paper presents a new Rogowski coil design which allows to measure high current gradients close to high voltage gradients without interferences and without reducing the bandwidth by a shielding. With this new measurement technique, it is possible to solve the mentioned dilemma and to get ready to measure the current of modern power semiconductors such as SiC and GaN with a Rogowski current transducer.",
"title": ""
},
{
"docid": "15731cee350b1934f2e9ef9fd218a478",
"text": "In this paper we study a graph kernel for RDF based on constructing a tree for each instance and counting the number of paths in that tree. In our experiments this kernel shows comparable classification performance to the previously introduced intersection subtree kernel, but is significantly faster in terms of computation time. Prediction performance is worse than the state-of-the-art Weisfeiler Lehman RDF kernel, but our kernel is a factor 10 faster to compute. Thus, we consider this kernel a very suitable baseline for learning from RDF data. Furthermore, we extend this kernel to handle RDF literals as bag-ofwords feature vectors, which increases performance in two of the four experiments.",
"title": ""
},
{
"docid": "6ac996c20f036308f36c7b667babe876",
"text": "Patents are a very useful source of technical information. The public availability of patents over the Internet, with for some databases (eg. Espacenet) the assurance of a constant format, allows the development of high value added products using this information source and provides an easy way to analyze patent information. This simple and powerful tool facilitates the use of patents in academic research, in SMEs and in developing countries providing a way to use patents as a ideas resource thus improving technological innovation.",
"title": ""
},
{
"docid": "3a18b210d3e9f0f0cf883953b8fdd242",
"text": "Short-term traffic forecasting is becoming more important in intelligent transportation systems. The k-nearest neighbours (kNN) method is widely used for short-term traffic forecasting. However, the self-adjustment of kNN parameters has been a problem due to dynamic traffic characteristics. This paper proposes a fully automatic dynamic procedure kNN (DP-kNN) that makes the kNN parameters self-adjustable and robust without predefined models or training for the parameters. A real-world dataset with more than one year traffic records is used to conduct experiments. The results show that DP-kNN can perform better than manually adjusted kNN and other benchmarking methods in terms of accuracy on average. This study also discusses the difference between holiday and workday traffic prediction as well as the usage of neighbour distance measurement.",
"title": ""
},
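A bare-bones k-nearest-neighbours forecaster of the kind the DP-kNN record above builds on is sketched below: the most recent window of traffic values is compared against historical windows, and the successors of the k closest matches are averaged. The window length, k, and the synthetic flow series are assumptions for illustration; the paper's contribution is making such parameters self-adjusting rather than fixed.

```python
import numpy as np

def knn_forecast(history, horizon=1, window=6, k=5):
    """Predict the next `horizon` values from the k nearest historical windows (Euclidean distance)."""
    history = np.asarray(history, dtype=float)
    query = history[-window:]
    candidates, targets = [], []
    for i in range(len(history) - window - horizon + 1):
        candidates.append(history[i:i + window])
        targets.append(history[i + window:i + window + horizon])
    dist = np.linalg.norm(np.asarray(candidates) - query, axis=1)
    nearest = np.argsort(dist)[:k]
    return np.asarray(targets)[nearest].mean(axis=0)   # average of the neighbours' successors

# e.g. simulated 5-minute traffic flow counts with a cyclic pattern plus noise
flows = 100 + 20 * np.sin(np.linspace(0, 20 * np.pi, 2000)) + np.random.randn(2000)
print(knn_forecast(flows, horizon=3))
```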
{
"docid": "089273886a4d7ea591a7d631042be92b",
"text": "Student learning and academic performance hinge largely on frequency of class attendance and participation. The fingerprint recognition system aims at providing an accurate and efficient attendance management service to staff and students within an existing portal system. The integration of a unique and accurate identification system into the existing portal system offers at least, two advantages: accurate and efficient analysis and reporting of student attendance on a continuous basis; and also facilitating the provision of personalised services, enhancing user experience altogether. An integrated portal system was developed to automate attendance management and tested for fifty students in five attempts. The 98% accuracy achieved by the system points to the feasibility of large scale deployment and interoperability of multiple devices using existing technology infrastructure.",
"title": ""
},
{
"docid": "18c56a11d0a2430f1e2c10ee0bf84c7d",
"text": "In this paper we consider a notion of pointwise Kan extension in double categories that naturally generalises Dubuc’s notion of pointwise Kan extension along enriched functors. We show that, when considered in equipments that admit opcartesian tabulations, it generalises Street’s notion of pointwise Kan extension in 2-categories. Introduction A useful construction in classical category theory is that of right Kan extension along functors and, dually, that of left Kan extension along functors. Many important notions, including that of limit and right adjoint functor, can be regarded as right Kan extensions. On the other hand right Kan extensions can often be constructed out of limits; such Kan extensions are called pointwise. It is this notion of Kan extension that was extended to more general settings, firstly by Dubuc in [Dub70], to a notion of pointwise Kan extension along V-functors, between categories enriched in some suitable category V , and later by Street in [Str74], to a notion of pointwise Kan extension along morphisms in any 2-category. It is unfortunate that Street’s notion, when considered in the 2-category V-Cat of V-enriched categories, does not agree with Dubuc’s notion of pointwise Kan extension, but is stronger in general. In this paper we show that by moving from 2-categories to double categories it is possible to unify Dubuc’s and Street’s notion of pointwise Kan extension. In §1 we recall the notion of double category, which generalises that of 2-category by considering, instead of a single type, two types of morphism. For example one can consider both ring homomorphisms and bimodules between rings. One type of morphism is drawn vertically and the other horizontally so that cells in a double category, which have both a horizontal and vertical morphism as source and as target, are shaped like squares. Every double category K contains a 2-category V (K) consisting of the objects and vertical morphisms of K, as well as cells whose horizontal source and target are identities. Many of the results in this paper first appeared as part of my PhD thesis “Algebraic weighted colimits” that was written under the guidance of Simon Willerton. I would like to thank Simon for his advice and encouragement. Also I thank the anonymous referee for helpful suggestions, and the University of Sheffield for its financial support of my PhD studies. Received by the editors 2014-02-05 and, in revised form, 2014-11-03. Transmitted by R. Paré. Published on 2014-11-06. 2010 Mathematics Subject Classification: 18D05, 18A40, 18D20.",
"title": ""
},
{
"docid": "f2d8ee741a61b1f950508ac57b2aa379",
"text": "The concentrations of cellulose chemical markers, in oil, are influenced by various parameters due to the partition between the oil and the cellulose insulation. One major parameter is the oil temperature which is a function of the transformer load, ambient temperature and the type of cooling. To accurately follow the chemical markers concentration trends during all the transformer life, it is crucial to normalize the concentrations at a specific temperature. In this paper, we propose equations for the normalization of methanol, ethanol and 2-furfural at 20 °C. The proposed equations have been validated on some real power transformers.",
"title": ""
},
{
"docid": "0c90537f2b470354c2328c567e053ee2",
"text": "BACKGROUND\nCombination antiplatelet therapy with clopidogrel and aspirin may reduce the rate of recurrent stroke during the first 3 months after a minor ischemic stroke or transient ischemic attack (TIA). A trial of combination antiplatelet therapy in a Chinese population has shown a reduction in the risk of recurrent stroke. We tested this combination in an international population.\n\n\nMETHODS\nIn a randomized trial, we assigned patients with minor ischemic stroke or high-risk TIA to receive either clopidogrel at a loading dose of 600 mg on day 1, followed by 75 mg per day, plus aspirin (at a dose of 50 to 325 mg per day) or the same range of doses of aspirin alone. The dose of aspirin in each group was selected by the site investigator. The primary efficacy outcome in a time-to-event analysis was the risk of a composite of major ischemic events, which was defined as ischemic stroke, myocardial infarction, or death from an ischemic vascular event, at 90 days.\n\n\nRESULTS\nA total of 4881 patients were enrolled at 269 international sites. The trial was halted after 84% of the anticipated number of patients had been enrolled because the data and safety monitoring board had determined that the combination of clopidogrel and aspirin was associated with both a lower risk of major ischemic events and a higher risk of major hemorrhage than aspirin alone at 90 days. Major ischemic events occurred in 121 of 2432 patients (5.0%) receiving clopidogrel plus aspirin and in 160 of 2449 patients (6.5%) receiving aspirin plus placebo (hazard ratio, 0.75; 95% confidence interval [CI], 0.59 to 0.95; P=0.02), with most events occurring during the first week after the initial event. Major hemorrhage occurred in 23 patients (0.9%) receiving clopidogrel plus aspirin and in 10 patients (0.4%) receiving aspirin plus placebo (hazard ratio, 2.32; 95% CI, 1.10 to 4.87; P=0.02).\n\n\nCONCLUSIONS\nIn patients with minor ischemic stroke or high-risk TIA, those who received a combination of clopidogrel and aspirin had a lower risk of major ischemic events but a higher risk of major hemorrhage at 90 days than those who received aspirin alone. (Funded by the National Institute of Neurological Disorders and Stroke; POINT ClinicalTrials.gov number, NCT00991029 .).",
"title": ""
},
{
"docid": "39971848bd1020694676b530b3e6540b",
"text": "This paper presents an unequal Wilkinson power divider operating at arbitrary dual band without reactive components (such as inductors and capacitors). To satisfy the unequal characteristic, a novel structure is proposed with two groups of transmission lines and two parallel stubs. Closed-form equations containing all parameters of this structure are derived based on circuit theory and transmission line theory. For verification, two groups of experimental results including open and short stubs are presented. It can be found that all the analytical features of this unequal power divider can be fulfilled at arbitrary dual band simultaneously.",
"title": ""
},
{
"docid": "13d1b0637c12d617702b4f80fd7874ef",
"text": "Linear-time algorithms for testing the planarity of a graph are well known for over 35 years. However, these algorithms are quite involved and recent publications still try to give simpler linear-time tests. We give a conceptually simple reduction from planarity testing to the problem of computing a certain construction of a 3-connected graph. This implies a linear-time planarity test. Our approach is radically different from all previous linear-time planarity tests; as key concept, we maintain a planar embedding that is 3-connected at each point in time. The algorithm computes a planar embedding if the input graph is planar and a Kuratowski-subdivision otherwise.",
"title": ""
},
{
"docid": "37a6d22411148cde4be4cb5a4dfe8bde",
"text": "When you write papers, how many times do you want to make some citations at a place but you are not sure which papers to cite? Do you wish to have a recommendation system which can recommend a small number of good candidates for every place that you want to make some citations? In this paper, we present our initiative of building a context-aware citation recommendation system. High quality citation recommendation is challenging: not only should the citations recommended be relevant to the paper under composition, but also should match the local contexts of the places citations are made. Moreover, it is far from trivial to model how the topic of the whole paper and the contexts of the citation places should affect the selection and ranking of citations. To tackle the problem, we develop a context-aware approach. The core idea is to design a novel non-parametric probabilistic model which can measure the context-based relevance between a citation context and a document. Our approach can recommend citations for a context effectively. Moreover, it can recommend a set of citations for a paper with high quality. We implement a prototype system in CiteSeerX. An extensive empirical evaluation in the CiteSeerX digital library against many baselines demonstrates the effectiveness and the scalability of our approach.",
"title": ""
},
{
"docid": "d7e2d2d3d25d7c4d09e348b93be23011",
"text": "Bandit based methods for tree search have recently gained popularity when applied to huge trees, e.g. in the game of go [6]. Their efficient exploration of the tree enables to return rapidly a good value, and improve precision if more time is provided. The UCT algorithm [8], a tree search method based on Upper Confidence Bounds (UCB) [2], is believed to adapt locally to the effective smoothness of the tree. However, we show that UCT is “over-optimistic” in some sense, leading to a worst-case regret that may be very poor. We propose alternative bandit algorithms for tree search. First, a modification of UCT using a confidence sequence that scales exponentially in the horizon depth is analyzed. We then consider Flat-UCB performed on the leaves and provide a finite regret bound with high probability. Then, we introduce and analyze a Bandit Algorithm for Smooth Trees (BAST) which takes into account actual smoothness of the rewards for performing efficient “cuts” of sub-optimal branches with high confidence. Finally, we present an incremental tree expansion which applies when the full tree is too big (possibly infinite) to be entirely represented and show that with high probability, only the optimal branches are indefinitely developed. We illustrate these methods on a global optimization problem of a continuous function, given noisy values.",
"title": ""
},
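The UCT family of methods discussed above selects branches with the UCB1 index, i.e. mean reward plus an exploration bonus that shrinks with the visit count. A minimal sketch of that selection rule follows; the child-statistics dictionaries and the exploration constant are assumptions for illustration.

```python
import math

def ucb1_select(children, exploration=math.sqrt(2)):
    """Pick the child maximising mean reward + c * sqrt(ln(total visits) / child visits)."""
    total_visits = sum(c["visits"] for c in children)

    def index(c):
        if c["visits"] == 0:
            return float("inf")                       # force each child to be tried at least once
        mean = c["reward_sum"] / c["visits"]
        return mean + exploration * math.sqrt(math.log(total_visits) / c["visits"])

    return max(children, key=index)

children = [{"visits": 10, "reward_sum": 6.0},
            {"visits": 3, "reward_sum": 2.5},
            {"visits": 0, "reward_sum": 0.0}]
print(ucb1_select(children))                          # the unvisited child is selected first
```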
{
"docid": "727a97b993098aa1386e5bfb11a99d4b",
"text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance.",
"title": ""
},
{
"docid": "a48d3b21d1d1e3e7e069a46aa17df7ef",
"text": "The linear step-up multiple testing procedure controls the False Discovery Rate (FDR) at the desired level q for independent and positively dependent test statistics. When all null hypotheses are true, and the test statistics are independent and continuous, the bound is sharp. When some of the null hypotheses are not true, the procedure is conservative by a factor which is the proportion m0/m of the true null hypotheses among the hypotheses. We provide a new two-stage procedure in which the linear step-up procedure is used in stage one to estimate m0, providing a new level q′ which is used in the linear step-up procedure in the second stage. We prove that a general form of the two-stage procedure controls the FDR at the desired level q. This framework enables us to study analytically the properties of other procedures that exist in the literature. A simulation study is presented that shows that two-stage adaptive procedures improve in power over the original procedure, mainly because they provide tighter control of the FDR. We further study the performance of the current suggestions, some variations of the procedures, and previous suggestions, in the case where the test statistics are positively dependent, a case for which the original procedure controls",
"title": ""
},
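For concreteness, here is a sketch of the linear step-up (Benjamini-Hochberg) procedure that the record above builds on, together with a two-stage adaptive wrapper in which a first pass at level q/(1+q) is used to estimate the number of true null hypotheses. The wrapper follows a commonly cited formulation of the two-stage procedure; treat its exact constants as an assumption rather than a faithful reproduction of the paper.

```python
import numpy as np

def bh_stepup(pvals, q):
    """Benjamini-Hochberg linear step-up at level q; returns a boolean rejection mask."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m      # compare sorted p-values to the BH line
    k = int(np.max(np.nonzero(below)[0])) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True                             # reject the k smallest p-values
    return reject

def two_stage_bh(pvals, q):
    """Two-stage adaptive variant: estimate m0 from a first pass, then rerun at an adjusted level."""
    m = len(pvals)
    q1 = q / (1.0 + q)
    r1 = int(bh_stepup(pvals, q1).sum())
    if r1 == 0:
        return np.zeros(m, dtype=bool)
    if r1 == m:
        return np.ones(m, dtype=bool)
    m0_hat = m - r1                                      # estimated number of true nulls
    return bh_stepup(pvals, q1 * m / m0_hat)
```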
{
"docid": "9fdecc8854f539ddf7061c304616130b",
"text": "This paper describes the pricing strategy model deployed at Airbnb, an online marketplace for sharing home and experience. The goal of price optimization is to help hosts who share their homes on Airbnb set the optimal price for their listings. In contrast to conventional pricing problems, where pricing strategies are applied to a large quantity of identical products, there are no \"identical\" products on Airbnb, because each listing on our platform offers unique values and experiences to our guests. The unique nature of Airbnb listings makes it very difficult to estimate an accurate demand curve that's required to apply conventional revenue maximization pricing strategies.\n Our pricing system consists of three components. First, a binary classification model predicts the booking probability of each listing-night. Second, a regression model predicts the optimal price for each listing-night, in which a customized loss function is used to guide the learning. Finally, we apply additional personalization logic on top of the output from the second model to generate the final price suggestions. In this paper, we focus on describing the regression model in the second stage of our pricing system. We also describe a novel set of metrics for offline evaluation. The proposed pricing strategy has been deployed in production to power the Price Tips and Smart Pricing tool on Airbnb. Online A/B testing results demonstrate the effectiveness of the proposed strategy model.",
"title": ""
},
{
"docid": "c09391a25defcb797a7c8da3f429fafa",
"text": "BACKGROUND\nTo examine the postulated relationship between Ambulatory Care Sensitive Conditions (ACSC) and Primary Health Care (PHC) in the US context for the European context, in order to develop an ACSC list as markers of PHC effectiveness and to specify which PHC activities are primarily responsible for reducing hospitalization rates.\n\n\nMETHODS\nTo apply the criteria proposed by Solberg and Weissman to obtain a list of codes of ACSC and to consider the PHC intervention according to a panel of experts. Five selection criteria: i) existence of prior studies; ii) hospitalization rate at least 1/10,000 or 'risky health problem'; iii) clarity in definition and coding; iv) potentially avoidable hospitalization through PHC; v) hospitalization necessary when health problem occurs. Fulfilment of all criteria was required for developing the final ACSC list. A sample of 248,050 discharges corresponding to 2,248,976 inhabitants of Catalonia in 1996 provided hospitalization rate data. A Delphi survey was performed with a group of 44 experts reviewing 113 ICD diagnostic codes (International Classification of Diseases, 9th Revision, Clinical Modification), previously considered to be ACSC.\n\n\nRESULTS\nThe five criteria selected 61 ICD as a core list of ACSC codes and 90 ICD for an expanded list.\n\n\nCONCLUSIONS\nA core list of ACSC as markers of PHC effectiveness identifies health conditions amenable to specific aspects of PHC and minimizes the limitations attributable to variations in hospital admission policies. An expanded list should be useful to evaluate global PHC performance and to analyse market responsibility for ACSC by PHC and Specialist Care.",
"title": ""
}
] |
scidocsrr
|
5b7f27f252bd3910c0dec46571d7f911
|
Medical diagnostic expert system based on PDP model
|
[
{
"docid": "b7597e1f8c8ae4b40f5d7d1fe1f76a38",
"text": "In this paper we present a Time-Delay Neural Network (TDNN) approach to phoneme recognition which is characterized by two important properties. 1) Using a 3 layer arrangement of simple computing units, a hierarchy can be constructed that allows for the formation of arbitrary nonlinear decision surfaces. The TDNN learns these decision surfaces automatically using error backpropagation 111. 2) The time-delay arrangement enables the network to discover acoustic-phonetic features and the temporal relationships between them independent of position in time and hence not blurred by temporal shifts",
"title": ""
}
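A time-delay layer is mathematically a 1-D convolution over successive input frames, so the three-layer TDNN described above can be sketched with Keras Conv1D layers as a modern reformulation. The frame count, filterbank size, layer widths, and the average pooling used to integrate evidence over time are illustrative assumptions, not the original architecture's exact settings.

```python
import tensorflow as tf

n_frames, n_mel = 15, 16        # illustrative input size: spectrogram frames x filterbank bins
model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_frames, n_mel)),
    tf.keras.layers.Conv1D(8, kernel_size=3, activation="sigmoid"),  # hidden layer 1: 3-frame delays
    tf.keras.layers.Conv1D(3, kernel_size=5, activation="sigmoid"),  # hidden layer 2: wider context
    tf.keras.layers.GlobalAveragePooling1D(),                        # integrate evidence over time
    tf.keras.layers.Dense(3, activation="softmax"),                  # e.g. the /b/, /d/, /g/ classes
])
model.compile(optimizer="sgd", loss="categorical_crossentropy")
```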
] |
[
{
"docid": "011ff2d5995a46a686d9edb80f33b8ca",
"text": "In the era of Social Computing, the role of customer reviews and ratings can be instrumental in predicting the success and sustainability of businesses as customers and even competitors use them to judge the quality of a business. Yelp is one of the most popular websites for users to write such reviews. This rating can be subjective and biased toward user's personality. Business preferences of a user can be decrypted based on his/ her past reviews. In this paper, we deal with (i) uncovering latent topics in Yelp data based on positive and negative reviews using topic modeling to learn which topics are the most frequent among customer reviews, (ii) sentiment analysis of users' reviews to learn how these topics associate to a positive or negative rating which will help businesses improve their offers and services, and (iii) predicting unbiased ratings from user-generated review text alone, using Linear Regression model. We also perform data analysis to get some deeper insights into customer reviews.",
"title": ""
},
{
"docid": "487c011cb0701b4b909dedca2d128fe6",
"text": "It is necessary and essential to discovery protein function from the novel primary sequences. Wet lab experimental procedures are not only time-consuming, but also costly, so predicting protein structure and function reliably based only on amino acid sequence has significant value. TATA-binding protein (TBP) is a kind of DNA binding protein, which plays a key role in the transcription regulation. Our study proposed an automatic approach for identifying TATA-binding proteins efficiently, accurately, and conveniently. This method would guide for the special protein identification with computational intelligence strategies. Firstly, we proposed novel fingerprint features for TBP based on pseudo amino acid composition, physicochemical properties, and secondary structure. Secondly, hierarchical features dimensionality reduction strategies were employed to improve the performance furthermore. Currently, Pretata achieves 92.92% TATA-binding protein prediction accuracy, which is better than all other existing methods. The experiments demonstrate that our method could greatly improve the prediction accuracy and speed, thus allowing large-scale NGS data prediction to be practical. A web server is developed to facilitate the other researchers, which can be accessed at http://server.malab.cn/preTata/ .",
"title": ""
},
{
"docid": "ba417e76c8d41cd2e2dae78242682492",
"text": "The N400 ERP component is widely used in research on language and semantic memory. Although the component's relation to semantic processing is well-established, the computational mechanisms underlying N400 generation are currently unclear (Kutas & Federmeier, 2011). We explored the mechanisms underlying the N400 by examining how a connectionist model's performance measures covary with N400 amplitudes. We simulated seven N400 effects obtained in human empirical research. Network error was consistently in the same direction as N400 amplitudes, namely larger for low frequency words, larger for words with many features, larger for words with many orthographic neighbors, and smaller for semantically related target words as well as repeated words. Furthermore, the repetition-induced decrease was stronger for low frequency words, and for words with many semantic features. In contrast, semantic activation corresponded less well with the N400. Our results suggest an interesting relation between N400 amplitudes and semantic network error. In psychological terms, error values in connectionist models have been conceptualized as implicit prediction error, and we interpret our results as support for the idea that N400 amplitudes reflect implicit prediction error in semantic memory (McClelland, 1994).",
"title": ""
},
{
"docid": "7ca863355d1fb9e4954c360c810ece53",
"text": "The detection of community structure is a widely accepted means of investigating the principles governing biological systems. Recent efforts are exploring ways in which multiple data sources can be integrated to generate a more comprehensive model of cellular interactions, leading to the detection of more biologically relevant communities. In this work, we propose a mathematical programming model to cluster multiplex biological networks, i.e. multiple network slices, each with a different interaction type, to determine a single representative partition of composite communities. Our method, known as SimMod, is evaluated through its application to yeast networks of physical, genetic and co-expression interactions. A comparative analysis involving partitions of the individual networks, partitions of aggregated networks and partitions generated by similar methods from the literature highlights the ability of SimMod to identify functionally enriched modules. It is further shown that SimMod offers enhanced results when compared to existing approaches without the need to train on known cellular interactions.",
"title": ""
},
{
"docid": "a34010e8e8bb09889ed771c6a0493aa4",
"text": "In a wide range of applications, audio amplifiers require a large Power Supply Rejection Ratio (PSRR) that the current Class-D architecture cannot reach. This paper proposes a self-adjusting internal voltage reference scheme that sets the bias voltages of the amplifier without losing on output dynamics. This solution relaxes the constraints on gain and feedback resistors matching that were previously the limiting factor for the PSRR. Theory of operation, design and IC evaluation in a Class-D amplifier in CMOS 0.25µm will be shown in this paper. The use of this voltage reference increased the amplifier's PSRR by 15dB, with only a 140µA increase in current consumption.",
"title": ""
},
{
"docid": "04edf5059bcaf3ed361ed65b8897ba8d",
"text": "The flying-capacitor (FC) topology is one of the more well-established ideas of multilevel conversion, typically applied as an inverter. One of the biggest advantages of the FC converter is the ability to naturally balance capacitor voltage. When natural balancing occurs neither measurements, nor additional control is needed to maintain required capacitors voltage sharing. However, in order to achieve natural voltage balancing suitable conditions must be achieved such as the topology, number of levels, modulation strategy as well as impedance of the output circuitry. Nevertheless this method is effectively applied in various classes of the converter such as inverters, multicell DC-DC, switch-mode DC-DC, AC-AC, as well as rectifiers. The next important issue related to the natural balancing process is its dynamics. Furthermore, in order to reinforce the balancing mechanism an auxiliary resonant balancing circuit is utilized in the converter which can also be critical in the AC-AC converters or switch mode DC-DC converters. This paper also presents an issue of choosing modulation strategy for the FC converter due to the fact that the natural balancing process is well-established for phase shifted PWM whilst other types of modulation can be more favorable for the power quality.",
"title": ""
},
{
"docid": "beb1c8ba8809d1ac409584bea1495654",
"text": "Multimodal information processing has received considerable attention in recent years. The focus of existing research in this area has been predominantly on the use of fusion technology. In this paper, we suggest that cross-modal association can provide a new set of powerful solutions in this area. We investigate different cross-modal association methods using the linear correlation model. We also introduce a novel method for cross-modal association called Cross-modal Factor Analysis (CFA). Our earlier work on Latent Semantic Indexing (LSI) is extended for applications that use off-line supervised training. As a promising research direction and practical application of cross-modal association, cross-modal information retrieval where queries from one modality are used to search for content in another modality using low-level features is then discussed in detail. Different association methods are tested and compared using the proposed cross-modal retrieval system. All these methods achieve significant dimensionality reduction. Among them CFA gives the best retrieval performance. Finally, this paper addresses the use of cross-modal association to detect talking heads. The CFA method achieves 91.1% detection accuracy, while LSI and Canonical Correlation Analysis (CCA) achieve 66.1% and 73.9% accuracy, respectively. As shown by experiments, cross-modal association provides many useful benefits, such as robust noise resistance and effective feature selection. Compared to CCA and LSI, the proposed CFA shows several advantages in analysis performance and feature usage. Its capability in feature selection and noise resistance also makes CFA a promising tool for many multimedia analysis applications.",
"title": ""
},
{
"docid": "6f31b0ba60dccb6f1c4ac3e4161f8a44",
"text": "In this work, we propose an alternative solution for parallel wave generation by WaveNet. In contrast to parallel WaveNet (Oord et al., 2018), we distill a Gaussian inverse autoregressive flow from the autoregressive WaveNet by minimizing a novel regularized KL divergence between their highly-peaked output distributions. Our method computes the KL divergence in closed-form, which simplifies the training algorithm and provides very efficient distillation. In addition, we propose the first text-to-wave neural architecture for speech synthesis, which is fully convolutional and enables fast end-to-end training from scratch. It significantly outperforms the previous pipeline that connects a text-to-spectrogram model to a separately trained WaveNet (Ping et al., 2018). We also successfully distill a parallel waveform synthesizer conditioned on the hidden representation in this end-to-end model. 2",
"title": ""
},
{
"docid": "7b347abe744b19215cf7a50ebd1b7f89",
"text": "The thickness of the cerebral cortex was measured in 106 non-demented participants ranging in age from 18 to 93 years. For each participant, multiple acquisitions of structural T1-weighted magnetic resonance imaging (MRI) scans were averaged to yield high-resolution, high-contrast data sets. Cortical thickness was estimated as the distance between the gray/white boundary and the outer cortical surface, resulting in a continuous estimate across the cortical mantle. Global thinning was apparent by middle age. Men and women showed a similar degree of global thinning, and did not differ in mean thickness in the younger or older groups. Age-associated differences were widespread but demonstrated a patchwork of regional atrophy and sparing. Examination of subsets of the data from independent samples produced highly similar age-associated patterns of atrophy, suggesting that the specific anatomic patterns within the maps were reliable. Certain results, including prominent atrophy of prefrontal cortex and relative sparing of temporal and parahippocampal cortex, converged with previous findings. Other results were unexpected, such as the finding of prominent atrophy in frontal cortex near primary motor cortex and calcarine cortex near primary visual cortex. These findings demonstrate that cortical thinning occurs by middle age and spans widespread cortical regions that include primary as well as association cortex.",
"title": ""
},
{
"docid": "e45fdef4d919044f88353a71361a4dd6",
"text": "Today, the Internet security community largely emphasizes cyberspace monitoring for the purpose of generating cyber intelligence. In this paper, we present a survey on darknet. The latter is an effective approach to observe Internet activities and cyber attacks via passive monitoring. We primarily define and characterize darknet and indicate its alternative names. We further list other trap-based monitoring systems and compare them to darknet. Moreover, in order to provide realistic measures and analysis of darknet information, we report case studies, namely, Conficker worm in 2008 and 2009, Sality SIP scan botnet in 2011, and the largest amplification attack in 2014. Finally, we provide a taxonomy in relation to darknet technologies and identify research gaps that are related to three main darknet categories: deployment, traffic analysis, and visualization. Darknet projects are found to monitor various cyber threat activities and are distributed in one third of the global Internet. We further identify that Honeyd is probably the most practical tool to implement darknet sensors, and future deployment of darknet will include mobile-based VOIP technology. In addition, as far as darknet analysis is considered, computer worms and scanning activities are found to be the most common threats that can be investigated throughout darknet; Code Red and Slammer/Sapphire are the most analyzed worms. Furthermore, our study uncovers various lacks in darknet research. For instance, less than 1% of the contributions tackled distributed reflection denial of service (DRDoS) amplification investigations, and at most 2% of research works pinpointed spoofing activities. Last but not least, our survey identifies specific darknet areas, such as IPv6 darknet, event monitoring, and game engine visualization methods that require a significantly greater amount of attention from the research community.",
"title": ""
},
{
"docid": "b5d307c368319a5c8473908791c0f62a",
"text": "As the number of people in need of help increases, the degree of compassion people feel for them ironically tends to decrease. This phenomenon is termed the collapse of compassion. Some researchers have suggested that this effect happens because emotions are not triggered by aggregates. We provide evidence for an alternative account. People expect the needs of large groups to be potentially overwhelming, and, as a result, they engage in emotion regulation to prevent themselves from experiencing overwhelming levels of emotion. Because groups are more likely than individuals to elicit emotion regulation, people feel less for groups than for individuals. In Experiment 1, participants displayed the collapse of compassion only when they expected to be asked to donate money to the victims. This suggests that the effect is motivated by self-interest. Experiment 2 showed that the collapse of compassion emerged only for people who were skilled at emotion regulation. In Experiment 3, we manipulated emotion regulation. Participants who were told to down-regulate their emotions showed the collapse of compassion, but participants who were told to experience their emotions did not. We examined the time course of these effects using a dynamic rating to measure affective responses in real time. The time course data suggested that participants regulate emotion toward groups proactively, by preventing themselves from ever experiencing as much emotion toward groups as toward individuals. These findings provide initial evidence that motivated emotion regulation drives insensitivity to mass suffering.",
"title": ""
},
{
"docid": "193aee1131ce05d5d4a4316871c193b8",
"text": "In this paper, we discuss wireless sensor and networking technologies for swarms of inexpensive aquatic surface drones in the context of the HANCAD project. The goal is to enable the swarm to perform maritime tasks such as sea-border patrolling and environmental monitoring, while keeping the cost of each drone low. Communication between drones is essential for the success of the project. Preliminary experiments show that XBee modules are promising for energy efficient multi-hop drone-to-drone communication.",
"title": ""
},
{
"docid": "8f0975954a3767eab03f68884ecb54fa",
"text": "Digital Image processing has become popular and rapidly growing area of application under Computer Science. A basic study of image processing and its application areas are carried out in this paper. Each of these applications may be unique from others. To illustrate the basic concepts of image processing various reviews had done in this paper. The main two applications of digital image processing are discussed below. Firstly pictorial information can be improved for human perception, secondly for autonomous machine perception and efficient storage processing using image data. Digital image can be represented using set of digital values called pixels. Pixel value represent opacities, colors, gray levels, heights etc. Digitization causes a digital image to become an approximation of a real scene. To process an image some operations are applied on image. This paper discusses about the basic aspects of image processing .Image Acquisition means sensing an image .Image Enhancement means improvement in appearance of image .Image Restoration to restore an image .Image Compression to reduce the amount of data of an image to reduce size .This class of technique also include extraction/selection procedures .The importance applications of image processing include Artistic effects ,Bio-medical ,Industrial Inspection ,Geographic Information system ,Law Enforcement, Human Computer interface such as Face Recognition and Gesture recognition.",
"title": ""
},
{
"docid": "e971fd6eac427df9a68f10cad490b2db",
"text": "We present a corpus of 5,000 richly annotated abstracts of medical articles describing clinical randomized controlled trials. Annotations include demarcations of text spans that describe the Patient population enrolled, the Interventions studied and to what they were Compared, and the Outcomes measured (the 'PICO' elements). These spans are further annotated at a more granular level, e.g., individual interventions within them are marked and mapped onto a structured medical vocabulary. We acquired annotations from a diverse set of workers with varying levels of expertise and cost. We describe our data collection process and the corpus itself in detail. We then outline a set of challenging NLP tasks that would aid searching of the medical literature and the practice of evidence-based medicine.",
"title": ""
},
{
"docid": "3dbd27e460fd9d3d80967c8215e7cb29",
"text": "Transmission line sag, tension and conductor length varies with the variation of temperature due to thermal expansion and elastic elongation. Beside thermal effect, wind pressure and ice accumulation creates a horizontal and vertical loading on the conductor respectively. Such changes make the calculated data uncertain and require an uncertainty model. A novel affine arithmetic (AA) based transmission line sag, tension and conductor length calculation for parabolic curve is proposed and the proposed method is tested for different test cases. The results are compared with Monte Carlo (MC) and interval arithmetic (IA) methods. The AA based result gives a more conservative bound than MC and IA method in all the cases.",
"title": ""
},
{
"docid": "b9b027c5b511a5528d35cd05d3d57ff4",
"text": "A plasmid is defined as a double stranded, circular DNA molecule capable of autonomous replication. By definition, plasmids do not carry genes essential for the growth of host cells under non-stressed conditions but they have systems which guarantee their autonomous replication also controlling the copy number and ensuring stable inheritance during cell division. Most of the plasmids confer positively selectable phenotypes by the presence of antimicrobial resistance genes. Plasmids evolve as an integral part of the bacterial genome, providing resistance genes that can be easily exchanged among bacteria of different origin and source by conjugation. A multidisciplinary approach is currently applied to study the acquisition and spread of antimicrobial resistance in clinically relevant bacterial pathogens and the established surveillance can be implemented by replicon typing of plasmids. Particular plasmid families are more frequently detected among Enterobacteriaceae and play a major role in the diffusion of specific resistance genes. For instance, IncFII, IncA/C, IncL/M, IncN and IncI1 plasmids carrying extended-spectrum beta-lactamase genes and acquired AmpC genes are currently considered to be \"epidemic resistance plasmids\", being worldwide detected in Enterobacteriaceae of different origin and sources. The recognition of successful plasmids is an essential first step to design intervention strategies preventing their spread.",
"title": ""
},
{
"docid": "83234fe474236c6c1737268643a16e6d",
"text": "Learning to rank method has been proposed for practical application in the field of information retrieval. When employing it in microblog retrieval, the significant interactions of various involved features are rarely considered. In this paper, we propose a Ranking Factorization Machine (Ranking FM) model, which applies Factorization Machine model to microblog ranking on basis of pairwise classification. In this way, our proposed model combines the generality of learning to rank framework with the advantages of factorization models in estimating interactions between features, leading to better retrieval performance. Moreover, three groups of features (content relevance features, semantic expansion features and quality features) and their interactions are utilized in the Ranking FM model with the methods of stochastic gradient descent and adaptive regularization for optimization. Experimental results demonstrate its superiority over several baseline systems on a real Twitter dataset in terms of P@30 and MAP metrics. Furthermore, it outperforms the best performing results in the TREC'12 Real-Time Search Task.",
"title": ""
},
{
"docid": "8207f59dab8704d14874417f6548c0a7",
"text": "The fully-connected layers of deep convolutional neural networks typically contain over 90% of the network parameters. Reducing the number of parameters while preserving predictive performance is critically important for training big models in distributed systems and for deployment in embedded devices. In this paper, we introduce a novel Adaptive Fastfood transform to reparameterize the matrix-vector multiplication of fully connected layers. Reparameterizing a fully connected layer with d inputs and n outputs with the Adaptive Fastfood transform reduces the storage and computational costs costs from O(nd) to O(n) and O(n log d) respectively. Using the Adaptive Fastfood transform in convolutional networks results in what we call a deep fried convnet. These convnets are end-to-end trainable, and enable us to attain substantial reductions in the number of parameters without affecting prediction accuracy on the MNIST and ImageNet datasets.",
"title": ""
},
{
"docid": "a033b1701cc2709dcbf3353e38bc0ac7",
"text": "Requirements reusability within agile development improves software quality and team productivity. One method to implement requirements reusability is traceability, in which relations and dependencies between requirements and artifacts are identified and linked. In this paper, we propose a semiautomated methodology to implement traceability in the agile development process in order to achieve requirements reusability. The main feature of our methodology is the coupling of semi-automated semantic trace generation with the outputs of the agile development process, thus facilitating requirements and artifact reuse. In contrast to previous work, this methodology is specifically designed for practical agile processes and artifacts. Our methodology will be implemented as a component within an existing open source agile tool in order to have minimal impact on the development process. This paper fills a current gap in the area of requirements reusability through traceability and contributes to the limited existing work in agile traceability methodologies.",
"title": ""
},
{
"docid": "b83cd79ce5086124ab7920ab589e61bf",
"text": "Many of today’s most successful video segmentation methods use long-term feature trajectories as their first processing step. Such methods typically use spectral clustering to segment these trajectories, implicitly assuming that motion is translational in image space. In this paper, we explore the idea of explicitly fitting more general motion models in order to classify trajectories as foreground or background. We find that homographies are sufficient to model a wide variety of background motions found in real-world videos. Our simple approach achieves competitive performance on the DAVIS benchmark, while using techniques complementary to state-of-the-art approaches.",
"title": ""
}
] |
scidocsrr
|
9bcd2b79cc3cdb4b6461ca2e25490f33
|
Learning to Represent Words in Context with Multilingual Supervision
|
[
{
"docid": "2917b7b1453f9e6386d8f47129b605fb",
"text": "We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form–function relationship in language, our “composed” word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish).",
"title": ""
},
{
"docid": "09df260d26638f84ec3bd309786a8080",
"text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize. com/projects/wordreprs/",
"title": ""
}
] |
[
{
"docid": "6a8a849bc8272a7b73259e732e3be81b",
"text": "Northrop Grumman is developing an atom-based magnetometer technology that has the potential for providing a global position reference independent of GPS. The NAV-CAM sensor is a direct outgrowth of the Nuclear Magnetic Resonance Gyro under development by the same technical team. It is capable of providing simultaneous measurements of all 3 orthogonal axes of magnetic vector field components using a single compact vapor cell. The vector sum determination of the whole-field scalar measurement achieves similar precision to the individual vector components. By using a single sensitive element (vapor cell) this approach eliminates many of the problems encountered when using physically separate sensors or sensing elements.",
"title": ""
},
{
"docid": "0472166a123f56606cd84a65bab89ce4",
"text": "How can we automatically identify the topics of microblog posts? This question has received substantial attention in the research community and has led to the development of different topic models, which are mathematically well-founded statistical models that enable the discovery of topics in document collections. Such models can be used for topic analyses according to the interests of user groups, time, geographical locations, or social behavior patterns. The increasing availability of microblog posts with associated users, textual content, timestamps, geo-locations, and user behaviors, offers an opportunity to study space-time dependent behavioral topics. Such a topic is described by a set of words, the distribution of which varies according to the time, geo-location, and behaviors (that capture how a user interacts with other users by using functionality such as reply or re-tweet) of users. This study jointly models user topic interest and behaviors considering both space and time at a fine granularity. We focus on the modeling of microblog posts like Twitter tweets, where the textual content is short, but where associated information in the form of timestamps, geo-locations, and user interactions is available. The model aims to have applications in location inference, link prediction, online social profiling, etc. We report on experiments with tweets that offer insight into the design properties of the papers proposal.",
"title": ""
},
{
"docid": "8612fc94f5a0c8ba6585307cd6ee721f",
"text": "CONTEXT\nResearch has transformed many areas of medicine, with profound effects on morbidity and mortality. Exciting advances in neuroscience and genomics have transformed research but have not yet been translated to public health impact in psychiatry. Current treatments are necessary but not sufficient for most patients.\n\n\nOBJECTIVES\nTo improve outcomes we will need to (1) identify the neural circuitry of mental disorders, (2) detect the earliest manifestations of risk or illness even before cognition or behavior appear abnormal, (3) personalize care based on individual responses, and (4) implement broader use of effective psychosocial interventions.\n\n\nRESULTS\nTo address these objectives, NIMH, working with its many stakeholders, developed a strategic plan for research. The plan calls for research that will (1) define the pathophysiology of disorders from genes to behavior, (2) map the trajectory of illness to determine when, where, and how to intervene to preempt disability, (3) develop new interventions based on a personalized approach to the diverse needs and circumstances of people with mental illnesses, and (4) strengthen the public health impact of NIMH-supported research by focusing on dissemination science and disparities in care.\n\n\nCONCLUSIONS\nThe NIMH is shifting its funding priorities to close the gap between basic biological knowledge and effective mental health care, paving the way for prevention, recovery, and cure.",
"title": ""
},
{
"docid": "bcacff8549273a0e8cf4993fef4e1b8d",
"text": "This paper describes the first shared task on Taxonomy Extraction Evaluation organised as part of SemEval-2015. Participants were asked to find hypernym-hyponym relations between given terms. For each of the four selected target domains the participants were provided with two lists of domainspecific terms: a WordNet collection of terms and a well-known terminology extracted from an online publicly available taxonomy. A total of 45 taxonomies submitted by 6 participating teams were evaluated using standard structural measures, the structural similarity with a gold standard taxonomy, and through manual quality assessment of sampled novel relations.",
"title": ""
},
{
"docid": "e2c239bed763d13117e943ef988827f1",
"text": "This paper presents a comprehensive review of 196 studies which employ operational research (O.R.) and artificial intelligence (A.I.) techniques in the assessment of bank performance. Several key issues in the literature are highlighted. The paper also points to a number of directions for future research. We first discuss numerous applications of data envelopment analysis which is the most widely applied O.R. technique in the field. Then we discuss applications of other techniques such as neural networks, support vector machines, and multicriteria decision aid that have also been used in recent years, in bank failure prediction studies and the assessment of bank creditworthiness and underperformance. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "874b14b3c3e15b43de3310327affebaf",
"text": "We present the Accelerated Quadratic Proxy (AQP) - a simple first-order algorithm for the optimization of geometric energies defined over triangular and tetrahedral meshes.\n The main stumbling block of current optimization techniques used to minimize geometric energies over meshes is slow convergence due to ill-conditioning of the energies at their minima. We observe that this ill-conditioning is in large part due to a Laplacian-like term existing in these energies. Consequently, we suggest to locally use a quadratic polynomial proxy, whose Hessian is taken to be the Laplacian, in order to achieve a preconditioning effect. This already improves stability and convergence, but more importantly allows incorporating acceleration in an almost universal way, that is independent of mesh size and of the specific energy considered.\n Experiments with AQP show it is rather insensitive to mesh resolution and requires a nearly constant number of iterations to converge; this is in strong contrast to other popular optimization techniques used today such as Accelerated Gradient Descent and Quasi-Newton methods, e.g., L-BFGS. We have tested AQP for mesh deformation in 2D and 3D as well as for surface parameterization, and found it to provide a considerable speedup over common baseline techniques.",
"title": ""
},
{
"docid": "bdaf14f08c40c67bedb4d09ee404774a",
"text": "Many computer vision problems are formulated as the optimization of a cost function. This approach faces two main challenges: (1) designing a cost function with a local optimum at an acceptable solution, and (2) developing an efficient numerical method to search for one (or multiple) of these local optima. While designing such functions is feasible in the noiseless case, the stability and location of local optima are mostly unknown under noise, occlusion, or missing data. In practice, this can result in undesirable local optima or not having a local optimum in the expected place. On the other hand, numerical optimization algorithms in high-dimensional spaces are typically local and often rely on expensive first or second order information to guide the search. To overcome these limitations, this paper proposes Discriminative Optimization (DO), a method that learns search directions from data without the need of a cost function. Specifically, DO explicitly learns a sequence of updates in the search space that leads to stationary points that correspond to desired solutions. We provide a formal analysis of DO and illustrate its benefits in the problem of 2D and 3D point cloud registration both in synthetic and range-scan data. We show that DO outperforms state-of-the-art algorithms by a large margin in terms of accuracy, robustness to perturbations, and computational efficiency.",
"title": ""
},
{
"docid": "f5d1b4e182f2c5555ced3e5af1304093",
"text": "We identified 7 theoretical models that have been used to explain technology adoption and use. We then examined the boundary conditions of these models of technology adoption when applied to the household context using longitudinal empirical data from households regarding their purchase and use decisions related to household technologies. We conducted 2 studies and collected 1,247 responses from U.S. households for the first study and 2,064 responses from U.S. households for the second study. Those households that had adopted household technologies were surveyed regarding their use behavior. Potential adopters (i.e., those who had currently not adopted) were surveyed regarding their purchase intentions. This allowed us to identify the most influential factors affecting a household’s decision to adopt and use technologies. The results show that the model of adoption of technology in the household provided the richest explanation and explained best why households purchase and use technologies.",
"title": ""
},
{
"docid": "576c215649f09f2f6fb75369344ce17f",
"text": "The emergence of two new technologies, namely, software defined networking (SDN) and network function virtualization (NFV), have radically changed the development of network functions and the evolution of network architectures. These two technologies bring to mobile operators the promises of reducing costs, enhancing network flexibility and scalability, and shortening the time-to-market of new applications and services. With the advent of SDN and NFV and their offered benefits, the mobile operators are gradually changing the way how they architect their mobile networks to cope with ever-increasing growth of data traffic, massive number of new devices and network accesses, and to pave the way toward the upcoming fifth generation networking. This survey aims at providing a comprehensive survey of state-of-the-art research work, which leverages SDN and NFV into the most recent mobile packet core network architecture, evolved packet core. The research work is categorized into smaller groups according to a proposed four-dimensional taxonomy reflecting the: 1) architectural approach, 2) technology adoption, 3) functional implementation, and 4) deployment strategy. Thereafter, the research work is exhaustively compared based on the proposed taxonomy and some added attributes and criteria. Finally, this survey identifies and discusses some major challenges and open issues, such as scalability and reliability, optimal resource scheduling and allocation, management and orchestration, and network sharing and slicing that raise from the taxonomy and comparison tables that need to be further investigated and explored.",
"title": ""
},
{
"docid": "65db3963c690a80bbe86622da021595a",
"text": "This article presents a very efficient SLAM algorithm that works by hierarchically dividing a map into local regions and subregions. At each level of the hierarchy each region stores a matrix representing some of the landmarks contained in this region. To keep those matrices small, only those landmarks are represented that are observable from outside the region. A measurement is integrated into a local subregion using O(k2) computation time for k landmarks in a subregion. When the robot moves to a different subregion a full leastsquare estimate for that region is computed in only O(k3 log n) computation time for n landmarks. A global least square estimate needs O(kn) computation time with a very small constant (12.37 ms for n = 11300). The algorithm is evaluated for map quality, storage space and computation time using simulated and real experiments in an office environment.",
"title": ""
},
{
"docid": "02e1994a5f6ecd3f6f4cc362b6e5af3b",
"text": "Risk management has been recognized as an effective way to reduce system development failure. Information system development (ISD) is a highly complex and unpredictable activity associated with high risks. With more and more organizations outsource or offshore substantial resources in system development, organizations face up new challenges and risks not common to traditional development models. Classical risk management approaches have relied on tactical, bottomup analysis, which do not readily scale to distributed environment. Therefore, risk management in distributed environment is becoming a critical area of concern. This paper uses a systemic approach developed by Software Engineering Institute to identify risks of ISD in distributed environment. Four key risk factors were identified from prior literature: objective, preparation, execution, and environment. In addition, the impact of these four risk factors on the success of information system development will also be examined.",
"title": ""
},
{
"docid": "4f43a692ff8f6aed3a3fc4521c86d35e",
"text": "LEARNING OBJECTIVES\nAfter reading this article, the participant should be able to: 1. Understand the challenges in restoring volume and structural integrity in rhinoplasty. 2. Identify the appropriate uses of various autografts in aesthetic and reconstructive rhinoplasty (septal cartilage, auricular cartilage, costal cartilage, calvarial and nasal bone, and olecranon process of the ulna). 3. Identify the advantages and disadvantages of each of these autografts.\n\n\nSUMMARY\nThis review specifically addresses the use of autologous grafts in rhinoplasty. Autologous materials remain the preferred graft material for use in rhinoplasty because of their high biocompatibility and low risk of infection and extrusion. However, these advantages should be counterbalanced with the concerns of donor-site morbidity, graft availability, and graft resorption.",
"title": ""
},
{
"docid": "8f01d2e70ec5da655418a6864e94b932",
"text": "Cloud storage services allow users to outsource their data to cloud servers to save on local data storage costs. However, unlike using local storage devices, users don't physically own the data stored on cloud servers and can't be certain about the integrity of the cloud-stored data. Many public verification schemes have been proposed to allow a third-party auditor to verify the integrity of outsourced data. However, most of these schemes assume that the auditors are honest and reliable, so are vulnerable to malicious auditors. Moreover, in most of these schemes, an external adversary could modify the outsourced data and tamper with the interaction messages between the cloud server and the auditor, thus invalidating the outsourced data integrity verification. This article proposes an efficient and secure public verification of data integrity scheme that protects against external adversaries and malicious auditors. The proposed scheme adopts a random masking technique to protect against external adversaries, and requires users to audit auditors' behaviors to prevent malicious auditors from fabricating verification results. It uses Bitcoin to construct unbiased challenge messages to thwart collusion between malicious auditors and cloud servers. A performance analysis demonstrates that the proposed scheme is efficient in terms of the user's auditing overhead.",
"title": ""
},
{
"docid": "752cf1c7cefa870c01053d87ff4f445c",
"text": "Cannabidiol (CBD) represents a new promising drug due to a wide spectrum of pharmacological actions. In order to relate CBD clinical efficacy to its pharmacological mechanisms of action, we performed a bibliographic search on PUBMED about all clinical studies investigating the use of CBD as a treatment of psychiatric symptoms. Findings to date suggest that (a) CBD may exert antipsychotic effects in schizophrenia mainly through facilitation of endocannabinoid signalling and cannabinoid receptor type 1 antagonism; (b) CBD administration may exhibit acute anxiolytic effects in patients with generalised social anxiety disorder through modification of cerebral blood flow in specific brain sites and serotonin 1A receptor agonism; (c) CBD may reduce withdrawal symptoms and cannabis/tobacco dependence through modulation of endocannabinoid, serotoninergic and glutamatergic systems; (d) the preclinical pro-cognitive effects of CBD still lack significant results in psychiatric disorders. In conclusion, current evidences suggest that CBD has the ability to reduce psychotic, anxiety and withdrawal symptoms by means of several hypothesised pharmacological properties. However, further studies should include larger randomised controlled samples and investigate the impact of CBD on biological measures in order to correlate CBD's clinical effects to potential modifications of neurotransmitters signalling and structural and functional cerebral changes.",
"title": ""
},
{
"docid": "3cdca28361b7c2b9525b476e9073fc10",
"text": "The proliferation of MP3 players and the exploding amount of digital music content call for novel ways of music organization and retrieval to meet the ever-increasing demand for easy and effective information access. As almost every music piece is created to convey emotion, music organization and retrieval by emotion is a reasonable way of accessing music information. A good deal of effort has been made in the music information retrieval community to train a machine to automatically recognize the emotion of a music signal. A central issue of machine recognition of music emotion is the conceptualization of emotion and the associated emotion taxonomy. Different viewpoints on this issue have led to the proposal of different ways of emotion annotation, model training, and result visualization. This article provides a comprehensive review of the methods that have been proposed for music emotion recognition. Moreover, as music emotion recognition is still in its infancy, there are many open issues. We review the solutions that have been proposed to address these issues and conclude with suggestions for further research.",
"title": ""
},
{
"docid": "f4e6c9e4ed147a7864bd28d533b8ac38",
"text": "The Milky Way Galaxy contains an unknown number, N , of civilizations that emit electromagnetic radiation (of unknown wavelengths) over a finite lifetime, L. Here we are assuming that the radiation is not produced indefinitely, but within L as a result of some unknown limiting event. When a civilization stops emitting, the radiation continues traveling outward at the speed of light, c, but is confined within a shell wall having constant thickness, cL. We develop a simple model of the Galaxy that includes both the birthrate and detectable lifetime of civilizations to compute the possibility of a SETI detection at the Earth. Two cases emerge for radiation shells that are (1) thinner than or (2) thicker than the size of the Galaxy, corresponding to detectable lifetimes, L, less than or greater than the light-travel time, ∼ 100, 000 years, across the Milky Way, respectively. For case (1), each shell wall has a thickness smaller than the size of the Galaxy and intersects the galactic plane in a donut shape (annulus) that fills only a fraction of the Galaxy’s volume, inhibiting SETI detection. But the ensemble of such shell walls may still fill our Galaxy, and indeed may overlap locally, given a sufficiently high birthrate of detectable civilizations. In the second case, each radiation shell is thicker than the size of our Galaxy. Yet, the ensemble of walls may or may not yield a SETI detection depending on the civilization birthrate. We compare the number of different electromagnetic transmissions arriving at Earth to Drake’s N , the number of currently emitting civilizations, showing that they are equal to each other for both cases (1) and (2). However, for L < 100, 000 years, the transmissions arriving at Earth may come from distant civilizations long extinct, while civilizations still alive are sending signals yet to arrive.",
"title": ""
},
{
"docid": "b049e5249d3c0fc52706a54ee767480e",
"text": "In dialogical argumentation, it is often assumed that the involved parties will always correctly identify the intended statements posited by each other and realize all of the associated relations, conform to the three acceptability states (accepted, rejected, undecided), adjust their views whenever new and correct information comes in, and that a framework handling only attack relations is sufficient to represent their opinions. Although it is natural to make these assumptions as a starting point for further research, dropping some of them has become quite challenging. Probabilistic argumentation is one of the approaches that can be harnessed for more accurate user modelling. The epistemic approach allows us to represent how much a given argument is believed or disbelieved by a given person, offering us the possibility to express more than just three agreement states. It comes equipped with a wide range of postulates, including those that do not make any restrictions concerning how initial arguments should be viewed. Thus, this approach is potentially more suitable for handling beliefs of the people that have not fully disclosed their opinions or counterarguments with respect to standard Dung’s semantics. The constellation approach can be used to represent the views of different people concerning the structure of the framework we are dealing with, including situations in which not all relations are acknowledged or when they are seen differently than intended. Finally, bipolar argumentation frameworks can be used to express both positive and negative relations between arguments. In this paper we will describe the results of an experiment in which participants were asked to judge dialogues in terms of agreement and structure. We will compare our findings with the aforementioned assumptions as well as with the constellation and epistemic approaches to probabilistic argumentation and bipolar argumentation. Keywords— Dialogical argumentation, probabilistic argumentation, abstract argumentation ∗This research is funded by EPSRC Project EP/N008294/1 “Framework for Computational Persuasion”.We thank the reviewers for their valuable comments that helped us to improve this paper.",
"title": ""
},
{
"docid": "5a83cb0ef928b6cae6ce1e0b21d47f60",
"text": "Software defined networking, characterized by a clear separation of the control and data planes, is being adopted as a novel paradigm for wired networking. With SDN, network operators can run their infrastructure more efficiently, supporting faster deployment of new services while enabling key features such as virtualization. In this article, we adopt an SDN-like approach applied to wireless mobile networks that will not only benefit from the same features as in the wired case, but will also leverage on the distinct features of mobile deployments to push improvements even further. We illustrate with a number of representative use cases the benefits of the adoption of the proposed architecture, which is detailed in terms of modules, interfaces, and high-level signaling. We also review the ongoing standardization efforts, and discuss the potential advantages and weaknesses, and the need for a coordinated approach.",
"title": ""
},
{
"docid": "11b11bf5be63452e28a30b4494c9a704",
"text": "Advertisement and Brand awareness plays an important role in brand building, brand recognition, brand loyalty and boost up the sales performance which is regarded as the foundation for brand development. To some degree advertisement and brand awareness can directly influence consumers’ buying behavior. The female consumers from IT industry have been taken as main consumers for the research purpose. The researcher seeks to inspect and investigate brand’s intention factors and consumer’s individual factors in influencing advertisement and its impact of brand awareness on fast moving consumer goods especially personal care products .The aim of the paper is to examine the advertising and its impact of brand awareness towards FMCG Products, on the other hand, to analyze the influence of advertising on personal care products among female consumers in IT industry and finally to study the impact of media on advertising & brand awareness. The prescribed survey were conducted in the form of questionnaire and found valid and reliable for this research. After evaluating some questions, better questionnaires were developed. Then the questionnaires were distributed among 200 female consumers with a response rate of 100%. We found that advertising has constantly a significant positive effect on brand awareness and consumers perceive the brand awareness with positive attitude. Findings depicts that advertising and brand awareness have strong positive influence and considerable relationship with purchase intention of the consumer. This research highlights that female consumers of personal care products in IT industry are more brand conscious and aware about their personal care products. Advertisement and brand awareness affects their purchase intention positively; also advertising media positively influences the brand awareness and purchase intention of the female consumers. The obtained data were then processed by Pearson correlation, multiple regression analysis and ANOVA. A Study On Advertising And Its Impact Of Brand Awareness On Fast Moving Consumer Goods With Reference To Personal Care Products In Chennai Paper ID IJIFR/ V2/ E9/ 068 Page No. 3325-3333 Subject Area Business Administration",
"title": ""
}
] |
scidocsrr
|
cf599080ac452ddf42ef829d43d97f1f
|
Enabling and Exploiting Flexible Task Assignment on GPU through SM-Centric Program Transformations
|
[
{
"docid": "711675a8e053e963ae59290db94cb75f",
"text": "Heterogeneous multiprocessors are increasingly important in the multi-core era due to their potential for high performance and energy efficiency. In order for software to fully realize this potential, the step that maps computations to processing elements must be as automated as possible. However, the state-of-the-art approach is to rely on the programmer to specify this mapping manually and statically. This approach is not only labor intensive but also not adaptable to changes in runtime environments like problem sizes and hardware/software configurations. In this study, we propose adaptive mapping, a fully automatic technique to map computations to processing elements on a CPU+GPU machine. We have implemented it in our experimental heterogeneous programming system called Qilin. Our results show that, by judiciously distributing works over the CPU and GPU, automatic adaptive mapping achieves a 25% reduction in execution time and a 20% reduction in energy consumption than static mappings on average for a set of important computation benchmarks. We also demonstrate that our technique is able to adapt to changes in the input problem size and system configuration.",
"title": ""
}
] |
[
{
"docid": "5d398e35d6dc58b56a9257623cb83db0",
"text": "BACKGROUND\nAlthough much has been published with regard to the columella assessed on the frontal and lateral views, a paucity of literature exists regarding the basal view of the columella. The objective of this study was to evaluate the spectrum of columella deformities and devise a working classification system based on underlying anatomy.\n\n\nMETHODS\nA retrospective study was performed of 100 consecutive patients who presented for primary rhinoplasty. The preoperative basal view photographs for each patient were reviewed to determine whether they possessed ideal columellar aesthetics. Patients who had deformity of their columella were further scrutinized to determine the most likely underlying cause of the subsequent abnormality.\n\n\nRESULTS\nOf the 100 patient photographs assessed, only 16 (16 percent) were found to display ideal norms of the columella. The remaining 84 of 100 patients (84 percent) had some form of aesthetic abnormality and were further classified based on the most likely underlying cause. Type 1 deformities (caudal septum and/or spine) constituted 18 percent (18 of 100); type 2 (medial crura), 12 percent (12 of 100); type 3 (soft tissue), 6 percent (six of 100); and type 4 (combination), 48 percent (48 of 100).\n\n\nCONCLUSIONS\nDeformities may be classified according to the underlying cause, with combined deformity being the most common. Use of the herein discussed classification scheme will allow surgeons to approach this region in a comprehensive manner. Furthermore, use of such a system allows for a more standardized approach for surgical treatment.",
"title": ""
},
{
"docid": "cf6d0e1b0fd5a258fdcdb5a9fe8d2b65",
"text": "UNLABELLED\nPrevious studies have shown that resistance training with restricted venous blood flow (Kaatsu) results in significant strength gains and muscle hypertrophy. However, few studies have examined the concurrent vascular responses following restrictive venous blood flow training protocols.\n\n\nPURPOSE\nThe purpose of this study was to examine the effects of 4 wk of handgrip exercise training, with and without venous restriction, on handgrip strength and brachial artery flow-mediated dilation (BAFMD).\n\n\nMETHODS\nTwelve participants (mean +/- SD: age = 22 +/- 1 yr, men = 5, women = 7) completed 4 wk of bilateral handgrip exercise training (duration = 20 min, intensity = 60% of the maximum voluntary contraction, cadence = 15 grips per minute, frequency = three sessions per week). During each session, venous blood flow was restricted in one arm (experimental (EXP) arm) using a pneumatic cuff placed 4 cm proximal to the antecubital fossa and inflated to 80 mm Hg for the duration of each exercise session. The EXP and the control (CON) arms were randomly selected. Handgrip strength was measured using a hydraulic hand dynamometer. Brachial diameters and blood velocity profiles were assessed, using Doppler ultrasonography, before and after 5 min of forearm occlusion (200 mm Hg) before and at the end of the 4-wk exercise.\n\n\nRESULTS\nAfter exercise training, handgrip strength increased 8.32% (P = 0.05) in the CON arm and 16.17% (P = 0.05) in the EXP arm. BAFMD increased 24.19% (P = 0.0001) in the CON arm and decreased 30.36% (P = 0.0001) in the EXP arm.\n\n\nCONCLUSIONS\nThe data indicate handgrip training combined with venous restriction results in superior strength gains but reduced BAFMD compared with the nonrestricted arm.",
"title": ""
},
{
"docid": "25f9981f81350d9f30dae2284377eb08",
"text": "In traditional computer games, it is not uncommon for the game world to be inhabited by numerous computer-generated characters, Non-Player Characters (NPCs). In pervasive games, players play among human non-players as well and it becomes very tempting to use them as a game asset; as non-playing characters. Humans behave unpredictably and intelligently, and for this reason games set in real social context become more challenging for players than any preprogrammed environment can be. But however tempting the idea is, the use of non-players has implications on people's personal privacy. We report on a scenario-based study where people were interviewed about a set of game designs, all to some extent relying on information about non-players. We propose that in particular non-player anonymity and the ability to hold players accountable for their actions will affect non-player acceptance of pervasive games.",
"title": ""
},
{
"docid": "fd8f4206ae749136806a35c0fe1597c7",
"text": "In this paper, an inductor-inductor-capacitor (LLC) resonant dc-dc converter design procedure for an onboard lithium-ion battery charger of a plug-in hybrid electric vehicle (PHEV) is presented. Unlike traditional resistive load applications, the characteristic of a battery load is nonlinear and highly related to the charging profiles. Based on the features of an LLC converter and the characteristics of the charging profiles, the design considerations are studied thoroughly. The worst-case conditions for primary-side zero-voltage switching (ZVS) operation are analytically identified based on fundamental harmonic approximation when a constant maximum power (CMP) charging profile is implemented. Then, the worst-case operating point is used as the design targeted point to ensure soft-switching operation globally. To avoid the inaccuracy of fundamental harmonic approximation approach in the below-resonance region, the design constraints are derived based on a specific operation mode analysis. Finally, a step-by-step design methodology is proposed and validated through experiments on a prototype converting 400 V from the input to an output voltage range of 250-450 V at 3.3 kW with a peak efficiency of 98.2%.",
"title": ""
},
{
"docid": "747319dc1492cf26e9b9112e040cbba7",
"text": "Markerless tracking of hands and fingers is a promising enabler for human-computer interaction. However, adoption has been limited because of tracking inaccuracies, incomplete coverage of motions, low framerate, complex camera setups, and high computational requirements. In this paper, we present a fast method for accurately tracking rapid and complex articulations of the hand using a single depth camera. Our algorithm uses a novel detectionguided optimization strategy that increases the robustness and speed of pose estimation. In the detection step, a randomized decision forest classifies pixels into parts of the hand. In the optimization step, a novel objective function combines the detected part labels and a Gaussian mixture representation of the depth to estimate a pose that best fits the depth. Our approach needs comparably less computational resources which makes it extremely fast (50 fps without GPU support). The approach also supports varying static, or moving, camera-to-scene arrangements. We show the benefits of our method by evaluating on public datasets and comparing against previous work.",
"title": ""
},
{
"docid": "2d28ee78421cec2285c35e69b8bddcc1",
"text": "LAPAROSCOPIC TRANSABDOMINAL PRE-PERITONEAL PROCEDURE (TAPP) FOR GROIN HERNIA. HOW TO DO IT FOR BETTER OUTCOMES (Abstract): The laparoscopic approach for the groin hernia repair has several advantages: decreased immediate and late postoperative pain, less numbness in inguinal aria, less mesh infection and a rapid recovery. However the good outcomes are not granted, and there are some key points to be followed for better postoperative results. The aim of this video is to highlight these TAPP (TransAbdominal Pre-Peritoneal) related key points, from the operative indication, pre operative preparation and surgical procedure, until the post operative follow up.",
"title": ""
},
{
"docid": "9bfba29f44c585df56062582d4e35ba5",
"text": "We address the problem of optimizing recommender systems for multiple relevance objectives that are not necessarily aligned. Specifically, given a recommender system that optimizes for one aspect of relevance, semantic matching (as defined by any notion of similarity between source and target of recommendation; usually trained on CTR), we want to enhance the system with additional relevance signals that will increase the utility of the recommender system, but that may simultaneously sacrifice the quality of the semantic match. The issue is that semantic matching is only one relevance aspect of the utility function that drives the recommender system, albeit a significant aspect. In talent recommendation systems, job posters want candidates who are a good match to the job posted, but also prefer those candidates to be open to new opportunities. Recommender systems that recommend discussion groups must ensure that the groups are relevant to the users' interests, but also need to favor active groups over inactive ones. We refer to these additional relevance signals (job-seeking intent and group activity) as extraneous features, and they account for aspects of the utility function that are not captured by the semantic match (i.e. post-CTR down-stream utilities that reflect engagement: time spent reading, sharing, commenting, etc). We want to include these extraneous features into the recommendations, but we want to do so while satisfying the following requirements: 1) we do not want to drastically sacrifice the quality of the semantic match, and 2) we want to quantify exactly how the semantic match would be affected as we control the different aspects of the utility function. In this paper, we present an approach that satisfies these requirements.\n We frame our approach as a general constrained optimization problem and suggest ways in which it can be solved efficiently by drawing from recent research on optimizing non-smooth rank metrics for information retrieval. Our approach features the following characteristics: 1) it is model and feature agnostic, 2) it does not require additional labeled training data to be collected, and 3) it can be easily incorporated into an existing model as an additional stage in the computation pipeline. We validate our approach in a revenue-generating recommender system that ranks billions of candidate recommendations on a daily basis and show that a significant improvement in the utility of the recommender system can be achieved with an acceptable and predictable degradation in the semantic match quality of the recommendations.",
"title": ""
},
{
"docid": "422c0890804654613ea37fbf1186fda1",
"text": "Because of the distance between the skull and brain and their di erent resistivities, electroencephalographic (EEG) data collected from any point on the human scalp includes activity generated within a large brain area. This spatial smearing of EEG data by volume conduction does not involve signi cant time delays, however, suggesting that the Independent Component Analysis (ICA) algorithm of Bell and Sejnowski [1] is suitable for performing blind source separation on EEG data. The ICA algorithm separates the problem of source identi cation from that of source localization. First results of applying the ICA algorithm to EEG and event-related potential (ERP) data collected during a sustained auditory detection task show: (1) ICA training is insensitive to di erent random seeds. (2) ICA may be used to segregate obvious artifactual EEG components (line and muscle noise, eye movements) from other sources. (3) ICA is capable of isolating overlapping EEG phenomena, including alpha and theta bursts and spatially-separable ERP components, to separate ICA channels. (4) Nonstationarities in EEG and behavioral state can be tracked using ICA via changes in the amount of residual correlation between ICAltered output channels.",
"title": ""
},
{
"docid": "4622df5210b363fbbecc9653894f9734",
"text": "Light field photography has gained a significant research interest in the last two decades; today, commercial light field cameras are widely available. Nevertheless, most existing acquisition approaches either multiplex a low-resolution light field into a single 2D sensor image or require multiple photographs to be taken for acquiring a high-resolution light field. We propose a compressive light field camera architecture that allows for higher-resolution light fields to be recovered than previously possible from a single image. The proposed architecture comprises three key components: light field atoms as a sparse representation of natural light fields, an optical design that allows for capturing optimized 2D light field projections, and robust sparse reconstruction methods to recover a 4D light field from a single coded 2D projection. In addition, we demonstrate a variety of other applications for light field atoms and sparse coding, including 4D light field compression and denoising.",
"title": ""
},
{
"docid": "0d9cd7cbb37c410b1255f4f600c77c43",
"text": "We present a nonparametric Bayesian approach to inverse rei nforcement learning (IRL) for multiple reward functions. Most previous IRL algo rithms assume that the behaviour data is obtained from an agent who is optimizin g a single reward function, but this assumption is hard to guarantee in practi ce. Our approach is based on integrating the Dirichlet process mixture model in to Bayesian IRL. We provide an efficient Metropolis-Hastings sampling algorit hm utilizing the gradient of the posterior to estimate the underlying reward function s, and demonstrate that our approach outperforms previous ones via experiments on a number of problem domains.",
"title": ""
},
{
"docid": "4dd61fb075d18949fab229bea7b7ee5b",
"text": "1 Asst Professor IMS Engg College Ghaziabad, Uttar Pradesh, India 2 Professor , Institute of Management Technology, Ghaziabad, Uttar Pradesh, India _________________________________________________________________________________________ Abstract: In this paper we present an Adaptive approach for image segmentation using genetic algorithm, a natural evolutionary approach for optimisation problems. In addition, method of implementing genetic algorithm has also been reviewed with a summary of research work on Adaptive approach for image segmentation techniques. We have proposed an efficient color image segmentation using Adaptive approach along with genetic algorithm. The advantage of the proposed method lies in its utilisation of prior knowledge of the RGB image to segment the image efficiently. __________________________________________________________________________________________",
"title": ""
},
{
"docid": "70593bbda6c88f0ac10e26768d74b3cd",
"text": "Type 2 diabetes mellitus (T2DM) is a chronic disease that oen results in multiple complications. Risk prediction and proling of T2DM complications is critical for healthcare professionals to design personalized treatment plans for patients in diabetes care for improved outcomes. In this paper, we study the risk of developing complications aer the initial T2DM diagnosis from longitudinal patient records. We propose a novel multi-task learning approach to simultaneously model multiple complications where each task corresponds to the risk modeling of one complication. Specically, the proposed method strategically captures the relationships (1) between the risks of multiple T2DM complications, (2) between the dierent risk factors, and (3) between the risk factor selection paerns. e method uses coecient shrinkage to identify an informative subset of risk factors from high-dimensional data, and uses a hierarchical Bayesian framework to allow domain knowledge to be incorporated as priors. e proposed method is favorable for healthcare applications because in additional to improved prediction performance, relationships among the dierent risks and risk factors are also identied. Extensive experimental results on a large electronic medical claims database show that the proposed method outperforms state-of-the-art models by a signicant margin. Furthermore, we show that the risk associations learned and the risk factors identied lead to meaningful clinical insights. CCS CONCEPTS •Information systems→ Data mining; •Applied computing → Health informatics;",
"title": ""
},
{
"docid": "bd75a8e68bbace41cc316ab34976d555",
"text": "This letter proposes a small-size printed antenna with multiband WWAN/LTE operation in a mobile phone by introducing a novel loop parasitic shorted strip and a C-shaped ground plane. The obtained impedance bandwidths across LTE and WWAN operating bands approach 277 and 1176 MHz, respectively. The proposed printed monopole antenna reduces the antenna size by at least 22% since the overall antenna size is only 35 × 10 × 0.8 mm3. The measured peak gains and antenna efficiencies are approximately 2.2/3.1 dBi and 76%/82% for the LTE/WWAN bands, respectively.",
"title": ""
},
{
"docid": "8150f588c5eb3919d13f976fec58b736",
"text": "We study how to effectively leverage expert feedback to learn sequential decision-making policies. We focus on problems with sparse rewards and long time horizons, which typically pose significant challenges in reinforcement learning. We propose an algorithmic framework, called hierarchical guidance, that leverages the hierarchical structure of the underlying problem to integrate different modes of expert interaction. Our framework can incorporate different combinations of imitation learning (IL) and reinforcement learning (RL) at different levels, leading to dramatic reductions in both expert effort and cost of exploration. Using long-horizon benchmarks, including Montezuma’s Revenge, we demonstrate that our approach can learn significantly faster than hierarchical RL, and be significantly more label-efficient than standard IL. We also theoretically analyze labeling cost for certain instantiations of our framework.",
"title": ""
},
{
"docid": "a2b2607e4af771632912900d63999f40",
"text": "In this work, we propose a method for simultaneously learning features and a corresponding similarity metric for person re-identification. We present a deep convolutional architecture with layers specially designed to address the problem of re-identification. Given a pair of images as input, our network outputs a similarity value indicating whether the two input images depict the same person. Novel elements of our architecture include a layer that computes cross-input neighborhood differences, which capture local relationships between the two input images based on mid-level features from each input image. A high-level summary of the outputs of this layer is computed by a layer of patch summary features, which are then spatially integrated in subsequent layers. Our method significantly outperforms the state of the art on both a large data set (CUHK03) and a medium-sized data set (CUHK01), and is resistant to over-fitting. We also demonstrate that by initially training on an unrelated large data set before fine-tuning on a small target data set, our network can achieve results comparable to the state of the art even on a small data set (VIPeR).",
"title": ""
},
{
"docid": "b42c230ff1af8da8b8b4246bc9cb2bd8",
"text": "Patients have many restorative options for changing the appearance of their teeth. The most conservative restorative treatments for changing the appearance of teeth include tooth bleaching, direct composite resin veneers, and porcelain veneers. Patients seeking esthetic treatment should undergo a comprehensive clinical examination that includes an esthetic evaluation. When selecting a conservative treatment modality, the use of minimally invasive or no-preparation porcelain veneers should be considered. As with any treatment decision, the indications and contraindications must be considered before a definitive treatment plan is made. Long-term research has demonstrated a 94% survival rate for minimally invasive porcelain veneers. While conservation of tooth structure is important, so is selecting the right treatment modality for each patient based on clinical findings.",
"title": ""
},
{
"docid": "102a9eb7ba9f65a52c6983d74120430e",
"text": "A key aim of social psychology is to understand the psychological processes through which independent variables affect dependent variables in the social domain. This objective has given rise to statistical methods for mediation analysis. In mediation analysis, the significance of the relationship between the independent and dependent variables has been integral in theory testing, being used as a basis to determine (1) whether to proceed with analyses of mediation and (2) whether one or several proposed mediator(s) fully or partially accounts for an effect. Synthesizing past research and offering new arguments, we suggest that the collective evidence raises considerable concern that the focus on the significance between the independent and dependent variables, both before and after mediation tests, is unjustified and can impair theory development and testing. To expand theory involving social psychological processes, we argue that attention in mediation analysis should be shifted towards assessing the magnitude and significance of indirect effects. Understanding the psychological processes by which independent variables affect dependent variables in the social domain has long been of interest to social psychologists. Although moderation approaches can test competing psychological mechanisms (e.g., Petty, 2006; Spencer, Zanna, & Fong, 2005), mediation is typically the standard for testing theories regarding process (e.g., Baron & Kenny, 1986; James & Brett, 1984; Judd & Kenny, 1981; MacKinnon, 2008; MacKinnon, Lockwood, Hoffman, West, & Sheets, 2002; Muller, Judd, & Yzerbyt, 2005; Preacher & Hayes, 2004; Preacher, Rucker, & Hayes, 2007; Shrout & Bolger, 2002). For example, dual process models of persuasion (e.g., Petty & Cacioppo, 1986) often distinguish among competing accounts by measuring the postulated underlying process (e.g., thought favorability, thought confidence) and examining their viability as mediators (Tormala, Briñol, & Petty, 2007). Thus, deciding on appropriate requirements for mediation is vital to theory development. Supporting the high status of mediation analysis in our field, MacKinnon, Fairchild, and Fritz (2007) report that research in social psychology accounts for 34% of all mediation tests in psychology more generally. In our own analysis of journal articles published from 2005 to 2009, we found that approximately 59% of articles in the Journal of Personality and Social Psychology (JPSP) and 65% of articles in Personality and Social Psychology Bulletin (PSPB) included at least one mediation test. Consistent with the observations of MacKinnon et al., we found that the bulk of these analyses continue to follow the causal steps approach outlined by Baron and Kenny (1986). Social and Personality Psychology Compass 5/6 (2011): 359–371, 10.1111/j.1751-9004.2011.00355.x a 2011 The Authors Social and Personality Psychology Compass a 2011 Blackwell Publishing Ltd The current article examines the viability of the causal steps approach in which the significance of the relationship between an independent variable (X) and a dependent variable (Y) are tested both before and after controlling for a mediator (M) in order to examine the validity of a theory specifying mediation. Traditionally, the X fi Y relationship is tested prior to mediation to determine whether there is an effect to mediate, and it is also tested after introducing a potential mediator to determine whether that mediator fully or partially accounts for the effect. 
At first glance, the requirement of a significant X fi Y association prior to examining mediation seems reasonable. If there is no significant X fi Y relationship, how can there be any mediation of it? Furthermore, the requirement that X fi Y become nonsignificant when controlling for the mediator seems sensible in order to claim ‘full mediation’. What is the point of hypothesizing or testing for additional mediators if the inclusion of one mediator renders the initial relationship indistinguishable from zero? Despite the intuitive appeal of these requirements, the present article raises serious concerns about their use.",
"title": ""
},
{
"docid": "178ed5e35d48a8936a62b0d3a64f6af3",
"text": "We describe a congenital deformity of the foot which is characterised by calcaneus at the ankle and valgus at the subtalar joint; spontaneous improvement does not occur and serial casting results in incomplete or impermanent correction of the deformities. Experience with five feet in four children indicates that release of the ligaments and tendons anterior and lateral to the ankle and lateral to the subtalar joint is the minimum surgery necessary; subtalar arthrodesis may be required in addition. The foot deformity described may occur as an isolated condition or in association with multiple congenital anomalies. The possibility of a neurological deficit should always be excluded.",
"title": ""
},
{
"docid": "688dc1cc592e1fcd60445e640d8294d8",
"text": "Techniques for high dynamic range (HDR) imaging make it possible to capture and store an increased range of luminances and colors as compared to what can be achieved with a conventional camera. This high amount of image information can be used in a wide range of applications, such as HDR displays, image-based lighting, tone-mapping, computer vision, and post-processing operations. HDR imaging has been an important concept in research and development for many years. Within the last couple of years it has also reached the consumer market, e.g. with TV displays that are capable of reproducing an increased dynamic range and peak luminance. This thesis presents a set of technical contributions within the field of HDR imaging. First, the area of HDR video tone-mapping is thoroughly reviewed, evaluated and developed upon. A subjective comparison experiment of existing methods is performed, followed by the development of novel techniques that overcome many of the problems evidenced by the evaluation. Second, a largescale objective comparison is presented, which evaluates existing techniques that are involved in HDR video distribution. From the results, a first open-source HDR video codec solution, Luma HDRv, is built using the best performing techniques. Third, a machine learning method is proposed for the purpose of reconstructing an HDR image from one single-exposure low dynamic range (LDR) image. The method is trained on a large set of HDR images, using recent advances in deep learning, and the results increase the quality and performance significantly as compared to existing algorithms. The areas for which contributions are presented can be closely inter-linked in the HDR imaging pipeline. Here, the thesis work helps in promoting efficient and high-quality HDR video distribution and display, as well as robust HDR image reconstruction from a single conventional LDR image.",
"title": ""
},
{
"docid": "bcb756857adef42264eab0f1361f8be7",
"text": "The problem of multi-class boosting is considered. A new fra mework, based on multi-dimensional codewords and predictors is introduced . The optimal set of codewords is derived, and a margin enforcing loss proposed. The resulting risk is minimized by gradient descent on a multidimensional functi onal space. Two algorithms are proposed: 1) CD-MCBoost, based on coordinate des cent, updates one predictor component at a time, 2) GD-MCBoost, based on gradi ent descent, updates all components jointly. The algorithms differ in the w ak learners that they support but are both shown to be 1) Bayes consistent, 2) margi n enforcing, and 3) convergent to the global minimum of the risk. They also red uce to AdaBoost when there are only two classes. Experiments show that both m et ods outperform previous multiclass boosting approaches on a number of data sets.",
"title": ""
}
] |
scidocsrr
|
c7d397f24b726624c17260a9318f14a2
|
Automated Classification of Stance in Student Essays: An Approach Using Stance Target Information and the Wikipedia Link-Based Measure
|
[
{
"docid": "fc70a1820f838664b8b51b5adbb6b0db",
"text": "This paper presents a method for identifying an opinion with its holder and topic, given a sentence from online news media texts. We introduce an approach of exploiting the semantic structure of a sentence, anchored to an opinion bearing verb or adjective. This method uses semantic role labeling as an intermediate step to label an opinion holder and topic using data from FrameNet. We decompose our task into three phases: identifying an opinion-bearing word, labeling semantic roles related to the word in the sentence, and then finding the holder and the topic of the opinion word among the labeled semantic roles. For a broader coverage, we also employ a clustering technique to predict the most probable frame for a word which is not defined in FrameNet. Our experimental results show that our system performs significantly better than the baseline.",
"title": ""
},
{
"docid": "64b13ae694ec4c16cdbd59ceecec0915",
"text": "Determining the stance expressed by an author from a post written for a twosided debate in an online debate forum is a relatively new problem. We seek to improve Anand et al.’s (2011) approach to debate stance classification by modeling two types of soft extra-linguistic constraints on the stance labels of debate posts, user-interaction constraints and ideology constraints. Experimental results on four datasets demonstrate the effectiveness of these inter-post constraints in improving debate stance classification.",
"title": ""
}
] |
[
{
"docid": "61909a81470a9fea27a2f12aadb2c183",
"text": "One of the major research areas attracting much interest is face recognition. This is due to the growing need of detection and recognition in the modern days' industrial applications. However, this need is conditioned with the high performance standards that these applications require in terms of speed and accuracy. In this work we present a comparison between two main techniques of face recognition in unconstraint scenes. The first one is Edge-Orientation Matching and the second technique is Haar-like feature selection combined cascade classifiers.",
"title": ""
},
{
"docid": "564e3f6b8deb91ab6ba096ee2b8bd0a3",
"text": "A hybrid model for social media popularity prediction is proposed by combining Convolutional Neural Network (CNN) with XGBoost. The CNN model is exploited to learn high-level representations from the social cues of the data. These high-level representations are used in XGBoost to predict the popularity of the social posts. We evaluate our approach on a real-world Social Media Prediction (SMP) dataset, which consists of 432K Flickr images. The experimental results show that the proposed approach is effective, achieving the following performance: Spearman's Rho: 0.7406, MSE: 2.7293, MAE: 1.2475.",
"title": ""
},
{
"docid": "a6226d78ea975a5028ca2419fed44af0",
"text": "We demonstrate a protocol for proving strongly that a black-box machine learning technique robustly predicts the future in dynamic, indefinite contexts. We propose necessary components of the proof protocol and demonstrate results visualizations to support evaluation of the proof components. Components include contemporaneously verifiable discrete predictions, deterministic computability of longitudinal predictions, imposition of realistic costs and domain constraints, exposure to diverse contexts, statistically significant excess benefits relative to a priori benchmarks and Monte Carlo trials, insignificant decay of excess benefits, pathology detection and an extended real-time trial \"in the wild.\" We apply the protocol to a big data machine learning technique deployed since 2011 that finds persistent, exploitable opportunities in many of 41 segments of US financial markets, the existence of which opportunities substantially contradict the Efficient Market Hypothesis.",
"title": ""
},
{
"docid": "7e1712f9e2846862d072c902a84b2832",
"text": "Reinforcement learning is a computational approach to learn from interaction. However, learning from scratch using reinforcement learning requires exorbitant number of interactions with the environment even for simple tasks. One way to alleviate the problem is to reuse previously learned skills as done by humans. This thesis provides frameworks and algorithms to build and reuse Skill Library. Firstly, we extend the Parameterized Action Space formulation using our Skill Library to multi-goal setting and show improvements in learning using hindsight at coarse level. Secondly, we use our Skill Library for exploring at a coarser level to learn the optimal policy for continuous control. We demonstrate the benefits, in terms of speed and accuracy, of the proposed approaches for a set of real world complex robotic manipulation tasks in which some state-of-the-art methods completely fail.",
"title": ""
},
{
"docid": "71c4e6e63eaeec06b5e8690c1a915c81",
"text": "Measuring the similarity between words, sentences, paragraphs and documents is an important component in various tasks such as information retrieval, document clustering, word-sense disambiguation, automatic essay scoring, short answer grading, machine translation and text summarization. This survey discusses the existing works on text similarity through partitioning them into three approaches; String-based, Corpus-based and Knowledge-based similarities. Furthermore, samples of combination between these similarities are presented.",
"title": ""
},
{
"docid": "65e64a012a064603f65d02881d7d629b",
"text": "BACKGROUND\nThere is limited capacity to assess the comparative risks of medications after they enter the market. For rare adverse events, the pooling of data from multiple sources is necessary to have the power and sufficient population heterogeneity to detect differences in safety and effectiveness in genetic, ethnic and clinically defined subpopulations. However, combining datasets from different data custodians or jurisdictions to perform an analysis on the pooled data creates significant privacy concerns that would need to be addressed. Existing protocols for addressing these concerns can result in reduced analysis accuracy and can allow sensitive information to leak.\n\n\nOBJECTIVE\nTo develop a secure distributed multi-party computation protocol for logistic regression that provides strong privacy guarantees.\n\n\nMETHODS\nWe developed a secure distributed logistic regression protocol using a single analysis center with multiple sites providing data. A theoretical security analysis demonstrates that the protocol is robust to plausible collusion attacks and does not allow the parties to gain new information from the data that are exchanged among them. The computational performance and accuracy of the protocol were evaluated on simulated datasets.\n\n\nRESULTS\nThe computational performance scales linearly as the dataset sizes increase. The addition of sites results in an exponential growth in computation time. However, for up to five sites, the time is still short and would not affect practical applications. The model parameters are the same as the results on pooled raw data analyzed in SAS, demonstrating high model accuracy.\n\n\nCONCLUSION\nThe proposed protocol and prototype system would allow the development of logistic regression models in a secure manner without requiring the sharing of personal health information. This can alleviate one of the key barriers to the establishment of large-scale post-marketing surveillance programs. We extended the secure protocol to account for correlations among patients within sites through generalized estimating equations, and to accommodate other link functions by extending it to generalized linear models.",
"title": ""
},
{
"docid": "42af6ec7bc66a2ff9aa0d7bc90f9d76a",
"text": "In this paper, we propose a novel scene detection algorithm which employs semantic, visual, textual, and audio cues. We also show how the hierarchical decomposition of the storytelling video structure can improve retrieval results presentation with semantically and aesthetically effective thumbnails. Our method is built upon two advancements of the state of the art: first is semantic feature extraction which builds video-specific concept detectors; and second is multimodal feature embedding learning that maps the feature vector of a shot to a space in which the Euclidean distance has task specific semantic properties. The proposed method is able to decompose the video in annotated temporal segments which allow us for a query specific thumbnail extraction. Extensive experiments are performed on different data sets to demonstrate the effectiveness of our algorithm. An in-depth discussion on how to deal with the subjectivity of the task is conducted and a strategy to overcome the problem is suggested.",
"title": ""
},
{
"docid": "c6e29402f386e466254d99b677b9e18b",
"text": "A planar Yagi-Uda antenna with a single director, a meandered driven dipole, and a concave parabolic reflector on a thin dielectric substrate is proposed. Through this design, the high directivity of 7.3 dBi, front-to-back ratio of 14.7 dB, cross-polarization level of −39.1 dB, bandwidth of 5.8%, and the radiation efficiency of 87.5%, which is better than −1 dBi in terms of the 3D average gain, can be achieved. Besides, the area of this antenna is much smaller than that of the previously proposed one by about 78%. Therefore, the proposed antenna is suitable for the GPS (Global Positioning System) application in mobile devices whose volumes are usually not sufficient for embedded antennas with a good RHCP (right hand circular polarization) AR (axial ratio) values and not enough angular coverage of the designed AR values.",
"title": ""
},
{
"docid": "77c7f144c63df9022434313cfe2e5290",
"text": "Today the prevalence of online banking is enormous. People prefer to accomplish their financial transactions through the online banking services offered by their banks. This method of accessing is more convenient, quicker and secured. Banks are also encouraging their customers to opt for this mode of e-banking facilities since that result in cost savings for the banks and there is better customer satisfaction. An important aspect of online banking is the precise authentication of users before allowing them to access their accounts. Typically this is done by asking the customers to enter their unique login id and password combination. The success of this authentication relies on the ability of customers to maintain the secrecy of their passwords. Since the customer login to the banking portals normally occur in public environments, the passwords are prone to key logging attacks. To avoid this, virtual keyboards are provided. But virtual keyboards are vulnerable to shoulder surfing based attacks. In this paper, a secured virtual keyboard scheme that withstands such attacks is proposed. Elaborate user studies carried out on the proposed scheme have testified the security and the usability of the proposed approach.",
"title": ""
},
{
"docid": "e2175b85f438342a84453b5ad36ab4c5",
"text": "This paper presents a systematic analysis of integrated 3-level buck converters under both ideal and real conditions as a guidance for designing robust and fast 3-level buck converters. Under ideal conditions, the voltage conversion ratio, the output voltage ripple and, in particular, the system's loop-gain function are derived. Design considerations for real circuitry implementations of an integrated 3-level converter, such as the implementation of the flying capacitor, the impacts of the parasitic capacitors of the flying capacitor and the 4 power switches, and the time mismatch between the 2 duty-cycle signals are thoroughly discussed. Under these conditions, the voltage conversion ratio, the voltage across the flying capacitor and the power efficiency are analyzed and verified with Cadence simulation results. The loop-gain function of an integrated 3-level buck converter with parasitic capacitors and time mismatch is derived with the state-space averaging method. The derived loop-gain functions are verified with time-domain small signal injection simulation and measurement, with a good match between the analytical and experimental results.",
"title": ""
},
{
"docid": "dc37599e63b1cde2b1a7661fb8866305",
"text": "The development of new functional foods requires technologies for incorporating health-promoting ingredients into food without reducing their bioavailability or functionality. In many cases, microencapsulation can provide the necessary protection for these compounds, but in all cases bioavailability should be carefully studied. The present paper gives an overview of the application of various microencapsulation technologies to nutritionally-important compounds, i.e. vitamins, n-3 polyunsaturated fatty acids, Ca, Fe and antioxidants. It also gives a view on future technologies and trends in microencapsulation technology for nutritional applications.",
"title": ""
},
{
"docid": "ce7f1295fec9a9845ef87bbee5eef219",
"text": "Sentiment Analysis (SA) is a major field of study in natural language processing, computational linguistics and information retrieval. Interest in SA has been constantly growing in both academia and industry over the recent years. Moreover, there is an increasing need for generating appropriate resources and datasets in particular for low resource languages including Persian. These datasets play an important role in designing and developing appropriate opinion mining platforms using supervised, semi-supervised or unsupervised methods. In this paper, we outline the entire process of developing a manually annotated sentiment corpus, SentiPers, which covers formal and informal written contemporary Persian. To the best of our knowledge, SentiPers is a unique sentiment corpus with such a rich annotation in three different levels including document-level, sentence-level, and entity/aspect-level for Persian. The corpus contains more than 26,000 sentences of users’ opinions from digital product domain and benefits from special characteristics such as quantifying the positiveness or negativity of an opinion through assigning a number within a specific range to any given sentence. Furthermore, we present statistics on various components of our corpus as well as studying the inter-annotator agreement among the annotators. Finally, some of the challenges that we faced during the annotation process will be discussed as well.",
"title": ""
},
{
"docid": "118f6ab5b61a334ff8a23f5c139c110c",
"text": "Many tasks in the biomedical domain require the assignment of one or more predefined labels to input text, where the labels are a part of a hierarchical structure (such as a taxonomy). The conventional approach is to use a one-vs.-rest (OVR) classification setup, where a binary classifier is trained for each label in the taxonomy or ontology where all instances not belonging to the class are considered negative examples. The main drawbacks to this approach are that dependencies between classes are not leveraged in the training and classification process, and the additional computational cost of training parallel classifiers. In this paper, we apply a new method for hierarchical multi-label text classification that initializes a neural network model final hidden layer such that it leverages label co-occurrence relations such as hypernymy. This approach elegantly lends itself to hierarchical classification. We evaluated this approach using two hierarchical multi-label text classification tasks in the biomedical domain using both sentenceand document-level classification. Our evaluation shows promising results for this approach.",
"title": ""
},
{
"docid": "3df496326783d1bfe4854df3edb2dbd6",
"text": "Based on the expectancy disconfirmation theory, this study proposes a decomposed technology acceptance model in the context of an e-learning service. In the proposed model, the perceived performance component is decomposed into perceived quality and perceived usability. A sample of 172 respondents took part in this study. The results suggest that users’ continuance intention is determined by satisfaction, which in turn is jointly determined by perceived usefulness, information quality, confirmation, service quality, system quality, perceived ease of use and cognitive absorption. r 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "947f6f1f2b5cbd646bfa9426cdfda7fe",
"text": "In many real-world learning tasks it is expensive to acquire a su cient number of labeled examples for training. This paper investigates methods for reducing annotation cost by sample selection. In this approach, during training the learning program examines many unlabeled examples and selects for labeling only those that are most informative at each stage. This avoids redundantly labeling examples that contribute little new information. Our work follows on previous research on Query By Committee, and extends the committee-based paradigm to the context of probabilistic classi cation. We describe a family of empirical methods for committee-based sample selection in probabilistic classi cation models, which evaluate the informativeness of an example by measuring the degree of disagreement between several model variants. These variants (the committee) are drawn randomly from a probability distribution conditioned by the training set labeled so far. The method was applied to the real-world natural language processing task of stochastic part-of-speech tagging. We nd that all variants of the method achieve a signi cant reduction in annotation cost, although their computational e ciency di ers. In particular, the simplest variant, a two member committee with no parameters to tune, gives excellent results. We also show that sample selection yields a signi cant reduction in the size of the model used by the tagger.",
"title": ""
},
{
"docid": "7804d1c4ec379ed47d45917786946b2f",
"text": "Data mining technology has been applied to library management. In this paper, Boustead College Library Information Management System in the history of circulation records, the reader information and collections as a data source, using the Microsoft SQL Server 2005 as a data mining tool, applying data mining algorithm as cluster, association rules and time series to identify characteristics of the reader to borrow in order to achieve individual service.",
"title": ""
},
{
"docid": "e03640352c1b0074a0bdd21cafbda61e",
"text": "The problem of finding an automatic thresholding technique is well known in applications involving image differencing like visual-based surveillance systems, autonomous vehicle driving, etc. Among the algorithms proposed in the past years, the thresholding technique based on the stable Euler number method is considered one of the most promising in terms of visual results. Unfortunately its high computational complexity made it an impossible choice for real-time applications. The implementation here proposed, called fast Euler numbers, overcomes the problem since it calculates all the Euler numbers in just one single raster scan of the image. That is, it runs in OðhwÞ, where h and w are the image s height and width, respectively. A technique for determining the optimal threshold, called zero crossing, is also proposed. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "dc76c7e939d26a6a81a8eb891b5824b7",
"text": "While deeper and wider neural networks are actively pushing the performance limits of various computer vision and machine learning tasks, they often require large sets of labeled data for effective training and suffer from extremely high computational complexity. In this paper, we will develop a new framework for training deep neural networks on datasets with limited labeled samples using cross-network knowledge projection which is able to improve the network performance while reducing the overall computational complexity significantly. Specifically, a large pre-trained teacher network is used to observe samples from the training data. A projection matrix is learned to project this teacher-level knowledge and its visual representations from an intermediate layer of the teacher network to an intermediate layer of a thinner and faster student network to guide and regulate its training process. Both the intermediate layers from the teacher network and the injection layers from the student network are adaptively selected during training by evaluating a joint loss function in an iterative manner. This knowledge projection framework allows us to use crucial knowledge learned by large networks to guide the training of thinner student networks, avoiding over-fitting, achieving better network performance, and significantly reducing the complexity. Extensive experimental results on benchmark datasets have demonstrated that our proposed knowledge projection approach outperforms existing methods, improving accuracy by up to 4% while reducing network complexity by 4 to 10 times, which is very attractive for practical applications of deep neural networks.",
"title": ""
},
{
"docid": "e2589af8d7cb0958ed9225d58be895df",
"text": "The crowding of wireless band has necessitated the development of multiband and wideband wireless antennas. Because of the self similar characteristics, fractal concepts have emerged as a design methodology for compact multiband antennas. A Koch-like fractal curve is proposed to transform ultra-wideband (UWB) bow-tie into so called Koch-like sided fractal bow-tie dipole. A small isosceles triangle is cut off from center of each side of the initial isosceles triangle, then the procedure iterates along the sides like Koch curve does, forming the Koch-like fractal bow-tie geometry, used for multiband applications. ADS software is used to design the proposed antennna. It has covers the applications like GSM, wireless band and other wireless communications. Keywords—fractal; koch curve; bow tie antenna; ADS(Advanced Design System);",
"title": ""
},
{
"docid": "61e75fb597438712098c2b6d4b948558",
"text": "Impact of occupational stress on employee performance has been recognized as an important area of concern for organizations. Negative stress affects the physical and mental health of the employees that in turn affects their performance on job. Research into the relationship between stress and job performance has been neglected in the occupational stress literature (Jex, 1998). It is therefore significant to understand different Occupational Stress Inducers (OSI) on one hand and their impact on different aspects of job performance on the other. This article reviews the available literature to understand the phenomenon so as to develop appropriate stress management strategies to not only save the employees from variety of health problems but to improve their performance and the performance of the organization. 35 Occupational Stress Inducers (OSI) were identified through a comprehensive review of articles and reports published in the literature of management and allied disciplines between 1990 and 2014. A conceptual model is proposed towards the end to study the impact of stress on employee job performance. The possible data analysis techniques are also suggested providing direction for future research.",
"title": ""
}
] |
scidocsrr
|
999ec8eb20ddcc3014dd43bf2811bb74
|
Sentiment analysis using Support Vector Machine
|
[
{
"docid": "e4c493697d9bece8daec6b2dd583e6bb",
"text": "High dimensionality of the feature space is one of the most important concerns in text classification problems due to processing time and accuracy considerations. Selection of distinctive features is therefore essential for text classification. This study proposes a novel filter based probabilistic feature selection method, namely distinguishing feature selector (DFS), for text classification. The proposed method is compared with well-known filter approaches including chi square, information gain, Gini index and deviation from Poisson distribution. The comparison is carried out for different datasets, classification algorithms, and success measures. Experimental results explicitly indicate that DFS offers a competitive performance with respect to the abovementioned approaches in terms of classification accuracy, dimension reduction rate and processing time.",
"title": ""
},
{
"docid": "008ad9d12f1a8451f46be59eeef5bf0b",
"text": "0957-4174/$ see front matter 2011 Elsevier Ltd. A doi:10.1016/j.eswa.2011.05.070 ⇑ Corresponding author. Tel.: +34 953 212898; fax: E-mail address: msaleh@ujaen.es (M. Rushdi Saleh 1 http://www.amazon.com. 2 http://www.epinions.com. 3 http://www.imdb.com. Recently, opinion mining is receiving more attention due to the abundance of forums, blogs, e-commerce web sites, news reports and additional web sources where people tend to express their opinions. Opinion mining is the task of identifying whether the opinion expressed in a document is positive or negative about a given topic. In this paper we explore this new research area applying Support Vector Machines (SVM) for testing different domains of data sets and using several weighting schemes. We have accomplished experiments with different features on three corpora. Two of them have already been used in several works. The last one has been built from Amazon.com specifically for this paper in order to prove the feasibility of the SVM for different domains. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "80faeaceefd3851b51feef2e50694ef7",
"text": "The sentiment detection of texts has been witnessed a booming interest in recent years, due to the increased availability of online reviews in digital form and the ensuing need to organize them. Till to now, there are mainly four different problems predominating in this research community, namely, subjectivity classification, word sentiment classification, document sentiment classification and opinion extraction. In fact, there are inherent relations between them. Subjectivity classification can prevent the sentiment classifier from considering irrelevant or even potentially misleading text. Document sentiment classification and opinion extraction have often involved word sentiment classification techniques. This survey discusses related issues and main approaches to these problems. 2009 Published by Elsevier Ltd.",
"title": ""
}
] |
[
{
"docid": "0575675618e2f2325b8e398a26291611",
"text": "We address the problem of temporal action localization in videos. We pose action localization as a structured prediction over arbitrary-length temporal windows, where each window is scored as the sum of frame-wise classification scores. Additionally, our model classifies the start, middle, and end of each action as separate components, allowing our system to explicitly model each actions temporal evolution and take advantage of informative temporal dependencies present in that structure. In this framework, we localize actions by searching for the structured maximal sum, a problem for which we develop a novel, provably-efficient algorithmic solution. The frame-wise classification scores are computed using features from a deep Convolutional Neural Network (CNN), which are trained end-to-end to directly optimize for a novel structured objective. We evaluate our system on the THUMOS 14 action detection benchmark and achieve competitive performance.",
"title": ""
},
{
"docid": "b156acf3a04c8edd6e58c859009374d6",
"text": "Linked Open Data has been recognized as a valuable source for background information in data mining. However, most data mining tools require features in propositional form, i.e., a vector of nominal or numerical features associated with an instance, while Linked Open Data sources are graphs by nature. In this paper, we present RDF2Vec, an approach that uses language modeling approaches for unsupervised feature extraction from sequences of words, and adapts them to RDF graphs. We generate sequences by leveraging local information from graph substructures, harvested by Weisfeiler-Lehman Subtree RDF Graph Kernels and graph walks, and learn latent numerical representations of entities in RDF graphs. Our evaluation shows that such vector representations outperform existing techniques for the propositionalization of RDF graphs on a variety of different predictive machine learning tasks, and that feature vector representations of general knowledge graphs such as DBpedia and Wikidata can be easily reused for different tasks.",
"title": ""
},
{
"docid": "86069ba30042606be2a50780b81ce5d8",
"text": "This article experimentally investigates the potential of using flexible, inductance-based contraction sensors in the closed-loop motion control of soft robots. Accurate motion control remains a highly challenging task for soft robotic systems. Precise models of the actuation dynamics and environmental interactions are often unavailable. This renders open-loop control impossible, while closed-loop control suffers from a lack of suitable feedback. Conventional motion sensors, such as linear or rotary encoders, are difficult to adapt to robots that lack discrete mechanical joints. The rigid nature of these sensors runs contrary to the aspirational benefits of soft systems. As truly soft sensor solutions are still in their infancy, motion control of soft robots has so far relied on laboratory-based sensing systems such as motion capture, electromagnetic (EM) tracking, or Fiber Bragg Gratings. In this article, we used embedded flexible sensors known as Smart Braids to sense the contraction of McKibben muscles through changes in inductance. We evaluated closed-loop control on two systems: a revolute joint and a planar, one degree of freedom continuum manipulator. In the revolute joint, our proposed controller compensated for elasticity in the actuator connections. The Smart Braid feedback allowed motion control with a steady-state root-mean-square (RMS) error of [1.5]°. In the continuum manipulator, Smart Braid feedback enabled tracking of the desired tip angle with a steady-state RMS error of [1.25]°. This work demonstrates that Smart Braid sensors can provide accurate position feedback in closed-loop motion control suitable for field applications of soft robotic systems.",
"title": ""
},
{
"docid": "dfa5334f77bba5b1eeb42390fed1bca3",
"text": "Personality was studied as a conditioner of the effects of stressful life events on illness onset. Two groups of middle and upper level executives had comparably high degrees of stressful life events in the previous 3 years, as measured by the Holmes and Rahe Schedule of Recent Life Events. One group (n = 86) suffered high stress without falling ill, whereas the other (n = 75) reported becoming sick after their encounter with stressful life events. Illness was measured by the Wyler, Masuda, and Holmes Seriousness of Illness Survey. Discriminant function analysis, run on half of the subjects in each group and cross-validated on the remaining cases, supported the prediction that high stress/low illness executives show, by comparison with high stress/high illness executives, more hardiness, that is, have a stronger commitment to self, an attitude of vigorousness toward the environment, a sense of meaningfulness, and an internal locus of control.",
"title": ""
},
{
"docid": "3d11d784839910fdc1d2093db3d7c762",
"text": "This paper presents a detailed investigation of the influence of pin gap size on the S-parameters of the 1.85 mm connector. In contrast to earlier publications connector geometry is simulated with all chamfers, gaps and contact fingers. Simulation results are verified by cross-checking between finite element frequency domain and finite difference time domain methods. Based on reliable simulation results, a very fast tool was developed to calculate S-parameters for a given connector geometry. This was done using database and interpolation techniques. The most important result is that very small pin gaps in conjunction with large chamfers have a drastic impact on connector S-parameters for frequencies above 50 GHz.",
"title": ""
},
{
"docid": "035f780309fc777ece17cbfe4aabc01b",
"text": "The phenolic composition and antibacterial and antioxidant activities of the green alga Ulva rigida collected monthly for 12 months were investigated. Significant differences in antibacterial activity were observed during the year with the highest inhibitory effect in samples collected during spring and summer. The highest free radical scavenging activity and phenolic content were detected in U. rigida extracts collected in late winter (February) and early spring (March). The investigation of the biological properties of U. rigida fractions collected in spring (April) revealed strong antimicrobial and antioxidant activities. Ethyl acetate and n-hexane fractions exhibited substantial acetylcholinesterase inhibitory capacity with EC50 of 6.08 and 7.6 μg mL−1, respectively. The total lipid, protein, ash, and individual fatty acid contents of U. rigida were investigated. The four most abundant fatty acids were palmitic, oleic, linolenic, and eicosenoic acids.",
"title": ""
},
{
"docid": "749785a3973c6d2d760cbfbe6f1dbdac",
"text": "Research on spreadsheet errors is substantial, compelling, and unanimous. It has three simple conclusions. The first is that spreadsheet errors are rare on a per-cell basis, but in large programs, at least one incorrect bottom-line value is very likely to be present. The second is that errors are extremely difficult to detect and correct. The third is that spreadsheet developers and corporations are highly overconfident in the accuracy of their spreadsheets. The disconnect between the first two conclusions and the third appears to be due to the way human cognition works. Most importantly, we are aware of very few of the errors we make. In addition, while we are proudly aware of errors that we fix, we have no idea of how many remain, but like Little Jack Horner we are impressed with our ability to ferret out errors. This paper reviews human cognition processes and shows first that humans cannot be error free no matter how hard they try, and second that our intuition about errors and how we can reduce them is based on appallingly bad knowledge. This paper argues that we should reject any prescription for reducing errors that has not been rigorously proven safe and effective. This paper also argues that our biggest need, based on empirical data, is to do massively more testing than we do now. It suggests that the code inspection methodology developed in software development is likely to apply very well to spreadsheet inspection.",
"title": ""
},
{
"docid": "4cc8b430fc70931a21015c800936001d",
"text": "Nowadays, there is a significant increase in the number of Bioinformatics tools and databases. Researchers from various interdisciplinary fields need to use these tools. Usability is an important quality of software in general, and bioinformatics tools in particular. Improving the usability of bioinformatics tools allows users to use the tool to its fullest potential. In this paper, we evaluate the usability of two online bioinformatics tools Ori-Finder 1 and Ori-Finder 2 in terms of efficiency, effectiveness, and satisfaction. The evaluation focuses on investigating how easily and successfully can users use Ori-Finder1 and Ori-Finder 2 to find the origin of replication in Bacterial and Archaeal genomes. To the best of our knowledge, the usability of these two tools has not been studied before. Twelve participants were recruited from four user groups. The average tasks completion times were compared. Many usability issues were identified by users of bioinformatics tools. Based on our results, we list recommendations for better design of bioinformatics tools.",
"title": ""
},
{
"docid": "83e50a2c76217f60057d8bf680a12b92",
"text": "[1] Luo, Z. X., Zhou, X. C., David XianFeng, G. U. (2014). From a projective invariant to some new properties of algebraic hypersurfaces.Science China Mathematics, 57(11), 2273-2284. [2] Fan, B., Wu, F., Hu, Z. (2010). Line matching leveraged by point correspondences. IEEE Conference on Computer Vision & Pattern Recognition (Vol.238, pp.390-397). [3] Fan, B., Wu, F., & Hu, Z. (2012). Robust line matching through line–point invariants. Pattern Recognition, 45(2), 794-805. [4] López, J., Santos, R., Fdez-Vidal, X. R., & Pardo, X. M. (2015). Two-view line matching algorithm based on context and appearance in low-textured images. Pattern Recognition, 48(7), 2164-2184. Dalian University of Technology Qi Jia, Xinkai Gao, Xin Fan*, Zhongxuan Luo, Haojie Li,and Ziyao Chen Novel Coplanar Line-points Invariants for Robust Line Matching Across Views",
"title": ""
},
{
"docid": "f693b26866ca8eb2a893dead7aa0fb21",
"text": "This paper deals with response signals processing in eddy current non-destructive testing. Non-sinusoidal excitation is utilized to drive eddy currents in a conductive specimen. The response signals due to a notch with variable depth are calculated by numerical means. The signals are processed in order to evaluate the depth of the notch. Wavelet transformation is used for this purpose. Obtained results are presented and discussed in this paper. Streszczenie. Praca dotyczy sygnałów wzbudzanych przy nieniszczącym testowaniu za pomocą prądów wirowych. Przy pomocy symulacji numerycznych wyznaczono sygnały odpowiedzi dla niesinusoidalnych sygnałów wzbudzających i defektów o różnej głębokości. Celem symulacji jest wyznaczenie zależności pozwalającej wyznaczyć głębokość defektu w zależności od odbieranego sygnału. W artykule omówiono wykorzystanie do tego celu transformaty falkowej. (Analiza falkowa impulsowych prądów wirowych)",
"title": ""
},
{
"docid": "64658ebc4a86b0ff8fce6de0f6e1f32f",
"text": "A huge upheaval emerges from the transition to autonomous vehicles in the domain of road vehicles, ongoing with a change in the vehicle architecture. Many sensors and Electronic Control Units are added to the current vehicle architecture and further safety requirements like reliability become even more necessary. In this paper we present a potential evolution of the Electrical/Electronic-Architecture, including a Zone Architecture, to enable future functionality. We reveal the impact on the communication network concerning these architectures and present a potential communication technology to facilitate such architectures.",
"title": ""
},
{
"docid": "63da0b3d1bc7d6aedd5356b8cdf67b24",
"text": "This paper concentrated on a new application of Deep Neural Network (DNN) approach. The DNN, also widely known as Deep Learning(DL), has been the most popular topic in research community recently. Through the DNN, the original data set can be represented in a new feature space with machine learning algorithms, and intelligence models may have the chance to obtain a better performance in the “learned” feature space. Scientists have achieved encouraging results by employing DNN in some research fields, including Computer Vision, Speech Recognition, Natural Linguistic Programming and Bioinformation Processing. However, as an approach mainly functioned for learning features, DNN is reasonably believed to be a more universal approach: it may have the potential in other data domains and provide better feature spaces for other type of problems. In this paper, we present some initial investigations on applying DNN to deal with the time series problem in meteorology field. In our research, we apply DNN to process the massive weather data involving millions of atmosphere records provided by The Hong Kong Observatory (HKO)1. The obtained features are employed to predict the weather change in the next 24 hours. The results show that the DNN is able to provide a better feature space for weather data sets, and DNN is also a potential tool for the feature fusion of time series problems.",
"title": ""
},
{
"docid": "59beebc51416063d00f6a1ff8032feff",
"text": "In movies, film stars portray another identity or obfuscate their identity with the help of silicone/latex masks. Such realistic masks are now easily available and are used for entertainment purposes. However, their usage in criminal activities to deceive law enforcement and automatic face recognition systems is also plausible. Therefore, it is important to guard biometrics systems against such realistic presentation attacks. This paper introduces the first-of-its-kind silicone mask attack database which contains 130 real and attacked videos to facilitate research in developing presentation attack detection algorithms for this challenging scenario. Along with silicone mask, there are several other presentation attack instruments that are explored in literature. The next contribution of this research is a novel multilevel deep dictionary learning-based presentation attack detection algorithm that can discern different kinds of attacks. An efficient greedy layer by layer training approach is formulated to learn the deep dictionaries followed by SVM to classify an input sample as genuine or attacked. Experimental are performed on the proposed SMAD database, some samples with real world silicone mask attacks, and four existing presentation attack databases, namely, replay-attack, CASIA-FASD, 3DMAD, and UVAD. The results show that the proposed algorithm yields better performance compared with state-of-the-art algorithms, in both intra-database and cross-database experiments.",
"title": ""
},
{
"docid": "91eda0e2f9ef0e2ed87c5135c0061dfd",
"text": "We detail the design, implementation, and an initial evaluation of a virtual reality education and entertainment (edutainment) application called Virtual Environment Interactions (VEnvI). VEnvI is an application in which students learn computer science concepts through the process of choreographing movement for a virtual character using a fun and intuitive interface. In this exploratory study, 54 participants as part of a summer camp geared towards promoting participation of women in science and engineering programmatically crafted a dance performance for a virtual human. A subset of those students participated in an immersive embodied interaction metaphor in VEnvI. In creating this metaphor that provides first-person, embodied experiences using self-avatars, we seek to facilitate engagement, excitement and interest in computational thinking. We qualitatively and quantitatively evaluated the extent to which the activities of the summer camp, programming the dance moves, and the embodied interaction within VEnvI facilitated students' edutainment, presence, interest, excitement, and engagement in computing, and the potential to alter their perceptions of computing and computer scientists. Results indicate that students enjoyed the experience and successfully engaged the virtual character in the immersive embodied interaction, thus exhibiting high telepresence and social presence. Students also showed increased interest and excitement regarding the computing field at the end of their summer camp experience using VEnvI.",
"title": ""
},
{
"docid": "2ea99ae4dd94095e7f758353d35839ca",
"text": "An increasing number of companies rely on distributed data storage and processing over large clusters of commodity machines for critical business decisions. Although plain MapReduce systems provide several benefits, they carry certain limitations that impact developer productivity and optimization opportunities. Higher level programming languages plus conceptual data models have recently emerged to address such limitations. These languages offer a single machine programming abstraction and are able to perform sophisticated query optimization and apply efficient execution strategies. In massively distributed computation, data shuffling is typically the most expensive operation and can lead to serious performance bottlenecks if not done properly. An important optimization opportunity in this environment is that of judicious placement of repartitioning operators and choice of alternative implementations. In this paper we discuss advanced partitioning strategies, their implementation, and how they are integrated in the Microsoft Scope system. We show experimentally that our approach significantly improves performance for a large class of real-world jobs.",
"title": ""
},
{
"docid": "1033cea5d964b55bda87492574124ce8",
"text": "Local Rigidity Test Problem. Dense 3D reconstruction of a dynamic foreground subject from a pair of unsynchronized videos with unknown temporal overlap. Challenges: 1. How to identify temporal overlap in terms of estimated dynamic geometry . 2. How to robustly estimate geometry without knowledge of temporal overlap. Key Ideas: 1. Define the cardinality of the maximal set of locally rigid feature tracks as a measure of spatio-temporal consistency of a pair of video sub-sequences. 2. Develop a closed-loop track correspondence refinement process to find the maximal set of rigid tracks. Contributions: 1. We exploit the correlation between temporal alignment errors and geometric estimation errors. 2. We provide a joint solution to the geometry estimation and temporal the video alignment problems. 3. Model-free (i.e. data-driven) framework with wide applicability.",
"title": ""
},
{
"docid": "12c7c925591f32300528d7b2ef094bdb",
"text": "Anxiety is a complex disorder; thus, its mechanisms remain unclear. Zebrafish (Danio rerio) are a promising pharmacological model for anxiety research. Light/dark preference test is a behaviorally validated measure of anxiety in zebrafish; however, it requires pharmacological validation. We sought to evaluate the sensitivity of the light/dark preference test in adult zebrafish by immersing them in drug solutions containing clonazepam, buspirone, imipramine, fluoxetine, paroxetine, haloperidol, risperidone, propranolol, or ethanol. The time spent in the dark environment, the latency time to first crossing, and the number of midline crossings were analyzed. Intermediate concentrations of clonazepam administered for 600s decreased the time spent in the dark and increased locomotor activity. Buspirone reduced motor activity. Imipramine and fluoxetine increased time spent in the dark and the first latency, and decreased the number of alternations. Paroxetine did not alter the time in the dark; however, it increased the first latency time and decreased locomotor activity. Haloperidol decreased the time spent in the dark at low concentrations. Risperidone and propranolol did not change any parameters. Ethanol reduced the time spent in the dark and increased the number of crossings at intermediate concentrations. These results corroborate the previous work using intraperitoneal drug administration in zebrafish and rodents, suggesting that water drug delivery in zebrafish can effectively be used as an animal anxiety model.",
"title": ""
},
{
"docid": "8e19c3513be332705f4e2bf5a8aa4429",
"text": "The introduction of crowdsourcing offers numerous business opportunities. In recent years, manifold forms of crowdsourcing have emerged on the market -- also in logistics. Thereby, the ubiquitous availability and sensor-supported assistance functions of mobile devices support crowdsourcing applications, which promotes contextual interactions between users at the right place at the right time. This paper presents the results of an in-depth-analysis on crowdsourcing in logistics in the course of ongoing research in the field of location-based crowdsourcing (LBCS). This paper analyzes LBCS for both, 'classic' logistics as well as 'information' logistics. Real-world examples of crowdsourcing applications are used to underpin the two evaluated types of logistics using crowdsourcing. Potential advantages and challenges of logistics with the crowd ('crowd-logistics') are discussed. Accordingly, this paper aims to provide the necessary basis for a novel interdisciplinary research field.",
"title": ""
},
{
"docid": "1c0a3ecb1b794b9112f00dca13793727",
"text": "We present a mechanism for separating cells based on size and deformability using microfluidic ratchets created using micrometer-scale funnel constrictions. The force required to deform individual cells through such constrictions is directionally asymmetric, enabling rectified transport from oscillatory flow of the bulk fluid. Combining ratcheting with simple filtration enables cell separation based on size and deformability. Based on this concept, we developed a microfluidic device using a 2D matrix of funnel constrictions. We demonstrate highly selective separation of two cell types while retaining viability, study the effect of oscillation flow pressure, and confirm the irreversible nature of the ratcheting process.",
"title": ""
},
{
"docid": "c467edcb0c490034776ba2dc2cde9d9e",
"text": "BACKGROUND\nPostoperative complications of blepharoplasty range from cutaneous changes to vision-threatening emergencies. Some of these can be prevented with careful preoperative evaluation and surgical technique. When complications arise, their significance can be diminished by appropriate management. This article addresses blepharoplasty complications based on the typical postoperative timeframe when they are encountered.\n\n\nMETHODS\nThe authors conducted a review article of major blepharoplasty complications and their treatment.\n\n\nRESULTS\nComplications within the first postoperative week include corneal abrasions and vision-threatening retrobulbar hemorrhage; the intermediate period (weeks 1 through 6) addresses upper and lower eyelid malpositions, strabismus, corneal exposure, and epiphora; and late complications (>6 weeks) include changes in eyelid height and contour along with asymmetries, scarring, and persistent edema.\n\n\nCONCLUSIONS\nA thorough knowledge of potential complications of blepharoplasty surgery is necessary for the practicing aesthetic surgeon. Within this article, current concepts and relevant treatment strategies are reviewed with the use of the most recent and/or appropriate peer-reviewed literature available.",
"title": ""
}
] |
scidocsrr
|
951c4f3ab88c5f86146d626989b6b688
|
Community Evolution in Temporal Networks
|
[
{
"docid": "25ccaa5a71d0a3f46296c59328e0b9b5",
"text": "Real-world social networks from a variety of domains can naturally be modelled as dynamic graphs. However, approaches to detecting communities have largely focused on identifying communities in static graphs. Recently, researchers have begun to consider the problem of tracking the evolution of groups of users in dynamic scenarios. Here we describe a model for tracking the progress of communities over time in a dynamic network, where each community is characterised by a series of significant evolutionary events. This model is used to motivate a community-matching strategy for efficiently identifying and tracking dynamic communities. Evaluations on synthetic graphs containing embedded events demonstrate that this strategy can successfully track communities over time in volatile networks. In addition, we describe experiments exploring the dynamic communities detected in a real mobile operator network containing millions of users.",
"title": ""
},
{
"docid": "81cb6b35dcf083fea3973f4ee75a9006",
"text": "We propose frameworks and algorithms for identifying communities in social networks that change over time. Communities are intuitively characterized as \"unusually densely knit\" subsets of a social network. This notion becomes more problematic if the social interactions change over time. Aggregating social networks over time can radically misrepresent the existing and changing community structure. Instead, we propose an optimization-based approach for modeling dynamic community structure. We prove that finding the most explanatory community structure is NP-hard and APX-hard, and propose algorithms based on dynamic programming, exhaustive search, maximum matching, and greedy heuristics. We demonstrate empirically that the heuristics trace developments of community structure accurately for several synthetic and real-world examples.",
"title": ""
},
{
"docid": "672fa729e41d20bdd396f9de4ead36b3",
"text": "Data that encompasses relationships is represented by a graph of interconnected nodes. Social network analysis is the study of such graphs which examines questions related to structures and patterns that can lead to the understanding of the data and predicting the trends of social networks. Static analysis, where the time of interaction is not considered (i.e., the network is frozen in time), misses the opportunity to capture the evolutionary patterns in dynamic networks. Specifically, detecting the community evolutions, the community structures that changes in time, provides insight into the underlying behaviour of the network. Recently, a number of researchers have started focusing on identifying critical events that characterize the evolution of communities in dynamic scenarios. In this paper, we present a framework for modeling and detecting community evolution in social networks, where a series of significant events is defined for each community. A community matching algorithm is also proposed to efficiently identify and track similar communities over time. We also define the concept of meta community which is a series of similar communities captured in different timeframes and detected by our matching algorithm. We illustrate the capabilities and potential of our framework by applying it to two real datasets. Furthermore, the events detected by the framework is supplemented by extraction and investigation of the topics discovered for each community. c © 2011 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "c37bfee87d4fd0a011fb6a132c3e779b",
"text": "Increasingly, methods to identify community structure in networks have been proposed which allow groups to overlap. These methods have taken a variety of forms, resulting in a lack of consensus as to what characteristics overlapping communities should have. Furthermore, overlapping community detection algorithms have been justified using intuitive arguments, rather than quantitative observations. This lack of consensus and empirical justification has limited the adoption of methods which identify overlapping communities. In this text, we distil from previous literature a minimal set of axioms which overlapping communities should satisfy. Additionally, we modify a previously published algorithm, Iterative Scan, to ensure that these properties are met. By analyzing the community structure of a large blog network, we present both structural and attribute based verification that overlapping communities naturally and frequently occur.",
"title": ""
},
{
"docid": "77e5724ff3b8984a1296731848396701",
"text": "Temporal networks, i.e., networks in which the interactions among a set of elementary units change over time, can be modelled in terms of timevarying graphs, which are time-ordered sequences of graphs over a set of nodes. In such graphs, the concepts of node adjacency and reachability crucially depend on the exact temporal ordering of the links. Consequently, all the concepts and metrics proposed and used for the characterisation of static complex networks have to be redefined or appropriately extended to time-varying graphs, in order to take into account the effects of time ordering on causality. In this chapter we V. Nicosia ( ) Computer Laboratory, University of Cambridge, 15 JJ Thomson Avenue, Cambridge CB3 0FD, UK e-mail: V.Nicosia@qmul.ac.uk Laboratorio sui Sistemi Complessi, Scuola Superiore di Catania, Via Valdisavoia 9, 95123 Catania, Italy J. Tang C. Mascolo Computer Laboratory, University of Cambridge, 15 JJ Thomson Avenue, Cambridge CB3 0FD, UK M. Musolesi ( ) School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK e-mail: m.musolesi@cs.bham.ac.uk G. Russo Dipartimento di Matematica e Informatica, Universitá di Catania, Via S. Sofia 64, 95123 Catania, Italy V. Latora Laboratorio sui Sistemi Complessi, Scuola Superiore di Catania, Via Valdisavoia 9, 95123 Catania, Italy School of Mathematical Sciences, Queen Mary, University of London, E1 4NS London, UK Dipartimento di Fisica e Astronomia and INFN, Universitá di Catania, Via S. Sofia 64, 95123 Catania, Italy P. Holme and J. Saramäki (eds.), Temporal Networks, Understanding Complex Systems, DOI 10.1007/978-3-642-36461-7 2, © Springer-Verlag Berlin Heidelberg 2013 15 16 V. Nicosia et al. discuss how to represent temporal networks and we review the definitions of walks, paths, connectedness and connected components valid for graphs in which the links fluctuate over time. We then focus on temporal node–node distance, and we discuss how to characterise link persistence and the temporal small-world behaviour in this class of networks. Finally, we discuss the extension of classic centrality measures, including closeness, betweenness and spectral centrality, to the case of time-varying graphs, and we review the work on temporal motifs analysis and the definition of modularity for temporal graphs.",
"title": ""
}
] |
[
{
"docid": "eec1f1cdb7b4adfec71f5917b077661a",
"text": "Digital games have become a remarkable cultural phenomenon in the last ten years. The casual games sector especially has been growing rapidly in the last few years. However, there is no clear view on what is \"casual\" in games cultures and the area has not previously been rigorously studied. In the discussions on casual games, \"casual\" is often taken to refer to the player, the game or the playing style, but other factors such as business models and accessibility are also considered as characteristic of \"casual\" in games. Views on casual vary and confusion over different meanings can lead to paradoxical readings, which is especially the case when \"casual gamer\" is taken to mean both \"someone who plays casual games\" and someone who \"plays casually\". In this article we will analyse the ongoing discussion by providing clarification of the different meanings of casual and a framework for an overall understanding of casual in the level of expanded game experience.",
"title": ""
},
{
"docid": "f51bf455134a2aa80ba74e161b1de1e1",
"text": "Online reviews are often our first port of call when considering products and purchases online. When evaluating a potential purchase, we may have a specific query in mind, e.g. ‘will this baby seat fit in the overhead compartment of a 747?’ or ‘will I like this album if I liked Taylor Swift’s 1989?’. To answer such questions we must either wade through huge volumes of consumer reviews hoping to find one that is relevant, or otherwise pose our question directly to the community via a Q/A system. In this paper we hope to fuse these two paradigms: given a large volume of previously answered queries about products, we hope to automatically learn whether a review of a product is relevant to a given query. We formulate this as a machine learning problem using a mixture-of-experts-type framework—here each review is an ‘expert’ that gets to vote on the response to a particular query; simultaneously we learn a relevance function such that ‘relevant’ reviews are those that vote correctly. At test time this learned relevance function allows us to surface reviews that are relevant to new queries on-demand. We evaluate our system, Moqa, on a novel corpus of 1.4 million questions (and answers) and 13 million reviews. We show quantitatively that it is effective at addressing both binary and open-ended queries, and qualitatively that it surfaces reviews that human evaluators consider to be relevant.",
"title": ""
},
{
"docid": "57dfe225cbe03c87b509c327325b36f5",
"text": "Social communications and collaborations have been ranked as one of key information technology trends. Social networks are social collaboration tools that have shown clear benefits for education in terms of providing informal learning, joining students to communities, exchanging support in studying, and promoting relationships in classes. Therefore, major objectives of this paper are to investigate factors affecting Facebook acceptance as the computer-mediated communication for courses, to explore different levels of the factors’ influence, and to guide instructors for effectively applying social networks in education. The survey approach using questionnaires along with multiple regressions were applied to reveal results. The results show that two perception factor: perceived usefulness, perceived ease of use, and one instructor factor: instructor characteristics are determinants of the course Facebook pages adoption, but student factors which are student characteristics and past behavior are not determinants. In terms of theoretical contribution, this research has extended TAM and TPB models with some aspects of teachers and learners.",
"title": ""
},
{
"docid": "8c0f455b31187a30e0b98d30dcb3adeb",
"text": "Dataset bias remains a significant barrier towards solving real world computer vision tasks. Though deep convolutional networks have proven to be a competitive approach for image classification, a question remains: have these models have solved the dataset bias problem? In general, training or fine-tuning a state-ofthe-art deep model on a new domain requires a significant amount of data, which for many applications is simply not available. Transfer of models directly to new domains without adaptation has historically led to poor recognition performance. In this paper, we pose the following question: is a single image dataset, much larger than previously explored for adaptation, comprehensive enough to learn general deep models that may be effectively applied to new image domains? In other words, are deep CNNs trained on large amounts of labeled data as susceptible to dataset bias as previous methods have been shown to be? We show that a generic supervised deep CNN model trained on a large dataset reduces, but does not remove, dataset bias. Furthermore, we propose several methods for adaptation with deep models that are able to operate with little (one example per category) or no labeled domain specific data. Our experiments show that adaptation of deep models on benchmark visual domain adaptation datasets can provide a significant performance boost.",
"title": ""
},
{
"docid": "1d6e20debb1fc89079e0c5e4861e3ca4",
"text": "BACKGROUND\nThe aims of this study were to identify the independent factors associated with intermittent addiction and addiction to the Internet and to examine the psychiatric symptoms in Korean adolescents when the demographic and Internet-related factors were controlled.\n\n\nMETHODS\nMale and female students (N = 912) in the 7th-12th grades were recruited from 2 junior high schools and 2 academic senior high schools located in Seoul, South Korea. Data were collected from November to December 2004 using the Internet-Related Addiction Scale and the Symptom Checklist-90-Revision. A total of 851 subjects were analyzed after excluding the subjects who provided incomplete data.\n\n\nRESULTS\nApproximately 30% (n = 258) and 4.3% (n = 37) of subjects showed intermittent Internet addiction and Internet addiction, respectively. Multivariate logistic regression analysis showed that junior high school students and students having a longer period of Internet use were significantly associated with intermittent addiction. In addition, male gender, chatting, and longer Internet use per day were significantly associated with Internet addiction. When the demographic and Internet-related factors were controlled, obsessive-compulsive and depressive symptoms were found to be independently associated factors for intermittent addiction and addiction to the Internet, respectively.\n\n\nCONCLUSIONS\nStaff working in junior or senior high schools should pay closer attention to those students who have the risk factors for intermittent addiction and addiction to the Internet. Early preventive intervention programs are needed that consider the individual severity level of Internet addiction.",
"title": ""
},
{
"docid": "d647fc2b5635a3dfcebf7843fef3434c",
"text": "Touch is our primary non-verbal communication channel for conveying intimate emotions and as such essential for our physical and emotional wellbeing. In our digital age, human social interaction is often mediated. However, even though there is increasing evidence that mediated touch affords affective communication, current communication systems (such as videoconferencing) still do not support communication through the sense of touch. As a result, mediated communication does not provide the intense affective experience of co-located communication. The need for ICT mediated or generated touch as an intuitive way of social communication is even further emphasized by the growing interest in the use of touch-enabled agents and robots for healthcare, teaching, and telepresence applications. Here, we review the important role of social touch in our daily life and the available evidence that affective touch can be mediated reliably between humans and between humans and digital agents. We base our observations on evidence from psychology, computer science, sociology, and neuroscience with focus on the first two. Our review shows that mediated affective touch can modulate physiological responses, increase trust and affection, help to establish bonds between humans and avatars or robots, and initiate pro-social behavior. We argue that ICT mediated or generated social touch can (a) intensify the perceived social presence of remote communication partners and (b) enable computer systems to more effectively convey affective information. However, this research field on the crossroads of ICT and psychology is still embryonic and we identify several topics that can help to mature the field in the following areas: establishing an overarching theoretical framework, employing better researchmethodologies, developing basic social touch building blocks, and solving specific ICT challenges.",
"title": ""
},
{
"docid": "5705022b0a08ca99d4419485f3c03eaa",
"text": "In this paper, we propose a wireless sensor network paradigm for real-time forest fire detection. The wireless sensor network can detect and forecast forest fire more promptly than the traditional satellite-based detection approach. This paper mainly describes the data collecting and processing in wireless sensor networks for real-time forest fire detection. A neural network method is applied to in-network data processing. We evaluate the performance of our approach by simulations.",
"title": ""
},
{
"docid": "4aca364133eb0630c3b97e69922d07b7",
"text": "Deep learning offers new tools to improve our understanding of many important scientific problems. Neutrinos are the most abundant particles in existence and are hypothesized to explain the matter-antimatter asymmetry that dominates our universe. Definitive tests of this conjecture require a detailed understanding of neutrino interactions with a variety of nuclei. Many measurements of interest depend on vertex reconstruction — finding the origin of a neutrino interaction using data from the detector, which can be represented as images. Traditionally, this has been accomplished by utilizing methods that identify the tracks coming from the interaction. However, these methods are not ideal for interactions where an abundance of tracks and cascades occlude the vertex region. Manual algorithm engineering to handle these challenges is complicated and error prone. Deep learning extracts rich, semantic features directly from raw data, making it a promising solution to this problem. In this work, deep learning models are presented that classify the vertex location in regions meaningful to the domain scientists improving their ability to explore more complex interactions.",
"title": ""
},
{
"docid": "d302bfb7c2b95def93525050016ac07c",
"text": "Face recognition remains a challenge today as recognition performance is strongly affected by variability such as illumination, expressions and poses. In this work we apply Convolutional Neural Networks (CNNs) on the challenging task of both 2D and 3D face recognition. We constructed two CNN models, namely CNN-1 (two convolutional layers) and CNN-2 (one convolutional layer) for testing on 2D and 3D dataset. A comprehensive parametric study of two CNN models on face recognition is represented in which different combinations of activation function, learning rate and filter size are investigated. We find that CNN-2 has a better accuracy performance on both 2D and 3D face recognition. Our experimental results show that an accuracy of 85.15% was accomplished using CNN-2 on depth images with FRGCv2.0 dataset (4950 images with 557 objectives). An accuracy of 95% was achieved using CNN-2 on 2D raw image with the AT&T dataset (400 images with 40 objectives). The results indicate that the proposed CNN model is capable to handle complex information from facial images in different dimensions. These results provide valuable insights into further application of CNN on 3D face recognition.",
"title": ""
},
{
"docid": "1fee36b3d0e796273eaa33b250930997",
"text": "Developers spend a lot of time searching for the root causes of software failures. For this, they traditionally try to reproduce those failures, but unfortunately many failures are so hard to reproduce in a test environment that developers spend days or weeks as ad-hoc detectives. The shortcomings of many solutions proposed for this problem prevent their use in practice.\n We propose failure sketching, an automated debugging technique that provides developers with an explanation (\"failure sketch\") of the root cause of a failure that occurred in production. A failure sketch only contains program statements that lead to the failure, and it clearly shows the differences between failing and successful runs; these differences guide developers to the root cause. Our approach combines static program analysis with a cooperative and adaptive form of dynamic program analysis.\n We built Gist, a prototype for failure sketching that relies on hardware watchpoints and a new hardware feature for extracting control flow traces (Intel Processor Trace). We show that Gist can build failure sketches with low overhead for failures in systems like Apache, SQLite, and Memcached.",
"title": ""
},
{
"docid": "a6f1480f52d142a013bb88a92e47b0d7",
"text": "An isolated switched high step up boost DC-DC converter is discussed in this paper. The main objective of this paper is to step up low voltage to very high voltage. This paper mainly initiates at boosting a 30V DC into 240V DC. The discussed converter benefits from the continuous input current. Usually, step-up DC-DC converters are suitable for input whose voltage level is very low. The circuital design comprises of four main stages. Firstly, an impedance network which is used to boost the low input voltage. Secondly a switching network which is used to boost the input voltage then an isolation transformer which is used to provide higher boosting ability and finally a voltage multiplier rectifier which is used to rectify the secondary voltage of the transformer. No switching deadtime is required, which increases the reliability of the converter. Comparing with the existing step-up topologies indicates that this new design is hybrid, portable, higher power density and the size of the whole system is also reduced. The principles as well as operations were analysed and experimentally worked out, which provides a higher efficiency. KeywordImpedance Network, Switching Network, Isolation Transformer, Voltage Multiplier Rectifier, MicroController, DC-DC Boost Converter __________________________________________________________________________________________________",
"title": ""
},
{
"docid": "6a4844bf755830d14fb24caff1aa8442",
"text": "We present a stochastic first-order optimization algorithm, named BCSC, that adds a cyclic constraint to stochastic block-coordinate descent. It uses different subsets of the data to update different subsets of the parameters, thus limiting the detrimental effect of outliers in the training set. Empirical tests in benchmark datasets show that our algorithm outperforms state-of-the-art optimization methods in both accuracy as well as convergence speed. The improvements are consistent across different architectures, and can be combined with other training techniques and regularization methods.",
"title": ""
},
{
"docid": "7f16ed65f6fd2bcff084d22f76740ff3",
"text": "The past few years have witnessed a growth in size and computational requirements for training and inference with neural networks. Currently, a common approach to address these requirements is to use a heterogeneous distributed environment with a mixture of hardware devices such as CPUs and GPUs. Importantly, the decision of placing parts of the neural models on devices is often made by human experts based on simple heuristics and intuitions. In this paper, we propose a method which learns to optimize device placement for TensorFlow computational graphs. Key to our method is the use of a sequence-tosequence model to predict which subsets of operations in a TensorFlow graph should run on which of the available devices. The execution time of the predicted placements is then used as the reward signal to optimize the parameters of the sequence-to-sequence model. Our main result is that on Inception-V3 for ImageNet classification, and on RNN LSTM, for language modeling and neural machine translation, our model finds non-trivial device placements that outperform hand-crafted heuristics and traditional algorithmic methods.",
"title": ""
},
{
"docid": "a5c054899abf8aa553da4a576577678e",
"text": "Developmental programming resulting from maternal malnutrition can lead to an increased risk of metabolic disorders such as obesity, insulin resistance, type 2 diabetes and cardiovascular disorders in the offspring in later life. Furthermore, many conditions linked with developmental programming are also known to be associated with the aging process. This review summarizes the available evidence about the molecular mechanisms underlying these effects, with the potential to identify novel areas of therapeutic intervention. This could also lead to the discovery of new treatment options for improved patient outcomes.",
"title": ""
},
{
"docid": "dca8b7f7022a139fc14bddd1af2fea49",
"text": "In this study, we investigated the discrimination power of short-term heart rate variability (HRV) for discriminating normal subjects versus chronic heart failure (CHF) patients. We analyzed 1914.40 h of ECG of 83 patients of which 54 are normal and 29 are suffering from CHF with New York Heart Association (NYHA) classification I, II, and III, extracted by public databases. Following guidelines, we performed time and frequency analysis in order to measure HRV features. To assess the discrimination power of HRV features, we designed a classifier based on the classification and regression tree (CART) method, which is a nonparametric statistical technique, strongly effective on nonnormal medical data mining. The best subset of features for subject classification includes square root of the mean of the sum of the squares of differences between adjacent NN intervals (RMSSD), total power, high-frequencies power, and the ratio between low- and high-frequencies power (LF/HF). The classifier we developed achieved sensitivity and specificity values of 79.3% and 100 %, respectively. Moreover, we demonstrated that it is possible to achieve sensitivity and specificity of 89.7% and 100 %, respectively, by introducing two nonstandard features ΔAVNN and ΔLF/HF, which account, respectively, for variation over the 24 h of the average of consecutive normal intervals (AVNN) and LF/HF. Our results are comparable with other similar studies, but the method we used is particularly valuable because it allows a fully human-understandable description of classification procedures, in terms of intelligible “if ... then ...” rules.",
"title": ""
},
{
"docid": "4b860718c9939e1dbdb2e157e9a208e7",
"text": "The design and construction of a repetitive high-current pulsed accelerator-TPG700 is described in this paper. The accelerator consists of a Tesla transformer with 40 ohm build-in coaxial pulse forming line. The triggered high-pressure switch of TPG700 has the capability of conducting current of 17.5kA in 35ns' duration at 100 pps. The transformer was designed to operate at 1.4MV, when its primary capacitors were charged to approximately 1000V. Under the working state of 100pps, the jitter of breakdown of the switch voltages is lower than 1% on average. To enhance the overall efficiency of the pulser, resonant charging technology based on IGBTs was utilized. As the experimental results indicate, the total efficiency of the pulser, when measured on matched dummy load, is close to 75%. The experimental results indicate that in matching case, the output of 700kV, 17kA for 40Ω resistive load is obtained. Moreover, some experiments such as long lifetime cathode testing and high power microwave (HPM) generation using backward oscillator (BWO) have been conducted on TPG700.",
"title": ""
},
{
"docid": "5eaac9e4945b72c93b1dbe3689c5de9f",
"text": "Ahstract-A novel TRM calibration procedure aimed to improve the quality of on-wafer S-parameter measurement, especially in mm-wave frequency band, has been proposed. This procedure is based on active reverse signal injections to improve the accuracy of the raw thru s-parameter measurement. This calibration method can effectively improve the S-parameter measurement quality at mm-wave frequency and hence improve the modelling accuracy. This new optimized calibration method eliminates the need of utilizing complex and expensive loadpull system or post calibration optimization algorithms, and can be easily implemented in modelling extraction process or further implemented into other LRL/TRL based calibration algorithms. Finally, this proposed method has been tested on a real measurement system over a 16nm FinFET CMOS device to test its validity.",
"title": ""
},
{
"docid": "41b3b48c10753600e36a584003eebdd6",
"text": "This paper deals with reliability problems of common types of generators in hard conditions. It shows possibilities of construction changes that should increase the machine reliability. This contribution is dedicated to the study of brushless alternator for automotive industry. There are described problems with usage of common types of alternators and main benefits and disadvantages of several types of brushless alternators.",
"title": ""
},
{
"docid": "16125310a488e0946075264c11e50720",
"text": "A 90-W peak-power 2.14-GHz improved GaN outphasing amplifier with 50.5% average efficiency for wideband code division multiple access (W-CDMA) signals is presented. Independent control of the branch amplifiers by two in-phase/quadrature modulators enables optimum outphasing and input power leveling, yielding significant improvements in gain, efficiency, and linearity. In deep-power backoff operation, the outphasing angle of the branch amplifiers is kept constant below a certain power level. This results in class-B operation for the very low output power levels, yielding less reactive loading of the output stages, and therefore, improved efficiency in power backoff operation compared to the classical outphasing amplifiers. Based on these principles, the optimum design parameters and input signal conditioning are discussed. The resulting theoretical maximum achievable average efficiency for W-CDMA signals is presented. Experimental results support the foregoing theory and show high efficiency over a large bandwidth, while meeting the linearity specifications using low-cost low-complexity memoryless pre-distortion. These properties make this amplifier concept an interesting candidate for future multiband base-station implementations.",
"title": ""
},
{
"docid": "301373338fe35426f5186f400f63dbd3",
"text": "OBJECTIVE\nThis paper describes state of the art, scientific publications and ongoing research related to the methods of analysis of respiratory sounds.\n\n\nMETHODS AND MATERIAL\nReview of the current medical and technological literature using Pubmed and personal experience.\n\n\nRESULTS\nThe study includes a description of the various techniques that are being used to collect auscultation sounds, a physical description of known pathologic sounds for which automatic detection tools were developed. Modern tools are based on artificial intelligence and on technics such as artificial neural networks, fuzzy systems, and genetic algorithms…\n\n\nCONCLUSION\nThe next step will consist in finding new markers so as to increase the efficiency of decision aid algorithms and tools.",
"title": ""
}
] |
scidocsrr
|
881147bbfc9ba324f0ebecf010dec1e3
|
Characterizing pseudoentropy and simplifying pseudorandom generator constructions
|
[
{
"docid": "7259530c42f4ba91155284ce909d25a6",
"text": "We investigate how information leakage reduces computational entropy of a random variable X. Recall that HILL and metric computational entropy are parameterized by quality (how distinguishable is X from a variable Z that has true entropy) and quantity (how much true entropy is there in Z). We prove an intuitively natural result: conditioning on an event of probability p reduces the quality of metric entropy by a factor of p and the quantity of metric entropy by log2 1/p (note that this means that the reduction in quantity and quality is the same, because the quantity of entropy is measured on logarithmic scale). Our result improves previous bounds of Dziembowski and Pietrzak (FOCS 2008), where the loss in the quantity of entropy was related to its original quality. The use of metric entropy simplifies the analogous the result of Reingold et. al. (FOCS 2008) for HILL entropy. Further, we simplify dealing with information leakage by investigating conditional metric entropy. We show that, conditioned on leakage of λ bits, metric entropy gets reduced by a factor 2 in quality and λ in quantity. Our formulation allow us to formulate a “chain rule” for leakage on computational entropy. We show that conditioning on λ bits of leakage reduces conditional metric entropy by λ bits. This is the same loss as leaking from unconditional metric entropy. This result makes it easy to measure entropy even after several rounds of information leakage.",
"title": ""
}
] |
[
{
"docid": "0788352b51fb48c27ca14110fdaee8a9",
"text": "As a complement to high-layer encryption techniques, physical layer security has been widely recognized as a promising way to enhance wireless security by exploiting the characteristics of wireless channels, including fading, noise, and interference. In order to enhance the received signal power at legitimate receivers and impair the received signal quality at eavesdroppers simultaneously, multiple-antenna techniques have been proposed for physical layer security to improve secrecy performance via exploiting spatial degrees of freedom. This paper provides a comprehensive survey on various multiple-antenna techniques in physical layer security, with an emphasis on transmit beamforming designs for multiple-antenna nodes. Specifically, we provide a detailed investigation on multiple-antenna techniques for guaranteeing secure communications in point-to-point systems, dual-hop relaying systems, multiuser systems, and heterogeneous networks. Finally, future research directions and challenges are identified.",
"title": ""
},
{
"docid": "70f8d5a6d6ff36dd669403d7865bab94",
"text": "Addressing the problem of information overload, automatic multi-document summarization (MDS) has been widely utilized in the various real-world applications. Most of existing approaches adopt term-based representation for documents which limit the performance of MDS systems. In this paper, we proposed a novel unsupervised pattern-enhanced topic model (PETMSum) for the MDS task. PETMSum combining pattern mining techniques with LDA topic modelling could generate discriminative and semantic rich representations for topics and documents so that the most representative, non-redundant, and topically coherent sentences can be selected automatically to form a succinct and informative summary. Extensive experiments are conducted on the data of document understanding conference (DUC) 2006 and 2007. The results prove the effectiveness and efficiency of our proposed approach.",
"title": ""
},
{
"docid": "9d82ce8e6630a9432054ed97752c7ec6",
"text": "Development is the powerful process involving a genome in the transformation from one egg cell to a multicellular organism with many cell types. The dividing cells manage to organize and assign themselves special, differentiated roles in a reliable manner, creating a spatio-temporal pattern and division of labor. This despite the fact that little positional information may be available to them initially to guide this patterning. Inspired by a model of developmental biologist L. Wolpert, we simulate this situation in an evolutionary setting where individuals have to grow into “French flag” patterns. The cells in our model exist in a 2-layer Potts model physical environment. Controlled by continuous genetic regulatory networks, identical for all cells of one individual, the cells can individually differ in parameters including target volume, shape, orientation, and diffusion. Intercellular communication is possible via secretion and sensing of diffusing morphogens. Evolved individuals growing from a single cell can develop the French flag pattern by setting up and maintaining asymmetric morphogen gradients – a behavior predicted by several theoretical models.",
"title": ""
},
{
"docid": "0186c053103d06a8ddd054c3c05c021b",
"text": "The brain-gut axis is a bidirectional communication system between the central nervous system and the gastrointestinal tract. Serotonin functions as a key neurotransmitter at both terminals of this network. Accumulating evidence points to a critical role for the gut microbiome in regulating normal functioning of this axis. In particular, it is becoming clear that the microbial influence on tryptophan metabolism and the serotonergic system may be an important node in such regulation. There is also substantial overlap between behaviours influenced by the gut microbiota and those which rely on intact serotonergic neurotransmission. The developing serotonergic system may be vulnerable to differential microbial colonisation patterns prior to the emergence of a stable adult-like gut microbiota. At the other extreme of life, the decreased diversity and stability of the gut microbiota may dictate serotonin-related health problems in the elderly. The mechanisms underpinning this crosstalk require further elaboration but may be related to the ability of the gut microbiota to control host tryptophan metabolism along the kynurenine pathway, thereby simultaneously reducing the fraction available for serotonin synthesis and increasing the production of neuroactive metabolites. The enzymes of this pathway are immune and stress-responsive, both systems which buttress the brain-gut axis. In addition, there are neural processes in the gastrointestinal tract which can be influenced by local alterations in serotonin concentrations with subsequent relay of signals along the scaffolding of the brain-gut axis to influence CNS neurotransmission. Therapeutic targeting of the gut microbiota might be a viable treatment strategy for serotonin-related brain-gut axis disorders.",
"title": ""
},
{
"docid": "4cfe3df75371f28485fe74c099fd75e7",
"text": "This paper focuses mainly on the problem of Chinese medical question answer matching, which is arguably more challenging than open-domain question answer matching in English due to the combination of its domain-restricted nature and the language-specific features of Chinese. We present an end-to-end character-level multi-scale convolutional neural framework in which character embeddings instead of word embeddings are used to avoid Chinese word segmentation in text preprocessing, and multi-scale convolutional neural networks (CNNs) are then introduced to extract contextual information from either question or answer sentences over different scales. The proposed framework can be trained with minimal human supervision and does not require any handcrafted features, rule-based patterns, or external resources. To validate our framework, we create a new text corpus, named cMedQA, by harvesting questions and answers from an online Chinese health and wellness community. The experimental results on the cMedQA dataset show that our framework significantly outperforms several strong baselines, and achieves an improvement of top-1 accuracy by up to 19%.",
"title": ""
},
{
"docid": "863c806d29c15dd9b9160eae25316dfc",
"text": "This paper presents new structural statistical matrices which are gray level size zone matrix (SZM) texture descriptor variants. The SZM is based on the cooccurrences of size/intensity of each flat zone (connected pixels with the same gray level). The first improvement increases the information processed by merging multiple gray-level quantizations and reduces the required parameter numbers. New improved descriptors were especially designed for supervised cell texture classification. They are illustrated thanks to two different databases built from quantitative cell biology. The second alternative characterizes the DNA organization during the mitosis, according to zone intensities radial distribution. The third variant is a matrix structure generalization for the fibrous texture analysis, by changing the intensity/size pair into the length/orientation pair of each region.",
"title": ""
},
{
"docid": "876ee0ecb1b6196a19fb2ab85b86f19d",
"text": "This paper presents new experimental data and an improved mechanistic model for the Gas-Liquid Cylindrical Cyclone (GLCC) separator. The data were acquired utilizing a 3” ID laboratory-scale GLCC, and are presented along with a limited number of field data. The data include measurements of several parameters of the flow behavior and the operational envelope of the GLCC. The operational envelope defines the conditions for which there will be no liquid carry-over or gas carry-under. The developed model enables the prediction of the hydrodynamic flow behavior in the GLCC, including the operational envelope, equilibrium liquid level, vortex shape, velocity and holdup distributions and pressure drop across the GLCC. The predictions of the model are compared with the experimental data. These provide the state-of-the-art for the design of GLCC’s for the industry. Introduction The gas-liquid separation technology currently used by the petroleum industry is mostly based on the vessel-type separator which is large, heavy and expensive to purchase and operate. This technology has not been substantially improved over the last several decades. In recent years the industry has shown interest in the development and application of alternatives to the vessel-type separator. One such alternative is the use of compact or in-line separators, such as the Gas-Liquid Cylindrical Cyclone (GLCC) separator. As can be seen in Fig. 1, the GLCC is an emerging class of vertical compact separators, as compared to the very mature technology of the vessel-type separator. D ev el op m en t GLCC’s FWKO Cyclones Emerging Gas Cyclones Conventional Horizontal and Vertical Separators Growth Finger Storage Slug Catcher Vessel Type Slug Catcher",
"title": ""
},
{
"docid": "80ac8b65b7c125fa98537be327f5f200",
"text": "Occupational science is an emerging basic science which supports the practice of occupational therapy. Its roots in the rich traditions of occupational therapy are explored and its current configuration is introduced. Specifications which the science needs to meet as it is further developed and refined are presented. Compatible disciplines and research approaches are identified. example's of basic science research questions and their potential contributions to occupational therapy practice are suggested.",
"title": ""
},
{
"docid": "806a83d17d242a7fd5272862158db344",
"text": "Solar power has become an attractive alternative of electricity energy. Solar cells that form the basis of a solar power system are mainly based on multicrystalline silicon. A set of solar cells are assembled and interconnected into a large solar module to offer a large amount of electricity power for commercial applications. Many defects in a solar module cannot be visually observed with the conventional CCD imaging system. This paper aims at defect inspection of solar modules in electroluminescence (EL) images. The solar module charged with electrical current will emit infrared light whose intensity will be darker for intrinsic crystal grain boundaries and extrinsic defects including micro-cracks, breaks and finger interruptions. The EL image can distinctly highlight the invisible defects but also create a random inhomogeneous background, which makes the inspection task extremely difficult. The proposed method is based on independent component analysis (ICA), and involves a learning and a detection stage. The large solar module image is first divided into small solar cell subimages. In the training stage, a set of defect-free solar cell subimages are used to find a set of independent basis images using ICA. In the inspection stage, each solar cell subimage under inspection is reconstructed as a linear combination of the learned basis images. The coefficients of the linear combination are used as the feature vector for classification. Also, the reconstruction error between the test image and its reconstructed image from the ICA basis images is also evaluated for detecting the presence of defects. Experimental results have shown that the image reconstruction with basis images distinctly outperforms the ICA feature extraction approach. It can achieve a mean recognition rate of 93.4% for a set of 80 test samples.",
"title": ""
},
{
"docid": "385b573c33a9e4f81afd966c9277c0c1",
"text": "According to American College of Rheumatology fibromyalgia syndrome (FMS) is a common health problem characterized by widespread pain and tenderness. The pain and tenderness, although chronic, present a tendency to fluctuate both in intensity and location around the body. Patients with FMS experience fatigue and often have sleep disorders. It is estimated that FMS affects two to four percent of the general population. It is most common in women, though it can also occur in men. FMS most often first occur in the middle adulthood, but it can start as early as in the teen years or in the old age. The causes of FMS are unclear. Various infectious agents have recently been linked with the development of FMS. Some genes are potentially linked with an increased risk of developing FMS and some other health problems, which are common comorbidities to FMS. It is the genes that determine individual sensitivity and reaction to pain, quality of the antinociceptive system and complex biochemistry of pain sensation. Diagnosis and therapy may be complex and require cooperation of many specialists. Rheumatologists often make the diagnosis and differentiate FMS with other disorders from the rheumatoid group. FMS patients may also require help from the Psychiatric Clinic (Out-Patients Clinic) due to accompanying mental problems. As the pharmacological treatment options are limited and only complex therapy gives relatively good results, the treatment plan should include elements of physical therapy.",
"title": ""
},
{
"docid": "d2cf6c5241e2169c59cfbb39bf3d09bb",
"text": "As remote exploits further dwindle and perimeter defenses become the standard, remote client-side attacks are becoming the standard vector for attackers. Modern operating systems have quelled the explosion of client-side vulnerabilities using mitigation techniques such as data execution prevention (DEP) and address space layout randomization (ASLR). This work illustrates two novel techniques to bypass these mitigations. The two techniques leverage the attack surface exposed by the script interpreters commonly accessible within the browser. The first technique, pointer inference, is used to find the memory address of a string of shellcode within the Adobe Flash Player's ActionScript interpreter despite ASLR. The second technique, JIT spraying, is used to write shellcode to executable memory, bypassing DEP protections, by leveraging predictable behaviors of the ActionScript JIT compiler. Previous attacks are examined and future research directions are discussed.",
"title": ""
},
{
"docid": "ff4e26c7770898dbd753e33c1ced1a1b",
"text": "Large mammals, including humans, save much of the energy needed for running by means of elastic structures in their legs and feet1,2. Kinetic and potential energy removed from the body in the first half of the stance phase is stored briefly as elastic strain energy and then returned in the second half by elastic recoil. Thus the animal runs in an analogous fashion to a rubber ball bouncing along. Among the elastic structures involved, the tendons of distal leg muscles have been shown to be important2,3. Here we show that the elastic properties of the arch of the human foot are also important.",
"title": ""
},
{
"docid": "d5bc87dc8c93d2096f048437315e6634",
"text": "The diversity of an ensemble can be calculated in a variety of ways. Here a diversity metric and a means for altering the diversity of an ensemble, called “thinning”, are introduced. We experiment with thinning algorithms evaluated on ensembles created by several techniques on 22 publicly available datasets. When compared to other methods, our percentage correct diversity measure algorithm shows a greater correlation between the increase in voted ensemble accuracy and the diversity value. Also, the analysis of different ensemble creation methods indicates each has varying levels of diversity. Finally, the methods proposed for thinning again show that ensembles can be made smaller without loss in accuracy. Information Fusion Journal",
"title": ""
},
{
"docid": "e0f7c82754694084c6d05a2d37be3048",
"text": "Introducing variability while maintaining coherence is a core task in learning to generate utterances in conversation. Standard neural encoder-decoder models and their extensions using conditional variational autoencoder often result in either trivial or digressive responses. To overcome this, we explore a novel approach that injects variability into neural encoder-decoder via the use of external memory as a mixture model, namely Variational Memory Encoder-Decoder (VMED). By associating each memory read with a mode in the latent mixture distribution at each timestep, our model can capture the variability observed in sequential data such as natural conversations. We empirically compare the proposed model against other recent approaches on various conversational datasets. The results show that VMED consistently achieves significant improvement over others in both metricbased and qualitative evaluations.",
"title": ""
},
{
"docid": "0bce954374d27d4679eb7562350674fc",
"text": "Humanoid robotics is attracting the interest of many research groups world-wide. In particular, developing humanoids requires the implementation of manipulation capabilities, which is still a most complex problem in robotics. This paper presents an overview of current activities in the development of humanoid robots, with special focus on manipulation. Then we discuss our current approach to the design and development of anthropomorphic sensorized hand and of anthropomorphic control and sensory-motor coordination schemes. Current achievements in the development of a robotic human hand prosthesis are described, together with preliminary experimental results, as well as in the implementation of biologically-inspired schemes for control and sensory-motor co-ordination in manipulation, derived from models of well-identified human brain areas.",
"title": ""
},
{
"docid": "2fb5f1e17e888049bd0f506f3a37f377",
"text": "While the Semantic Web has evolved to support the meaningful exchange of heterogeneous data through shared and controlled conceptualisations, Web 2.0 has demonstrated that large-scale community tagging sites can enrich the semantic web with readily accessible and valuable knowledge. In this paper, we investigate the integration of a movies folksonomy with a semantic knowledge base about usermovie rentals. The folksonomy is used to enrich the knowledge base with descriptions and categorisations of movie titles, and user interests and opinions. Using tags harvested from the Internet Movie Database, and movie rating data gathered by Netflix, we perform experiments to investigate the question that folksonomy-generated movie tag-clouds can be used to construct better user profiles that reflect a user’s level of interest in different kinds of movies, and therefore, provide a basis for prediction of their rating for a previously unseen movie.",
"title": ""
},
{
"docid": "6dbc238948d555578039ed268f3d4f51",
"text": "Chidi Okafor, David M. Ward, Michele H. Mokrzycki, Robert Weinstein, Pamela Clark, and Rasheed A. Balogun* Department of Medicine, Division of Nephrology, University of Virginia Health System, Charlottesville, Virginia Department of Medicine, University of California, San Diego, California Department of Medicine, Albert Einstein College of Medicine, Bronx, New York Departments of Medicine and Pathology, University of Massachusetts, Amherst, Massachusetts Department of Pathology, University of Virginia, Charlottesville, Virginia",
"title": ""
},
{
"docid": "5157063545b7ec7193126951c3bdb850",
"text": "This paper presents an integrated system for navigation parameter estimation using sequential aerial images, where navigation parameters represent the position and velocity information of an aircraft for autonomous navigation. The proposed integrated system is composed of two parts: relative position estimation and absolute position estimation. Relative position estimation recursively computes the current position of an aircraft by accumulating relative displacement estimates extracted from two successive aerial images. Simple accumulation of parameter values decreases the reliability of the extracted parameter estimates as an aircraft goes on navigating, resulting in a large position error. Therefore, absolute position estimation is required to compensate for the position error generated in relative position estimation. Absolute position estimation algorithms by image matching and digital elevation model (DEM) matching are presented. In image matching, a robust-oriented Hausdorff measure (ROHM) is employed, whereas in DEM matching the algorithm using multiple image pairs is used. Experiments with four real aerial image sequences show the effectiveness of the proposed integrated position estimation algorithm.",
"title": ""
},
{
"docid": "f6e8f2f990ca60a5b659c1c7a19d0638",
"text": "OBJECTIVE\nTo develop an understanding of the stability of mental health during imprisonment through review of existing research evidence relating physical prison environment to mental state changes in prisoners.\n\n\nMETHOD\nA systematic literature search was conducted looking at changes in mental state and how this related to various aspects of imprisonment and the prison environment.\n\n\nRESULTS\nFifteen longitudinal studies were found, and from these, three broad themes were delineated: being imprisoned and aspects of the prison regime; stage of imprisonment and duration of sentence; and social density. Reception into prison results in higher levels of psychiatric symptoms that seem to improve over time; otherwise, duration of imprisonment appears to have no significant impact on mental health. Regardless of social density, larger prisons are associated with poorer mental state, as are extremes of social density.\n\n\nCONCLUSION\nThere are large gaps in the literature relating prison environments to changes in mental state; in particular, high-quality longitudinal studies are needed. Existing research suggests that although entry to prison may be associated with deterioration in mental state, it tends to improve with time. Furthermore, overcrowding, ever more likely as prison populations rise, is likely to place a particular burden on mental health services.",
"title": ""
}
] |
scidocsrr
|
1d2655ff7197191d88dcd901e081171c
|
Security Assessment of Code Obfuscation Based on Dynamic Monitoring in Android Things
|
[
{
"docid": "529e132a37f9fb37ddf04984236f4b36",
"text": "The first steps in analyzing defensive malware are understanding what obfuscations are present in real-world malware binaries, how these obfuscations hinder analysis, and how they can be overcome. While some obfuscations have been reported independently, this survey consolidates the discussion while adding substantial depth and breadth to it. This survey also quantifies the relative prevalence of these obfuscations by using the Dyninst binary analysis and instrumentation tool that was recently extended for defensive malware analysis. The goal of this survey is to encourage analysts to focus on resolving the obfuscations that are most prevalent in real-world malware.",
"title": ""
},
{
"docid": "2ab6b91f6e5e01b3bb8c8e5c0fbdcf24",
"text": "Application markets such as Apple’s App Store and Google’s Play Store have played an important role in the popularity of smartphones and mobile devices. However, keeping malware out of application markets is an ongoing challenge. While recent work has developed various techniques to determine what applications do, no work has provided a technical approach to answer, what do users expect? In this paper, we present the first step in addressing this challenge. Specifically, we focus on permissions for a given application and examine whether the application description provides any indication for why the application needs a permission. We present WHYPER, a framework using Natural Language Processing (NLP) techniques to identify sentences that describe the need for a given permission in an application description. WHYPER achieves an average precision of 82.8%, and an average recall of 81.5% for three permissions (address book, calendar, and record audio) that protect frequentlyused security and privacy sensitive resources. These results demonstrate great promise in using NLP techniques to bridge the semantic gap between user expectations and application functionality, further aiding the risk assessment of mobile applications.",
"title": ""
}
] |
[
{
"docid": "f9b6662dc19c47892bb7b95c5b7dc181",
"text": "The ability to update firmware is a feature that is found in nearly all modern embedded systems. We demonstrate how this feature can be exploited to allow attackers to inject malicious firmware modifications into vulnerable embedded devices. We discuss techniques for exploiting such vulnerable functionality and the implementation of a proof of concept printer malware capable of network reconnaissance, data exfiltration and propagation to general purpose computers and other embedded device types. We present a case study of the HP-RFU (Remote Firmware Update) LaserJet printer firmware modification vulnerability, which allows arbitrary injection of malware into the printer’s firmware via standard printed documents. We show vulnerable population data gathered by continuously tracking all publicly accessible printers discovered through an exhaustive scan of IPv4 space. To show that firmware update signing is not the panacea of embedded defense, we present an analysis of known vulnerabilities found in third-party libraries in 373 LaserJet firmware images. Prior research has shown that the design flaws and vulnerabilities presented in this paper are found in other modern embedded systems. Thus, the exploitation techniques presented in this paper can be generalized to compromise other embedded systems. Keywords-Embedded system exploitation; Firmware modification attack; Embedded system rootkit; HP-RFU vulnerability.",
"title": ""
},
{
"docid": "149595fcd31fd2ddbf7c6d48ca6339dc",
"text": "What factors underlie the adoption dynamics of ecommerce technologies among users in developing countries? Even though the internet promised to be the great equalizer, the nuanced variety of conditions and contingencies that shape user adoption of ecommerce technologies has received little scrutiny. Building on previous research on technology adoption, the paper proposes a global information technology (IT) adoption model. The model includes antecedents of performance expectancy, social influence, and technology opportunism and investigates the crucial influence of facilitating conditions. The proposed model is tested using data from 172 technology users from 37 countries, collected over a 1-year period. The findings suggest that in developing countries, facilitating conditions play a critical moderating role in understanding actual ecommerce adoption, especially when in tandem with technological opportunism. Altogether, the paper offers a preliminary scrutiny of the mechanics of ecommerce adoption in developing countries.",
"title": ""
},
{
"docid": "7eec9c40d8137670a88992d40ef52101",
"text": "Nowadays, most nurses, pre- and post-qualification, will be required to undertake a literature review at some point, either as part of a course of study, as a key step in the research process, or as part of clinical practice development or policy. For student nurses and novice researchers it is often seen as a difficult undertaking. It demands a complex range of skills, such as learning how to define topics for exploration, acquiring skills of literature searching and retrieval, developing the ability to analyse and synthesize data as well as becoming adept at writing and reporting, often within a limited time scale. The purpose of this article is to present a step-by-step guide to facilitate understanding by presenting the critical elements of the literature review process. While reference is made to different types of literature reviews, the focus is on the traditional or narrative review that is undertaken, usually either as an academic assignment or part of the research process.",
"title": ""
},
{
"docid": "628c8b906e3db854ea92c021bb274a61",
"text": "Taxi demand prediction is an important building block to enabling intelligent transportation systems in a smart city. An accurate prediction model can help the city pre-allocate resources to meet travel demand and to reduce empty taxis on streets which waste energy and worsen the traffic congestion. With the increasing popularity of taxi requesting services such as Uber and Didi Chuxing (in China), we are able to collect large-scale taxi demand data continuously. How to utilize such big data to improve the demand prediction is an interesting and critical real-world problem. Traditional demand prediction methods mostly rely on time series forecasting techniques, which fail to model the complex non-linear spatial and temporal relations. Recent advances in deep learning have shown superior performance on traditionally challenging tasks such as image classification by learning the complex features and correlations from largescale data. This breakthrough has inspired researchers to explore deep learning techniques on traffic prediction problems. However, existing methods on traffic prediction have only considered spatial relation (e.g., using CNN) or temporal relation (e.g., using LSTM) independently. We propose a Deep Multi-View Spatial-Temporal Network (DMVST-Net) framework to model both spatial and temporal relations. Specifically, our proposed model consists of three views: temporal view (modeling correlations between future demand values with near time points via LSTM), spatial view (modeling local spatial correlation via local CNN), and semantic view (modeling correlations among regions sharing similar temporal patterns). Experiments on large-scale real taxi demand data demonstrate effectiveness of our approach over state-ofthe-art methods.",
"title": ""
},
{
"docid": "a3d95604c143f1cd511fd62fe62bb4f4",
"text": "We propose a new method for unconstrained optimization of a s mooth and strongly convex function, which attains the optimal rate of convergence of N esterov’s accelerated gradient descent. The new algorithm has a simple geometric interpret ation, loosely inspired by the ellipsoid method. We provide some numerical evidence that t he new method can be superior to Nesterov’s accelerated gradient descent.",
"title": ""
},
{
"docid": "6f0f6bf051ff36907b3184501cecbf19",
"text": "American divorce rates rose from the 1950s to the 1970s, peaked around 1980, and have fallen ever since. The mean age at marriage also substantially increased after 1970. Using data from the Survey of Income and Program Participation, 1979 National Longitudinal Survey of Youth, and National Survey of Family Growth, I explore the extent to which the rise in age at marriage can explain the rapid decrease in divorce rates for cohorts marrying after 1980. Three different empirical approaches all suggest that the increase in women’s age at marriage was the main proximate cause of the fall in divorce. ∗Email: drotz@mathematica-mpr.com. I would like to thank Roland Fryer, Claudia Goldin, and Larry Katz for continued guidance and support on this project, as well as Timothy Bond, Richard Freeman, Stephanie Hurder, Jeff Liebman, Claudia Olivetti, Amanda Pallais, Laszlo Sandor, Emily Glassberg Sands, Alessandra Voena, Justin Wolfers, and seminar participants at Case Western Reserve University, Harvard University, Mathematica Policy Research, UCLA, University of Arizona, University of Illinois-Chicago, University of Iowa, University of Texas-Austin, and the US Census Bureau for helpful comments and discussions. I am also grateful to Larry Katz and Phillip Levine for providing data on oral contraceptive pill access and abortion rates respectively. All remaining errors are my own. This research has been supported by the NSF-IGERT program, \"Multidisciplinary Program in Inequality and Social Policy\" at Harvard University (Grant No. 0333403). The views expressed herein are those of the author and not necessarily those of Mathematica Policy Research.",
"title": ""
},
{
"docid": "5dec9381369e61c30112bd87a044cb2f",
"text": "A limiting factor for the application of IDA methods in many domains is the incompleteness of data repositories. Many records have fields that are not filled in, especially, when data entry is manual. In addition, a significant fraction of the entries can be erroneous and there may be no alternative but to discard these records. But every cell in a database is not an independent datum. Statistical relationships will constrain and, often determine, missing values. Data imputation, the filling in of missing values for partially missing data, can thus be an invaluable first step in many IDA projects. New imputation methods that can handle the large-scale problems and large-scale sparsity of industrial databases are needed. To illustrate the incomplete database problem, we analyze one database with instrumentation maintenance and test records for an industrial process. Despite regulatory requirements for process data collection, this database is less than 50% complete. Next, we discuss possible solutions to the missing data problem. Several approaches to imputation are noted and classified into two categories: data-driven and model-based. We then describe two machine-learning-based approaches that we have worked with. These build upon well-known algorithms: AutoClass and C4.5. Several experiments are designed, all using the maintenance database as a common test-bed but with various data splits and algorithmic variations. Results are generally positive with up to 80% accuracies of imputation. We conclude the paper by outlining some considerations in selecting imputation methods, and by discussing applications of data imputation for intelligent data analysis.",
"title": ""
},
{
"docid": "93d498adaee9070ffd608c5c1fe8e8c9",
"text": "INTRODUCTION\nFluorescence anisotropy (FA) is one of the major established methods accepted by industry and regulatory agencies for understanding the mechanisms of drug action and selecting drug candidates utilizing a high-throughput format.\n\n\nAREAS COVERED\nThis review covers the basics of FA and complementary methods, such as fluorescence lifetime anisotropy and their roles in the drug discovery process. The authors highlight the factors affecting FA readouts, fluorophore selection and instrumentation. Furthermore, the authors describe the recent development of a successful, commercially valuable FA assay for long QT syndrome drug toxicity to illustrate the role that FA can play in the early stages of drug discovery.\n\n\nEXPERT OPINION\nDespite the success in drug discovery, the FA-based technique experiences competitive pressure from other homogeneous assays. That being said, FA is an established yet rapidly developing technique, recognized by academic institutions, the pharmaceutical industry and regulatory agencies across the globe. The technical problems encountered in working with small molecules in homogeneous assays are largely solved, and new challenges come from more complex biological molecules and nanoparticles. With that, FA will remain one of the major work-horse techniques leading to precision (personalized) medicine.",
"title": ""
},
{
"docid": "5a69b2301b95976ee29138092fc3bb1a",
"text": "We present a new open source, extensible and flexible software platform for Bayesian evolutionary analysis called BEAST 2. This software platform is a re-design of the popular BEAST 1 platform to correct structural deficiencies that became evident as the BEAST 1 software evolved. Key among those deficiencies was the lack of post-deployment extensibility. BEAST 2 now has a fully developed package management system that allows third party developers to write additional functionality that can be directly installed to the BEAST 2 analysis platform via a package manager without requiring a new software release of the platform. This package architecture is showcased with a number of recently published new models encompassing birth-death-sampling tree priors, phylodynamics and model averaging for substitution models and site partitioning. A second major improvement is the ability to read/write the entire state of the MCMC chain to/from disk allowing it to be easily shared between multiple instances of the BEAST software. This facilitates checkpointing and better support for multi-processor and high-end computing extensions. Finally, the functionality in new packages can be easily added to the user interface (BEAUti 2) by a simple XML template-based mechanism because BEAST 2 has been re-designed to provide greater integration between the analysis engine and the user interface so that, for example BEAST and BEAUti use exactly the same XML file format.",
"title": ""
},
{
"docid": "f4b5577175cc87aab052a581081811f0",
"text": "This study intends to report a review of the literature on the evolution of the systems information success model, specifically the DeLone & McLean model (1992) during the last twenty-five years. It is also intended to refer the main critics to the model by the various researchers who contributed to its updating, making it one of the most used until today.",
"title": ""
},
{
"docid": "82c37d40a58749aaf75cff5b90eed966",
"text": "The input-output mapping defined by Eq. 1 of the main manuscript is differentiable with respect to both input functions, o(x), c(x), and as such lends itself to end-to-end training with back-propagation. Given a gradient signal δ (·)= ∂L ∂m(·) that dictates how the output layer activations should change to decrease the loss L, we obtain the update equations for c(·) and o(·)=(ox(·),oy(·)) through the following chain rule:",
"title": ""
},
{
"docid": "1f18623625304f7c47ca144c8acf4bc9",
"text": "Deep neural networks (DNNs) are known to be vulnerable to adversarial perturbations, which imposes a serious threat to DNN-based decision systems. In this paper, we propose to apply the lossy Saak transform to adversarially perturbed images as a preprocessing tool to defend against adversarial attacks. Saak transform is a recently-proposed state-of-the-art for computing the spatial-spectral representations of input images. Empirically, we observe that outputs of the Saak transform are very discriminative in differentiating adversarial examples from clean ones. Therefore, we propose a Saak transform based preprocessing method with three steps: 1) transforming an input image to a joint spatial-spectral representation via the forward Saak transform, 2) apply filtering to its high-frequency components, and, 3) reconstructing the image via the inverse Saak transform. The processed image is found to be robust against adversarial perturbations. We conduct extensive experiments to investigate various settings of the Saak transform and filtering functions. Without harming the decision performance on clean images, our method outperforms state-of-the-art adversarial defense methods by a substantial margin on both the CIFAR10 and ImageNet datasets. Importantly, our results suggest that adversarial perturbations can be effectively and efficiently defended using state-of-the-art frequency analysis.",
"title": ""
},
{
"docid": "87f5ed217015a5b9590290fe80278527",
"text": "Probabilistic topic models are widely used in different contexts to uncover the hidden structure in large text corpora. One of the main (and perhaps strong) assumption of these models is that generative process follows a bag-of-words assumption, i.e. each token is independent from the previous one. We extend the popular Latent Dirichlet Allocation model by exploiting three different conditional Markovian assumptions: (i) the token generation depends on the current topic and on the previous token; (ii) the topic associated with each observation depends on topic associated with the previous one; (iii) the token generation depends on the current and previous topic. For each of these modeling assumptions we present a Gibbs Sampling procedure for parameter estimation. Experimental evaluation over real-word data shows the performance advantages, in terms of recall and precision, of the sequence-modeling approaches.",
"title": ""
},
{
"docid": "328db3cbbf53bd26ea8b1cb8d1c197be",
"text": "BACKGROUND\nNarcolepsy with cataplexy is associated with a loss of orexin/hypocretin. It is speculated that an autoimmune process kills the orexin-producing neurons, but these cells may survive yet fail to produce orexin.\n\n\nOBJECTIVE\nTo examine whether other markers of the orexin neurons are lost in narcolepsy with cataplexy.\n\n\nMETHODS\nWe used immunohistochemistry and in situ hybridization to examine the expression of orexin, neuronal activity-regulated pentraxin (NARP), and prodynorphin in hypothalami from five control and two narcoleptic individuals.\n\n\nRESULTS\nIn the control hypothalami, at least 80% of the orexin-producing neurons also contained prodynorphin mRNA and NARP. In the patients with narcolepsy, the number of cells producing these markers was reduced to about 5 to 10% of normal.\n\n\nCONCLUSIONS\nNarcolepsy with cataplexy is likely caused by a loss of the orexin-producing neurons. In addition, loss of dynorphin and neuronal activity-regulated pentraxin may contribute to the symptoms of narcolepsy.",
"title": ""
},
{
"docid": "673e1ec63a0e84cf3fbf450928d89905",
"text": "This study proposed an IoT (Internet of Things) system for the monitoring and control of the aquaculture platform. The proposed system is network surveillance combined with mobile devices and a remote platform to collect real-time farm environmental information. The real-time data is captured and displayed via ZigBee wireless transmission signal transmitter to remote computer terminals. This study permits real-time observation and control of aquaculture platform with dissolved oxygen sensors, temperature sensing elements using A/D and microcontrollers signal conversion. The proposed system will use municipal electricity coupled with a battery power source to provide power with battery intervention if municipal power is interrupted. This study is to make the best fusion value of multi-odometer measurement data for optimization via the maximum likelihood estimation (MLE).Finally, this paper have good efficient and precise computing in the experimental results.",
"title": ""
},
{
"docid": "6e4f0a770fe2a34f99957f252110b6bd",
"text": "Universal Dependencies (UD) provides a cross-linguistically uniform syntactic representation, with the aim of advancing multilingual applications of parsing and natural language understanding. Reddy et al. (2016) recently developed a semantic interface for (English) Stanford Dependencies, based on the lambda calculus. In this work, we introduce UDEPLAMBDA, a similar semantic interface for UD, which allows mapping natural language to logical forms in an almost language-independent framework. We evaluate our approach on semantic parsing for the task of question answering against Freebase. To facilitate multilingual evaluation, we provide German and Spanish translations of the WebQuestions and GraphQuestions datasets. Results show that UDEPLAMBDA outperforms strong baselines across languages and datasets. For English, it achieves the strongest result to date on GraphQuestions, with competitive results on WebQuestions.",
"title": ""
},
{
"docid": "c26e9f486621e37d66bf0925d8ff2a3e",
"text": "We report the first two Malaysian children with partial deletion 9p syndrome, a well delineated but rare clinical entity. Both patients had trigonocephaly, arching eyebrows, anteverted nares, long philtrum, abnormal ear lobules, congenital heart lesions and digital anomalies. In addition, the first patient had underdeveloped female genitalia and anterior anus. The second patient had hypocalcaemia and high arched palate and was initially diagnosed with DiGeorge syndrome. Chromosomal analysis revealed a partial deletion at the short arm of chromosome 9. Karyotyping should be performed in patients with craniostenosis and multiple abnormalities as an early syndromic diagnosis confers prognostic, counselling and management implications.",
"title": ""
},
{
"docid": "7074d77d242b4d1ecbebc038c04698b8",
"text": "We discuss our tools and techniques to monitor and inject packets in Bluetooth Low Energy. Also known as BTLE or Bluetooth Smart, it is found in recent high-end smartphones, sports devices, sensors, and will soon appear in many medical devices. We show that we can effectively render useless the encryption of any Bluetooth Low Energy link.",
"title": ""
},
{
"docid": "83a4a89d3819009d61123a146b38d0e9",
"text": "OBJECTIVE\nBehçet's disease (BD) is a chronic, relapsing, inflammatory vascular disease with no pathognomonic test. Low sensitivity of the currently applied International Study Group (ISG) clinical diagnostic criteria led to their reassessment.\n\n\nMETHODS\nAn International Team for the Revision of the International Criteria for BD (from 27 countries) submitted data from 2556 clinically diagnosed BD patients and 1163 controls with BD-mimicking diseases or presenting at least one major BD sign. These were randomly divided into training and validation sets. Logistic regression, 'leave-one-country-out' cross-validation and clinical judgement were employed to develop new International Criteria for BD (ICBD) with the training data. Existing and new criteria were tested for their performance in the validation set.\n\n\nRESULTS\nFor the ICBD, ocular lesions, oral aphthosis and genital aphthosis are each assigned 2 points, while skin lesions, central nervous system involvement and vascular manifestations 1 point each. The pathergy test, when used, was assigned 1 point. A patient scoring ≥4 points is classified as having BD. In the training set, 93.9% sensitivity and 92.1% specificity were assessed compared with 81.2% sensitivity and 95.9% specificity for the ISG criteria. In the validation set, ICBD demonstrated an unbiased estimate of sensitivity of 94.8% (95% CI: 93.4-95.9%), considerably higher than that of the ISG criteria (85.0%). Specificity (90.5%, 95% CI: 87.9-92.8%) was lower than that of the ISG-criteria (96.0%), yet still reasonably high. For countries with at least 90%-of-cases and controls having a pathergy test, adding 1 point for pathergy test increased the estimate of sensitivity from 95.5% to 98.5%, while barely reducing specificity from 92.1% to 91.6%.\n\n\nCONCLUSION\nThe new proposed criteria derived from multinational data exhibits much improved sensitivity over the ISG criteria while maintaining reasonable specificity. It is proposed that the ICBD criteria to be adopted both as a guide for diagnosis and classification of BD.",
"title": ""
},
{
"docid": "c40f1282c12a9acee876d127dffbd733",
"text": "Online markets pose a difficulty for evaluating products, particularly experience goods, such as used cars, that cannot be easily described online. This exacerbates product uncertainty, the buyer’s difficulty in evaluating product characteristics, and predicting how a product will perform in the future. However, the IS literature has focused on seller uncertainty and ignored product uncertainty. To address this void, this study conceptualizes product uncertainty and examines its effects and antecedents in online markets for used cars (eBay Motors).",
"title": ""
}
] |
scidocsrr
|
1af5ed1db3078377a7ff709f07805425
|
Automated Correction for Syntax Errors in Programming Assignments using Recurrent Neural Networks
|
[
{
"docid": "598fd1fc1d1d6cba7a838c17efe9481b",
"text": "The tens of thousands of high-quality open source software projects on the Internet raise the exciting possibility of studying software development by finding patterns across truly large source code repositories. This could enable new tools for developing code, encouraging reuse, and navigating large projects. In this paper, we build the first giga-token probabilistic language model of source code, based on 352 million lines of Java. This is 100 times the scale of the pioneering work by Hindle et al. The giga-token model is significantly better at the code suggestion task than previous models. More broadly, our approach provides a new “lens” for analyzing software projects, enabling new complexity metrics based on statistical analysis of large corpora. We call these metrics data-driven complexity metrics. We propose new metrics that measure the complexity of a code module and the topical centrality of a module to a software project. In particular, it is possible to distinguish reusable utility classes from classes that are part of a program's core logic based solely on general information theoretic criteria.",
"title": ""
},
{
"docid": "9b942a1342eb3c4fd2b528601fa42522",
"text": "Peer and self-assessment offer an opportunity to scale both assessment and learning to global classrooms. This article reports our experiences with two iterations of the first large online class to use peer and self-assessment. In this class, peer grades correlated highly with staff-assigned grades. The second iteration had 42.9% of students’ grades within 5% of the staff grade, and 65.5% within 10%. On average, students assessed their work 7% higher than staff did. Students also rated peers’ work from their own country 3.6% higher than those from elsewhere. We performed three experiments to improve grading accuracy. We found that giving students feedback about their grading bias increased subsequent accuracy. We introduce short, customizable feedback snippets that cover common issues with assignments, providing students more qualitative peer feedback. Finally, we introduce a data-driven approach that highlights high-variance items for improvement. We find that rubrics that use a parallel sentence structure, unambiguous wording, and well-specified dimensions have lower variance. After revising rubrics, median grading error decreased from 12.4% to 9.9%.",
"title": ""
}
] |
[
{
"docid": "ef1ca66424bbf52e5029d1599eb02e39",
"text": "The pathogenesis of bacterial vaginosis remains largely elusive, although some microorganisms, including Gardnerella vaginalis, are suspected of playing a role in the etiology of this disorder. Recently culture-independent analysis of microbial ecosystems has proven its efficacy in characterizing the diversity of bacterial populations. Here, we report on the results obtained by combining culture and PCR-based methods to characterize the normal and disturbed vaginal microflora. A total of 150 vaginal swab samples from healthy women (115 pregnant and 35 non-pregnant) were categorized on the basis of Gram stain of direct smear as grade I (n = 112), grade II (n = 26), grade III (n = 9) or grade IV (n = 3). The composition of the vaginal microbial community of eight of these vaginal swabs (three grade I, two grade II and three grade III), all from non-pregnant women, were studied by culture and by cloning of the 16S rRNA genes obtained after direct amplification. Forty-six cultured isolates were identified by tDNA-PCR, 854 cloned 16S rRNA gene fragments were analysed of which 156 by sequencing, yielding a total of 38 species, including 9 presumptively novel species with at least five species that have not been isolated previously from vaginal samples. Interestingly, cloning revealed that Atopobium vaginae was abundant in four out of the five non-grade I specimens. Finally, species specific PCR for A. vaginae and Gardnerella vaginalis pointed to a statistically significant co-occurrence of both species in the bacterial vaginosis samples. Although historically the literature regarding bacterial vaginosis has largely focused on G. vaginalis in particular, several findings of this study – like the abundance of A. vaginae in disturbed vaginal microflora and the presence of several novel species – indicate that much is to be learned about the composition of the vaginal microflora and its relation to the etiology of BV.",
"title": ""
},
{
"docid": "11775f58f85bc3127a5857214ed20df0",
"text": "The immune system can be defined as a complex system that protects the organism against organisms or substances that might cause infection or disease. One of the most fascinating characteristics of the immune system is its capability to recognize and respond to pathogens with significant specificity. Innate and adaptive immune responses are able to recognize for‐ eign structures and trigger different molecular and cellular mechanisms for antigen elimina‐ tion. The immune response is critical to all individuals; therefore numerous changes have taken place during evolution to generate variability and specialization, although the im‐ mune system has conserved some important features over millions of years of evolution that are common for all species. The emergence of new taxonomic categories coincided with the diversification of the immune response. Most notably, the emergence of vertebrates coincid‐ ed with the development of a novel type of immune response. Apparently, vertebrates in‐ herited innate immunity from their invertebrate ancestors [1].",
"title": ""
},
{
"docid": "9970c9a191d9223448d205f0acec6976",
"text": "This paper presents the complete development and analysis of a soft robotic platform that exhibits peristaltic locomotion. The design principle is based on the antagonistic arrangement of circular and longitudinal muscle groups of Oligochaetes. Sequential antagonistic motion is achieved in a flexible braided mesh-tube structure using a nickel titanium (NiTi) coil actuators wrapped in a spiral pattern around the circumference. An enhanced theoretical model of the NiTi coil spring describes the combination of martensite deformation and spring elasticity as a function of geometry. A numerical model of the mesh structures reveals how peristaltic actuation induces robust locomotion and details the deformation by the contraction of circumferential NiTi actuators. Several peristaltic locomotion modes are modeled, tested, and compared on the basis of speed. Utilizing additional NiTi coils placed longitudinally, steering capabilities are incorporated. Proprioceptive potentiometers sense segment contraction, which enables the development of closed-loop controllers. Several appropriate control algorithms are designed and experimentally compared based on locomotion speed and energy consumption. The entire mechanical structure is made of flexible mesh materials and can withstand significant external impact during operation. This approach allows a completely soft robotic platform by employing a flexible control unit and energy sources.",
"title": ""
},
{
"docid": "778e431e83adedb8172cdb55d303c0cc",
"text": "As digital visualization tools have become more ubiquitous, humanists have adopted many applications such as GIS mapping, graphs, and charts for statistical display that were developed in other disciplines. But, I will argue, such graphical tools are a kind of intellectual Trojan horse, a vehicle through which assumptions about what constitutes information swarm with potent force. These assumptions are cloaked in a rhetoric taken wholesale from the techniques of the empirical sciences that conceals their epistemological biases under a guise of familiarity. So naturalized are the google maps and bar charts generated from spread sheets that they pass as unquestioned representations of “what is.” This is the hallmark of realist models of knowledge and needs to be subjected to a radical critique to return the humanistic tenets of constructedness and interpretation to the fore. Realist approaches depend above all upon an idea that phenomena are observer-independent and can be characterized as data. Data pass themselves off as mere descriptions of a priori conditions. Rendering observation (the act of creating a statistical, empirical, or subjective account or image) as if it were the same as the phenomena observed collapses the critical distance between the phenomenal world and its interpretation, undoing the basis of interpretation on which humanistic knowledge production is based. We know this. But we seem ready and eager to suspend critical judgment in a rush to visualization. At the very least, humanists beginning to play at the intersection of statistics and graphics ought to take a detour through the substantial discussions of the sociology of knowledge and its developed critique of realist models of data gathering. 1 At best, we need to take on the challenge of developing graphical expressions rooted in and appropriate to interpretative activity. Because realist approaches to visualization assume transparency and equivalence, as if the phenomenal world were self-evident and the apprehension of it a mere mechanical task, they are fundamentally at odds with approaches to humanities scholarship premised on constructivist principles. I would argue that even for realist models, those that presume an observer-independent reality available to description, the methods of presenting ambiguity and uncertainty in more nuanced terms would be useful. Some significant progress is being made in visualizing uncertainty in data models for GIS, decision-making, archaeological research and other domains. But an important distinction needs to be clear from the outset: the task of representing ambiguity and uncertainty has to be distinguished from a second task – that of using ambiguity and uncertainty as the basis on which a representation is constructed. This is the difference between putting many kinds of points on a map to show degrees of certainty by shades of color, degrees of crispness, transparency etc., and creating a map whose basic coordinate grid is constructed as an effect of these ambiguities. In the first instance, we have a standard map with a nuanced symbol set. In the second, we create a non-standard map that expresses the constructed-ness of space. Both rely on rethinking our approach to visualization and the assumptions that underpin it.",
"title": ""
},
{
"docid": "384a0a9d9613750892225562cb5ff113",
"text": "Large scale, high concurrency, and vast amount of data are important trends for the new generation of website. Node.js becomes popular and successful to build data-intensive web applications. To study and compare the performance of Node.js, Python-Web and PHP, we used benchmark tests and scenario tests. The experimental results yield some valuable performance data, showing that PHP and Python-Web handle much less requests than that of Node.js in a certain time. In conclusion, our results clearly demonstrate that Node.js is quite lightweight and efficient, which is an idea fit for I/O intensive websites among the three, while PHP is only suitable for small and middle scale applications, and Python-Web is developer friendly and good for large web architectures. To the best of our knowledge, this is the first paper to evaluate these Web programming technologies with both objective systematic tests (benchmark) and realistic user behavior tests (scenario), especially taking Node.js as the main topic to discuss.",
"title": ""
},
{
"docid": "905d6847be18d7d200fa4224f4cbb411",
"text": "Reliability of Active Front End (AFE) converter can be improved if converter faults can be identified before startup. Startup diagnostics of the LCL filter can be an useful tool to accomplish this. An algorithm based on Fast Fourier Transform (FFT) of the filter step response is shown to be able to detect variation of filter components. This is extended to consider conditions of short circuit and open circuit of filter components. This method can identify failed components in a LCL filter before operating the converter, which otherwise may lead to undesirable operation. Analytical expressions are derived for the frequency spectrum of the LCL filter during step excitation using continuous Fourier Transform and Discrete Fourier Transform (DFT). A finite state machine Finite State Machine (FSM) based algorithm is use to sequence the startup diagnostics before commencing normal operation of the power converter. It is shown that the additional computational resource required to perform the diagnostic algorithm is small when compared with the overall inverter control program. The diagnostic functions can be readily implemented in advanced digital controllers that are available today. The spectral analysis is supported by simulations and experimental results which validate the proposed method.",
"title": ""
},
{
"docid": "5f5cf5235c10fe84e39e6725705a9940",
"text": "A fully automatic method for descreening halftone images is presented based on convolutional neural networks with end-to-end learning. Incorporating context level information, the proposed method not only removes halftone artifacts but also synthesizes the fine details lost during halftone. The method consists of two main stages. In the first stage, intrinsic features of the scene are extracted, the low-frequency reconstruction of the image is estimated, and halftone patterns are removed. For the intrinsic features, the edges and object-categories are estimated and fed to the next stage as strong visual and contextual cues. In the second stage, fine details are synthesized on top of the low-frequency output based on an adversarial generative model. In addition, the novel problem of rescreening is addressed, where a natural input image is halftoned so as to be similar to a separately given reference halftone image. To this end, a two-stage convolutional neural network is also presented. Both networks are trained with millions of before-and-after example image pairs of various halftone styles. Qualitative and quantitative evaluations are provided, which demonstrates the effectiveness of the proposed methods.",
"title": ""
},
{
"docid": "6105d4250286a7a90fe20e6b1ec8a6d3",
"text": "A well-known attack on RSA with low secret-exponent d was given by Wiener about 15 years ago. Wiener showed that using continued fractions, one can efficiently recover the secret-exponent d from the public key (N, e) as long as d < N. Interestingly, Wiener stated that his attack may sometimes also work when d is slightly larger than N . This raises the question of how much larger d can be: could the attack work with non-negligible probability for d = N 1/4+ρ for some constant ρ > 0? We answer this question in the negative by proving a converse to Wiener’s result. Our result shows that, for any fixed > 0 and all sufficiently large modulus lengths, Wiener’s attack succeeds with negligible probability over a random choice of d < N δ (in an interval of size Ω(N )) as soon as δ > 1/4 + . Thus Wiener’s success bound d < N 1/4 for his algorithm is essentially tight. We also obtain a converse result for a natural class of extensions of the Wiener attack, which are guaranteed to succeed even when δ > 1/4. The known attacks in this class (by Verheul and Van Tilborg and Dujella) run in exponential time, so it is natural to ask whether there exists an attack in this class with subexponential run-time. Our second converse result answers this question also in the negative.",
"title": ""
},
{
"docid": "d15dc60ef2fb1e6096a3aba372698fd9",
"text": "One of the most interesting applications of Industry 4.0 paradigm is enhanced process control. Traditionally, process control solutions based on Cyber-Physical Systems (CPS) consider a top-down view where processes are represented as executable high-level descriptions. However, most times industrial processes follow a bottom-up model where processes are executed by low-level devices which are hard-programmed with the process to be executed. Thus, high-level components only may supervise the process execution as devices cannot modify dynamically their behavior. Therefore, in this paper we propose a vertical CPS-based solution (including a reference and a functional architecture) adequate to perform enhanced process control in Industry 4.0 scenarios with a bottom-up view. The proposed solution employs an event-driven service-based architecture where control is performed by means of finite state machines. Furthermore, an experimental validation is provided proving that in more than 97% of cases the proposed solution allows a stable and effective control.",
"title": ""
},
{
"docid": "1dbb04e806b1fd2a8be99633807d9f4d",
"text": "Realistically animated fluids can add substantial realism to interactive applications such as virtual surgery simulators or computer games. In this paper we propose an interactive method based on Smoothed Particle Hydrodynamics (SPH) to simulate fluids with free surfaces. The method is an extension of the SPH-based technique by Desbrun to animate highly deformable bodies. We gear the method towards fluid simulation by deriving the force density fields directly from the Navier-Stokes equation and by adding a term to model surface tension effects. In contrast to Eulerian grid-based approaches, the particle-based approach makes mass conservation equations and convection terms dispensable which reduces the complexity of the simulation. In addition, the particles can directly be used to render the surface of the fluid. We propose methods to track and visualize the free surface using point splatting and marching cubes-based surface reconstruction. Our animation method is fast enough to be used in interactive systems and to allow for user interaction with models consisting of up to 5000 particles.",
"title": ""
},
{
"docid": "e14c9687e90cb46492441d01a972bf57",
"text": "This paper describes our efforts to design a cognitive architecture for object recognition in video. Unlike most efforts in computer vision, our work proposes a Bayesian approach to object recognition in video, using a hierarchical, distributed architecture of dynamic processing elements that learns in a self-organizing way to cluster objects in the video input. A biologically inspired innovation is to implement a top-down pathway across layers in the form of causes, creating effectively a bidirectional processing architecture with feedback. To simplify discrimination, overcomplete representations are utilized. Both inference and parameter learning are performed using empirical priors, while imposing appropriate sparseness constraints. Preliminary results show that the cognitive architecture has features that resemble the functional organization of the early visual cortex. One example showing the use of top-down connections is given to disambiguate a synthetic video from correlated noise.",
"title": ""
},
{
"docid": "6f15684a1ad93edb75d2e865f03ad30a",
"text": "Social capital has been identified as crucial to the fostering of resilience in rapidly expanding cities of the Global South. The purpose of this article is to better understand the complexities of urban social interaction and how such interaction can constitute ‘capital’ in achieving urban resilience. A concept analysis was conducted to establish what constitutes social capital, its relevance to vulnerable urban settings and how it can be measured. Social capital is considered to be constituted of three forms of interaction: bonds, bridges and linkages. The characteristics of these forms of interaction may vary according to the social, political, cultural and economic diversity to be found within vulnerable urban settings. A framework is outlined to explore the complex nature of social capital in urban settings. On the basis of an illustrative case study, indicators are established to indicate how culturally specific indicators are required to measure social capital that are sensitive to multiple levels of analysis and the development of a multidimensional framework. The framework outlined ought to be adapted to context and validated by future research.",
"title": ""
},
{
"docid": "37f5fcde86e30359e678ff3f957e3c7e",
"text": "A Phase I dose-proportionality study is an essential tool to understand drug pharmacokinetic dose-response relationship in early clinical development. There are a number of different approaches to the assessment of dose proportionality. The confidence interval (CI) criteria approach, a staitistically sound and clinically relevant approach, has been proposed to detect dose-proportionality (Smith, et al. 2000), by which the proportionality is declared if the 90% CI for slope is completely contained within the pre-determined critical interval. This method, enhancing the information from a clinical dose-proportionality study, has gradually drawn attention. However, exact power calculation of dose proportinality studies based on CI criteria poses difficulity for practioners since the methodology was essentailly from two one-sided tests (TOST) procedure for the slope, which should be unit under proportionality. It requires sophisticated numerical integration, and it is not available in statistical software packages. This paper presents a SAS Macro to compute the empirical power for the CI-based dose proportinality studies. The resulting sample sizes and corresponding empirical powers suggest that this approach is powerful in detecting dose-proportionality under commonly used sample sizes for phase I studies.",
"title": ""
},
{
"docid": "186b616c56df44ad55cb39ee63ebe906",
"text": "RIPEMD-160 is a fast cryptographic hash function that is tuned towards software implementations on 32-bit architectures. It has evolved from the 256-bit extension of MD4, which was introduced in 1990 by Ron Rivest [20, 21]. Its main design feature are two different and independent parallel chains, the result of which are combined at the end of every application of the compression function. As suggested by its name, RIPEMD-160 offers a 160-bit result. It is intended to provide a high security level for the next 10 years or more. RIPEMD-128 is a faster variant of RIPEMD-160, which provides a 128-bit result. Together with SHA-1, RIPEMD-160 and RIPEMD-128 have been included in the International Standard ISO/IEC 10118-3, the publication of which is expected for late 1997 [17]. The goal of this article is to motivate the existence of RIPEMD160, to explain the main design features and to provide a concise description of the algorithm.",
"title": ""
},
{
"docid": "6fb06fff9f16024cf9ccf9a782bffecd",
"text": "In this chapter, we discuss 3D compression techniques for reducing the delays in transmitting triangle meshes over the Internet. We first explain how vertex coordinates, which represent surface samples may be compressed through quantization, prediction, and entropy coding. We then describe how the connectivity, which specifies how the surface interpolates these samples, may be compressed by compactly encoding the parameters of a connectivity-graph construction process and by transmitting the vertices in the order in which they are encountered by this process. The storage of triangle meshes compressed with these techniques is usually reduced to about a byte per triangle. When the exact geometry and connectivity of the mesh are not essential, the triangulated surface may be simplified or retiled. Although simplification techniques and the progressive transmission of refinements may be used as a compression tool, we focus on recently proposed retiling techniques designed specifically to improve 3D compression. They are often able to reduce the total storage, which combines coordinates and connectivity, to half-a-bit per triangle without exceeding a mean square error of 1/10,000 of the diagonal of a box that contains the solid.",
"title": ""
},
{
"docid": "c7cd22329f1acd70cb27c08b71a73383",
"text": "The coming century is surely the century of data. A combination of blind faith and serious purpose makes our society invest massively in the collection and processing of data of all kinds, on scales unimaginable until recently. Hyperspectral Imagery, Internet Portals, Financial tick-by-tick data, and DNA Microarrays are just a few of the betterknown sources, feeding data in torrential streams into scientific and business databases worldwide. In traditional statistical data analysis, we think of observations of instances of particular phenomena (e.g. instance ↔ human being), these observations being a vector of values we measured on several variables (e.g. blood pressure, weight, height, ...). In traditional statistical methodology, we assumed many observations and a few, wellchosen variables. The trend today is towards more observations but even more so, to radically larger numbers of variables – voracious, automatic, systematic collection of hyper-informative detail about each observed instance. We are seeing examples where the observations gathered on individual instances are curves, or spectra, or images, or even movies, so that a single observation has dimensions in the thousands or billions, while there are only tens or hundreds of instances available for study. Classical methods are simply not designed to cope with this kind of explosive growth of dimensionality of the observation vector. We can say with complete confidence that in the coming century, high-dimensional data analysis will be a very significant activity, and completely new methods of high-dimensional data analysis will be developed; we just don’t know what they are yet. Mathematicians are ideally prepared for appreciating the abstract issues involved in finding patterns in such high-dimensional data. Two of the most influential principles in the coming century will be principles originally discovered and cultivated by mathematicians: the blessings of dimensionality and the curse of dimensionality. The curse of dimensionality is a phrase used by several subfields in the mathematical sciences; I use it here to refer to the apparent intractability of systematically searching through a high-dimensional space, the apparent intractability of accurately approximating a general high-dimensional function, the apparent intractability of integrating a high-dimensional function. The blessings of dimensionality are less widely noted, but they include the concentration of measure phenomenon (so-called in the geometry of Banach spaces), which means that certain random fluctuations are very well controlled in high dimensions and the success of asymptotic methods, used widely in mathematical statistics and statistical physics, which suggest that statements about very high-dimensional settings may be made where moderate dimensions would be too complicated. There is a large body of interesting work going on in the mathematical sciences, both to attack the curse of dimensionality in specific ways, and to extend the benefits",
"title": ""
},
{
"docid": "fefd1c20391ac59698c80ab9c017bae3",
"text": "Compensating changes between a subjects' training and testing session in brain-computer interfacing (BCI) is challenging but of great importance for a robust BCI operation. We show that such changes are very similar between subjects, and thus can be reliably estimated using data from other users and utilized to construct an invariant feature space. This novel approach to learning from other subjects aims to reduce the adverse effects of common nonstationarities, but does not transfer discriminative information. This is an important conceptual difference to standard multi-subject methods that, e.g., improve the covariance matrix estimation by shrinking it toward the average of other users or construct a global feature space. These methods do not reduces the shift between training and test data and may produce poor results when subjects have very different signal characteristics. In this paper, we compare our approach to two state-of-the-art multi-subject methods on toy data and two datasets of EEG recordings from subjects performing motor imagery. We show that it can not only achieve a significant increase in performance, but also that the extracted change patterns allow for a neurophysiologically meaningful interpretation.",
"title": ""
},
{
"docid": "f06cf2892c85fc487d50c17a87061a0d",
"text": "Decision-making invokes two fundamental axes of control: affect or valence, spanning reward and punishment, and effect or action, spanning invigoration and inhibition. We studied the acquisition of instrumental responding in healthy human volunteers in a task in which we orthogonalized action requirements and outcome valence. Subjects were much more successful in learning active choices in rewarded conditions, and passive choices in punished conditions. Using computational reinforcement-learning models, we teased apart contributions from putatively instrumental and Pavlovian components in the generation of the observed asymmetry during learning. Moreover, using model-based fMRI, we showed that BOLD signals in striatum and substantia nigra/ventral tegmental area (SN/VTA) correlated with instrumentally learnt action values, but with opposite signs for go and no-go choices. Finally, we showed that successful instrumental learning depends on engagement of bilateral inferior frontal gyrus. Our behavioral and computational data showed that instrumental learning is contingent on overcoming inherent and plastic Pavlovian biases, while our neuronal data showed this learning is linked to unique patterns of brain activity in regions implicated in action and inhibition respectively.",
"title": ""
},
{
"docid": "b3be9d730d982c66657eceacb9d4e526",
"text": "Ontology Matching aims to find the semantic correspondences between ontologies that belong to a single domain but that have been developed separately. However, there are still some problem areas to be solved, because experts are still needed to supervise the matching processes and an efficient way to reuse the alignments has not yet been found. We propose a novel technique named Reverse Ontology Matching, which aims to find the matching functions that were used in the original process. The use of these functions is very useful for aspects such as modeling behavior from experts, performing matching-by-example, reverse engineering existing ontology matching tools or compressing ontology alignment repositories. Moreover, the results obtained from a widely used benchmark dataset provide evidence of the effectiveness of this approach.",
"title": ""
},
{
"docid": "1fbf8b8ec80e7be388b52d4cbb57dfa8",
"text": "Quadcopter also called as quadrotor helicopter, is popular in Unmanned Aerial Vehicles (UAV). They are widely used for variety of applications due to its small size and high stability. In this paper design and development of remote controlled quadcopter using PID (Proportional Integral Derivtive) controller implemented with Ardupilot Mega board is presented. The system consists of IMU (Inertial Measurement Unit) which consists of accelerometer and gyro sensors to determine the system orientation and speed control of four BLDC motors to enable the quadcopter fly in six directions. Simulations analysis of quadcopter is carried out using MATLAB Simulink. Pitch, roll and yaw responses of quadcopter is obtained and PID controller is used to stabilize the system response. Finally the prototype of quadcopter is build PID logic is embedded on it. The working and performance of quadcopter is tested and desired outputs were obtained.",
"title": ""
}
] |
scidocsrr
|
0c815c1267ef9d34a647fc686a345a0e
|
Framework for developing volcanic fragility and vulnerability functions for critical infrastructure
|
[
{
"docid": "4630cb81feb8519de1e12d9061d557f3",
"text": "Estimation of fragility functions using dynamic structural analysis is an important step in a number of seismic assessment procedures. This paper discusses the applicability of statistical inference concepts for fragility function estimation, describes appropriate fitting approaches for use with various structural analysis strategies, and studies how to fit fragility functions while minimizing the required number of structural analyses. Illustrative results show that multiple stripe analysis produces more efficient fragility estimates than incremental dynamic analysis for a given number of structural analyses, provided that some knowledge of the building’s capacity is available prior to analysis so that relevant portions of the fragility curve can be approximately identified. This finding has other benefits, as the multiple stripe analysis approach allows for different ground motions to be used for analyses at varying intensity levels, to represent the differing characteristics of low intensity and high intensity shaking. The proposed assessment approach also provides a framework for evaluating alternate analysis procedures that may arise in the future.",
"title": ""
}
] |
[
{
"docid": "163cee9000ecd421334a507958491a25",
"text": "It has been assumed that the physical separation ('air-gap') of computers provides a reliable level of security, such that should two adjacent computers become compromised, the covert exchange of data between them would be impossible. In this paper, we demonstrate BitWhisper, a method of bridging the air-gap between adjacent compromised computers by using their heat emissions and built-in thermal sensors to create a covert communication channel. Our method is unique in two respects: it supports bidirectional communication, and it requires no additional dedicated peripheral hardware. We provide experimental results based on the implementation of the Bit-Whisper prototype, and examine the channel's properties and limitations. Our experiments included different layouts, with computers positioned at varying distances from one another, and several sensor types and CPU configurations (e.g., Virtual Machines). We also discuss signal modulation and communication protocols, showing how BitWhisper can be used for the exchange of data between two computers in a close proximity (positioned 0-40 cm apart) at an effective rate of 1-8 bits per hour, a rate which makes it possible to infiltrate brief commands and exfiltrate small amount of data (e.g., passwords) over the covert channel.",
"title": ""
},
{
"docid": "5d7b5f1b46854b9f81c2c71ca1cf8b7d",
"text": "| Discovering association rules is one of the most important task in data mining. Many eecient algorithms have been proposed in the literature. The most noticeable are Apriori, Mannila's algorithm, Partition, Sampling and DIC, that are all based on the Apriori mining method: pruning the subset lattice (itemset lattice). In this paper we propose an eecient algorithm, called Close, based on a new mining method: pruning the closed set lattice (closed itemset lattice). This lattice, which is a sub-order of the subset lattice, is closely related to Wille's concept lattice in formal concept analysis. Experiments comparing Close to an optimized version of Apriori showed that Close is very eecient for mining dense and/or correlated data such as census style data, and performs reasonably well for market basket style data.",
"title": ""
},
{
"docid": "963f97c27adbc7d1136e713247e9a852",
"text": "Scheduling in the context of parallel systems is often thought of in terms of assigning tasks in a program to processors, so as to minimize the makespan. This formulation assumes that the processors are dedicated to the program in question. But when the parallel system is shared by a number of users, this is not necessarily the case. In the context of multiprogrammed parallel machines, scheduling refers to the execution of threads from competing programs. This is an operating system issue, involved with resource allocation, not a program development issue. Scheduling schemes for multiprogrammed parallel systems can be classi ed as one or two leveled. Single-level scheduling combines the allocation of processing power with the decision of which thread will use it. Two level scheduling decouples the two issues: rst, processors are allocated to the job, and then the job's threads are scheduled using this pool of processors. The processors of a parallel system can be shared in two basic ways, which are relevant for both one-level and two-level scheduling. One approach is to use time slicing, e.g. when all the processors in the system (or all the processors in the pool) service a global queue of ready threads. The other approach is to use space slicing, and partition the processors statically or dynamically among the di erent jobs. As these approaches are orthogonal to each other, it is also possible to combine them in various ways; for example, this is often done in gang scheduling. Systems using the various approaches are described, and the implications of the di erent mechanisms are discussed. The goals of this survey are to describe the many di erent approaches within a uni ed framework based on the mechanisms used to achieve multiprogramming, and at the same time document commercial systems that have not been described in the open literature.",
"title": ""
},
{
"docid": "524a7c387364119eef34befa28fded00",
"text": "In this thesis work, we investigated the formation and shape dynamics of artificially created lipid nanotubes, and explored the on-demand generation of intercellular nanotubes. Methods and techniques to produce networks of phospholipid vesicles and lipid nanotubes, as well as networks of biological cells and lipid nanotubes, are described in this thesis. We also describe means to transport membrane material and network-internalized molecules and particles through the nanotubes. The capabilities, limitations and experimental requirements of these methods were analyzed. Starting from simple networks interconnecting two giant liposomes, freefloating floating lipid nanotubes were produced by electrofission. After releasing a lipid nanotube of several hundreds of micrometers in length from its suspension points, shape transformations through different stages were observed and characterized, and the influence of membrane-embedded molecules on these transformations were established. A method was developed for on-demand generation of nanotubes between biological cells by means of micromanipulation and microinjection techniques originally developed for vesicle-nanotube networks. The new experimental model structures can greatly facilitate fundamental studies of cell-to-cell communication, the exchange of cell constituents and components, and the dynamics of biochemical reactions in native cell environments. Lastly, we investigated the cells' capability to probe free space in its immediate vicinity. We designed a micropatterning strategy for selective cell immobilization and directed cell growth. A new microfabrication protocol for high-resolution patterngeneration of Teflon AF was developed for that purpose. The new surfaces enabled cell growth in specific orientations to each other, which allowed us to determine the distance-requirements for tunneling nanotube-like conduit formation. The research results collected in this thesis represent a systematic approach towards on-demand generation and application of intercellular lipid nanotube connections, which is of importance for the understanding of, and eventually, full control over cellular communication networks.",
"title": ""
},
{
"docid": "441633276271b94dc1bd3e5e28a1014d",
"text": "While a large number of consumers in the US and Europe frequently shop on the Internet, research on what drives consumers to shop online has typically been fragmented. This paper therefore proposes a framework to increase researchers’ understanding of consumers’ attitudes toward online shopping and their intention to shop on the Internet. The framework uses the constructs of the Technology Acceptance Model (TAM) as a basis, extended by exogenous factors and applies it to the online shopping context. The review shows that attitudes toward online shopping and intention to shop online are not only affected by ease of use, usefulness, and enjoyment, but also by exogenous factors like consumer traits, situational factors, product characteristics, previous online shopping experiences, and trust in online shopping.",
"title": ""
},
{
"docid": "53d14e6dc9af930b5866b973731df5f5",
"text": "In recent years, malware has emerged as a critical security threat. In addition, malware authors continue to embed numerous anti-detection features to evade the existing malware detection approaches. Against this advanced class of malicious programs, dynamic behavior-based malware detection approaches outperform the traditional signature-based approaches by neutralizing the effects of obfuscation and morphing techniques. The majority of dynamic behavior detectors rely on system-calls to model the infection and propagation dynamics of malware. However, these approaches do not account an important anti-detection feature of modern malware, i.e., systemcall injection attack. This attack allows the malicious binaries to inject irrelevant and independent system-calls during the program execution thus modifying the execution sequences defeating the existing system-call-based detection. To address this problem, we propose an evasion-proof solution that is not vulnerable to system-call injection attacks. Our proposed approach characterizes program semantics using asymptotic equipartition property (AEP) mainly applied in information theoretic domain. The AEP allows us to extract information-rich call sequences that are further quantified to detect the malicious binaries. Furthermore, the proposed detection model is less vulnerable to call-injection attacks as the discriminating components are not directly visible to malware authors. We run a thorough set of experiments to evaluate our solution and compare it with the existing system-call-based malware detection techniques. The results demonstrate that the proposed solution is effective in identifying real malware instances.",
"title": ""
},
{
"docid": "dea542750f52af43a9cc2418946582fe",
"text": "Interleaving is an online evaluation method that compares two ranking functions by mixing their results and interpreting the users' click feedback. An important property of an interleaving method is its sensitivity, i.e. the ability to obtain reliable comparison outcomes with few user interactions. Several methods have been proposed so far to improve interleaving sensitivity, which can be roughly divided into two areas: (a) methods that optimize the credit assignment function (how the click feedback is interpreted), and (b) methods that achieve higher sensitivity by controlling the interleaving policy (how often a particular interleaved result page is shown).\n In this paper, we propose an interleaving framework that generalizes the previously studied interleaving methods in two aspects. First, it achieves a higher sensitivity by performing a joint data-driven optimization of the credit assignment function and the interleaving policy. Second, we formulate the framework to be general w.r.t. the search domain where the interleaving experiment is deployed, so that it can be applied in domains with grid-based presentation, such as image search. In order to simplify the optimization, we additionally introduce a stratified estimate of the experiment outcome. This stratification is also useful on its own, as it reduces the variance of the outcome and thus increases the interleaving sensitivity.\n We perform an extensive experimental study using large-scale document and image search datasets obtained from a commercial search engine. The experiments show that our proposed framework achieves marked improvements in sensitivity over effective baselines on both datasets.",
"title": ""
},
{
"docid": "f63d6cd35ac9ea46ee2a162fe8f68efa",
"text": "In the fashion industry, order scheduling focuses on the assignment of production orders to appropriate product ion lines. In reality, before a new order can be put into production, a series of activities known as pre-production events need t o be completed. In addition, in real production process, owing to various uncertainties, the daily production quantity of each order is not always as expected. In this research, by conside ring the pre-production events and the uncertainties in the dail y production quantity, robust order scheduling problems in the fashion industry are investigated with the aid of a multi-objective evolutionary algorithm (MOEA) called nondominated sorting adaptive differential evolution (NSJADE). The experimental results illustrate that it is of paramount importance to consider pre-production events in order scheduling problems in the f ashion industry. We also unveil that the existence of the uncertain ties in the daily production quantity heavily affects the order scheduling.",
"title": ""
},
{
"docid": "27be379b6192aa6db9101b7ec18d5585",
"text": "In this paper, we investigate the problem of detecting depression from recordings of subjects' speech using speech processing and machine learning. There has been considerable interest in this problem in recent years due to the potential for developing objective assessments from real-world behaviors, which may provide valuable supplementary clinical information or may be useful in screening. The cues for depression may be present in “what is said” (content) and “how it is said” (prosody). Given the limited amounts of text data, even in this relatively large study, it is difficult to employ standard method of learning models from n-gram features. Instead, we learn models using word representations in an alternative feature space of valence and arousal. This is akin to embedding words into a real vector space albeit with manual ratings instead of those learned with deep neural networks [1]. For extracting prosody, we employ standard feature extractors such as those implemented in openSMILE and compare them with features extracted from harmonic models that we have been developing in recent years. Our experiments show that our features from harmonic model improve the performance of detecting depression from spoken utterances than other alternatives. The context features provide additional improvements to achieve an accuracy of about 74%, sufficient to be useful in screening applications.",
"title": ""
},
{
"docid": "642078190a7df09c19d012b492152540",
"text": "Research has examined the benefits and costs of employing adults with autism spectrum disorder (ASD) from the perspective of the employee, taxpayer and society, but few studies have considered the employer perspective. This study examines the benefits and costs of employing adults with ASD, from the perspective of employers. Fifty-nine employers employing adults with ASD in open employment were asked to complete an online survey comparing employees with and without ASD on the basis of job similarity. The findings suggest that employing an adult with ASD provides benefits to employers and their organisations without incurring additional costs.",
"title": ""
},
{
"docid": "aaf1aac789547c1bf2f918368b43c955",
"text": "Music is full of structure, including sections, sequences of distinct musical textures, and the repetition of phrases or entire sections. The analysis of music audio relies upon feature vectors that convey information about music texture or pitch content. Texture generally refers to the average spectral shape and statistical fluctuation, often reflecting the set of sounding instruments, e.g. strings, vocal, or drums. Pitch content reflects melody and harmony, which is often independent of texture. Structure is found in several ways. Segment boundaries can be detected by observing marked changes in locally averaged texture. Similar sections of music can be detected by clustering segments with similar average textures. The repetition of a sequence of music often marks a logical segment. Repeated phrases and hierarchical structures can be discovered by finding similar sequences of feature vectors within a piece of music. Structure analysis can be used to construct music summaries and to assist music browsing. Introduction Probably everyone would agree that music has structure, but most of the interesting musical information that we perceive lies hidden below the complex surface of the audio signal. From this signal, human listeners perceive vocal and instrumental lines, orchestration, rhythm, harmony, bass lines, and other features. Unfortunately, music audio signals have resisted our attempts to extract this kind of information. Researchers are making progress, but so far, computers have not come near to human levels of performance in detecting notes, processing rhythms, or identifying instruments in a typical (polyphonic) music audio texture. On a longer time scale, listeners can hear structure including the chorus and verse in songs, sections in other types of music, repetition, and other patterns. One might think that without the reliable detection and identification of short-term features such as notes and their sources, that it would be impossible to deduce any information whatsoever about even higher levels of abstraction. Surprisingly, it is possible to automatically detect a great deal of information concerning music structure. For example, it is possible to label the structure of a song as AABA, meaning that opening material (the “A” part) is repeated once, then contrasting material (the “B” part) is played, and then the opening material is played again at the end. This structural description may be deduced from low-level audio signals. Consequently, a computer might locate the “chorus” of a song without having any representation of the melody or rhythm that characterizes the chorus. Underlying almost all work in this area is the concept that structure is induced by the repetition of similar material. This is in contrast to, say, speech recognition, where there is a common understanding of words, their structure, and their meaning. A string of unique words can be understood using prior knowledge of the language. Music, however, has no language or dictionary (although there are certainly known forms and conventions). In general, structure can only arise in music through repetition or systematic transformations of some kind. Repetition implies there is some notion of similarity. Similarity can exist between two points in time (or at least two very short time intervals), similarity can exist between two sequences over longer time intervals, and similarity can exist between the longer-term statistical behaviors of acoustical features. 
Different approaches to similarity will be described. Similarity can be used to segment music: contiguous regions of similar music can be grouped together into segments. Segments can then be grouped into clusters. The segmentation of a musical work and the grouping of these segments into clusters is a form of analysis or “explanation” of the music. Features and Similarity Measures A variety of approaches are used to measure similarity, but it should be clear that a direct comparison of the waveform data or individual samples will not be useful. Large differences in waveforms can be imperceptible, so we need to derive features of waveform data that are more perceptually meaningful and compare these features with an appropriate measure of similarity. Feature Vectors for Spectrum, Texture, and Pitch Different features emphasize different aspects of the music. For example, mel-frequency cepstral coefficients (MFCCs) seem to work well when the general shape of the spectrum but not necessarily pitch information is important. MFCCs generally capture overall “texture” or timbral information (what instruments are playing in what general pitch range), but some pitch information is captured, and results depend upon the number of coefficients used as well as the underlying musical signal. When pitch is important, e.g. when searching for similar harmonic sequences, the chromagram is effective. The chromagram is based on the idea that tones separated by octaves have the same perceived value of chroma (Shepard 1964). Just as we can describe the chroma aspect of pitch, the short term frequency spectrum can be restructured into the chroma spectrum by combining energy at different octaves into just one octave. The chroma vector is a discretized version of the chroma spectrum where energy is summed into 12 log-spaced divisions of the octave corresponding to pitch classes (C, C#, D, ... B). By analogy to the spectrogram, the discrete chromagram is a sequence of chroma vectors. It should be noted that there are several variations of the chromagram. The computation typically begins with a short-term Fourier transform (STFT) which is used to compute the magnitude spectrum. There are different ways to “project” this onto the 12-element chroma vector. Each STFT bin can be mapped directly to the most appropriate chroma vector element (Bartsch and Wakefield 2001), or the STFT bin data can be interpolated or windowed to divide the bin value among two neighboring vector elements (Goto 2003a). Log magnitude values can be used to emphasize the presence of low-energy harmonics. Values can also be averaged, summed, or the vector can be computed to conserve the total energy. The chromagram can also be computed by using the Wavelet transform. Regardless of the exact details, the primary attraction of the chroma vector is that, by ignoring octaves, the vector is relatively insensitive to overall spectral energy distribution and thus to timbral variations. However, since fundamental frequencies and lower harmonics of tones feature prominently in the calculation of the chroma vector, it is quite sensitive to pitch class content, making it ideal for the detection of similar harmonic sequences in music. While MFCCs and chroma vectors can be calculated from a single short term Fourier transform, features can also be obtained from longer sequences of spectral frames. Tzanetakis and Cook (1999) use means and variances of a variety of features in a one second window.
The features include the spectral centroid, spectral rolloff, spectral flux, and RMS energy. Peeters, La Burthe, and Rodet (2002) describe “dynamic” features, which model the variation of the short term spectrum over windows of about one second. In this approach, the audio signal is passed through a bank of Mel filters. The time-varying magnitudes of these filter outputs are each analyzed by a short term Fourier transform. The resulting set of features, the Fourier coefficients from each Mel filter output, is large, so a supervised learning scheme is used to find features that maximize the mutual information between feature values and hand-labeled music structures. Measures of Similarity Given a feature vector such as the MFCC or chroma vector, some measure of similarity is needed. One possibility is to compute the (dis)similarity using the Euclidean distance between feature vectors. Euclidean distance will be dependent upon feature magnitude, which is often a measure of the overall music signal energy. To avoid giving more weight to the louder moments of music, feature vectors can be normalized, for example, to a mean of zero and a standard deviation of one or to a maximum element of one. Alternatively, similarity can be measured using the scalar (dot) product of the feature vectors. This measure will be larger when feature vectors have a similar direction. As with Euclidean distance, the scalar product will also vary as a function of the overall magnitude of the feature vectors. If the dot product is normalized by the feature vector magnitudes, the result is equal to the cosine of the angle between the vectors. If the feature vectors are first normalized to have a mean of zero, the cosine angle is equivalent to the correlation, another measure that has been used with success. Lu, Wang, and Zhang (Lu, Wang, and Zhang 2004) use a constant-Q transform (CQT), and found that CQT outperforms chroma and MFCC features using a cosine distance measure. They also introduce a “structure-based” distance measure that takes into account the harmonic structure of spectra to emphasize pitch similarity over timbral similarity, resulting in additional improvement in a music structure analysis task. Similarity can be calculated between individual feature vectors, as suggested above, but similarity can also be computed over a window of feature vectors. The measure suggested by Foote (1999) is vector correlation:",
"title": ""
},
{
"docid": "7000ea96562204dfe2c0c23f7cdb6544",
"text": "In this paper, the dynamic modeling of a doubly-fed induction generator-based wind turbine connected to infinite bus (SMIB) system, is carried out in detail. In most of the analysis, the DFIG stator transients and network transients are neglected. In this paper the interfacing problems while considering stator transients and network transients in the modeling of SMIB system are resolved by connecting a resistor across the DFIG terminals. The effect of simplification of shaft system on the controller gains is also discussed. In addition, case studies are presented to demonstrate the effect of mechanical parameters and controller gains on system stability when accounting the two-mass shaft model for the drive train system.",
"title": ""
},
{
"docid": "5365f6f5174c3d211ea562c8a7fa0aab",
"text": "Generative Adversarial Networks (GANs) have become one of the dominant methods for fitting generative models to complicated real-life data, and even found unusual uses such as designing good cryptographic primitives. In this talk, we will first introduce the ba- sics of GANs and then discuss the fundamental statistical question about GANs — assuming the training can succeed with polynomial samples, can we have any statistical guarantees for the estimated distributions? In the work with Arora, Ge, Liang, and Zhang, we suggested a dilemma: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse. Such a conundrum may be solved or alleviated by designing discrimina- tor class with strong distinguishing power against the particular generator class (instead of against all possible generators.)",
"title": ""
},
{
"docid": "4dc38ae50a2c806321020de4a140ed5f",
"text": "Transcranial direct current stimulation (tDCS) is a promising technology to enhance cognitive and physical performance. One of the major areas of interest is the enhancement of memory function in healthy individuals. The early arrival of tDCS on the market for lifestyle uses and cognitive enhancement purposes lead to the voicing of some important ethical concerns, especially because, to date, there are no official guidelines or evaluation procedures to tackle these issues. The aim of this article is to review ethical issues related to uses of tDCS for memory enhancement found in the ethics and neuroscience literature and to evaluate how realistic and scientifically well-founded these concerns are? In order to evaluate how plausible or speculative each issue is, we applied the methodological framework described by Racine et al. (2014) for \"informed and reflective\" speculation in bioethics. This framework could be succinctly presented as requiring: (1) the explicit acknowledgment of factual assumptions and identification of the value attributed to them; (2) the validation of these assumptions with interdisciplinary literature; and (3) the adoption of a broad perspective to support more comprehensive reflection on normative issues. We identified four major considerations associated with the development of tDCS for memory enhancement: safety, autonomy, justice and authenticity. In order to assess the seriousness and likelihood of harm related to each of these concerns, we analyzed the assumptions underlying the ethical issues, and the level of evidence for each of them. We identified seven distinct assumptions: prevalence, social acceptance, efficacy, ideological stance (bioconservative vs. libertarian), potential for misuse, long term side effects, and the delivery of complete and clear information. We conclude that ethical discussion about memory enhancement via tDCS sometimes involves undue speculation, and closer attention to scientific and social facts would bring a more nuanced analysis. At this time, the most realistic concerns are related to safety and violation of users' autonomy by a breach of informed consent, as potential immediate and long-term health risks to private users remain unknown or not well defined. Clear and complete information about these risks must be provided to research participants and consumers of tDCS products or related services. Broader public education initiatives and warnings would also be worthwhile to reach those who are constructing their own tDCS devices.",
"title": ""
},
{
"docid": "5654b5a5f0be4a888784bdb1f94440fe",
"text": "A key goal of computer vision is to recover the underlying 3D structure from 2D observations of the world. In this paper we learn strong deep generative models of 3D structures, and recover these structures from 3D and 2D images via probabilistic inference. We demonstrate high-quality samples and report log-likelihoods on several datasets, including ShapeNet [2], and establish the first benchmarks in the literature. We also show how these models and their inference networks can be trained end-to-end from 2D images. This demonstrates for the first time the feasibility of learning to infer 3D representations of the world in a purely unsupervised manner.",
"title": ""
},
{
"docid": "fee1419f689259bc5fe7e4bfd8f0242c",
"text": "One of the challenges in computer vision is how to learn an accurate classifier for a new domain by using labeled images from an old domain under the condition that there is no available labeled images in the new domain. Domain adaptation is an outstanding solution that tackles this challenge by employing available source-labeled datasets, even with significant difference in distribution and properties. However, most prior methods only reduce the difference in subspace marginal or conditional distributions across domains while completely ignoring the source data label dependence information in a subspace. In this paper, we put forward a novel domain adaptation approach, referred to as Enhanced Subspace Distribution Matching. Specifically, it aims to jointly match the marginal and conditional distributions in a kernel principal dimensionality reduction procedure while maximizing the source label dependence in a subspace, thus raising the subspace distribution matching degree. Extensive experiments verify that it can significantly outperform several state-of-the-art methods for cross-domain image classification problems.",
"title": ""
},
{
"docid": "3172304147c13068b6cec8fd252cda5e",
"text": "Widespread growth of open wireless hotspots has made it easy to carry out man-in-the-middle attacks and impersonate web sites. Although HTTPS can be used to prevent such attacks, its universal adoption is hindered by its performance cost and its inability to leverage caching at intermediate servers (such as CDN servers and caching proxies) while maintaining end-to-end security. To complement HTTPS, we revive an old idea from SHTTP, a protocol that offers end-to-end web integrity without confidentiality. We name the protocol HTTPi and give it an efficient design that is easy to deploy for today’s web. In particular, we tackle several previously-unidentified challenges, such as supporting progressive page loading on the client’s browser, handling mixed content, and defining access control policies among HTTP, HTTPi, and HTTPS content from the same domain. Our prototyping and evaluation experience show that HTTPi incurs negligible performance overhead over HTTP, can leverage existing web infrastructure such as CDNs or caching proxies without any modifications to them, and can make many of the mixed-content problems in existing HTTPS web sites easily go away. Based on this experience, we advocate browser and web server vendors to adopt HTTPi.",
"title": ""
},
{
"docid": "2e976aa51bc5550ad14083d5df7252a8",
"text": "This paper presents a 60-dB gain bulk-driven Miller OTA operating at 0.25-V power supply in the 130-nm digital CMOS process. The amplifier operates in the weak-inversion region with input bulk-driven differential pair sporting positive feedback source degeneration for transconductance enhancement. In addition, the distributed layout configuration is used for all the transistors to mitigate the effect of halo implants for higher output impedance. Combining these two approaches, we experimentally demonstrate a high gain of over 60-dB with just 18-nW power consumption from 0.25-V power supply. The use of enhanced bulk-driven differential pair and distributed layout can help overcome some of the constraints imposed by nanometer CMOS process for high performance analog circuits in weak inversion region.",
"title": ""
},
{
"docid": "39b2c607c29c21d86b8d250886725ab3",
"text": "Central auditory processing disorder (CAPD) may be viewed as a multidimensional entity with far-reaching communicative, educational, and psychosocial implications for which differential diagnosis not only is possible but also is essential to an understanding of its impact and to the development of efficacious, deficit-specific management plans. This paper begins with a description of some behavioral central auditory assessment tools in current clinical use. Four case studies illustrate the utility of these tools in clarifying the nature of auditory difficulties. Appropriate treatment options that flow logically from the diagnoses are given in each case. The heterogeneity of the population presenting with auditory processing problems, not unexpected based on this model, is made clear, as is the clinical utility of central auditory tests in the transdisciplinary assessment and management of children's language and learning difficulties.",
"title": ""
},
{
"docid": "f6ec04f704c58514865206f759ac6d67",
"text": "Speech recognition is the key to realize man-machine interface technology. In order to improve the accuracy of speech recognition and implement the module on embedded system, an embedded speaker-independent isolated word speech recognition system based on ARM is designed after analyzing speech recognition theory. The system uses DTW algorithm and improves the algorithm using a parallelogram to extract characteristic parameters and identify the results. To finish the speech recognition independently, the system uses the STM32 series chip combined with the other external circuitry. The results of speech recognition test can achieve 90%, and which meets the real-time requirements of recognition.",
"title": ""
}
] |
scidocsrr
|
7359c0f46b2022d6470c9a09e6184bde
|
Arabic sign language recognition using the leap motion controller
|
[
{
"docid": "9e2db834da4eb5d226afec4f8dd58c4c",
"text": "This paper introduces a new hand gesture recognition technique to recognize Arabic sign language alphabet and converts it into voice correspondences to enable Arabian deaf people to interact with normal people. The proposed technique captures a color image for the hand gesture and converts it into YCbCr color space that provides an efficient and accurate way to extract skin regions from colored images under various illumination changes. Prewitt edge detector is used to extract the edges of the segmented hand gesture. Principal Component Analysis algorithm is applied to the extracted edges to form the predefined feature vectors for signs and gestures library. The Euclidean distance is used to measure the similarity between the signs feature vectors. The nearest sign is selected and the corresponding sound clip is played. The proposed technique is used to recognize Arabic sign language alphabets and the most common Arabic gestures. Specifically, we applied the technique to more than 150 signs and gestures with accuracy near to 97% at real time test for three different signers. The detailed of the proposed technique and the experimental results are discussed in this paper.",
"title": ""
}
] |
[
{
"docid": "92b61bc041b3b35687ba1cd6f5468941",
"text": "Many organizations adopt cyclical processes to articulate and engineer technological responses to their business needs. Their objective is to increase competitive advantage and add value to the organization's processes, services and deliverables, in line with the organization's vision and strategy. The major challenges in achieving these objectives include the rapid changes in the business and technology environments themselves, such as changes to business processes, organizational structure, architectural requirements, technology infrastructure and information needs. No activity or process is permanent in the organization. To achieve their objectives, some organizations have adopted an Enterprise Architecture (EA) approach, others an Information Technology (IT) strategy approach, and yet others have adopted both EA and IT strategy for the same primary objectives. The deployment of EA and IT strategy for the same aims and objectives raises question whether there is conflict in adopting both approaches. The paper and case study presented here, aimed at both academics and practitioners, examines how EA could be employed as IT strategy to address both business and IT needs and challenges.",
"title": ""
},
{
"docid": "8919fb37c9cb09e01a949849b326a02b",
"text": "Soil and nutrient depletion from intensive use of land is a critical issue for food production. An understanding of whether the soil is adequately treated with appropriate crop management practices in real-time during production cycles could prevent soil erosion and the overuse of natural or artificial resources to keep the soil healthy and suitable for planting. Precision agriculture traditionally uses expensive techniques to monitor the health of soil and crops including images from satellites and airplanes. Recently there are several studies using drones and a multitude of sensors connected to farm machinery to observe and measure the health of soil and crops during planting and harvesting. This paper describes a real-time, in-situ agricultural internet of things (IoT) device designed to monitor the state of the soil and the environment. This device was designed to be compatible with open hardware and it is composed of temperature and humidity sensors (soil and environment), electrical conductivity of the soil and luminosity, Global Positioning System (GPS) and a ZigBee radio for data communication. The field trial involved soil testing and measurements of the local climate in Sao Paulo, Brazil. The measurements of soil temperature, humidity and conductivity are used to monitor soil conditions. The local climate data could be used to support decisions about irrigation and other activities related to crop health. On-going research includes methods to reduce the consumption of energy and increase the number of sensors. Future applications include the use of the IoT device to detect fire in crops, a common problem in sugar cane crops and the integration of the IoT device with irrigation management systems to improve water usage.",
"title": ""
},
{
"docid": "bfc8a36a8b3f1d74bad5f2e25ad3aae5",
"text": "This paper presents a novel ac-dc power factor correction (PFC) power conversion architecture for a single-phase grid interface. The proposed architecture has significant advantages for achieving high efficiency, good power factor, and converter miniaturization, especially in low-to-medium power applications. The architecture enables twice-line-frequency energy to be buffered at high voltage with a large voltage swing, enabling reduction in the energy buffer capacitor size and the elimination of electrolytic capacitors. While this architecture can be beneficial with a variety of converter topologies, it is especially suited for the system miniaturization by enabling designs that operate at high frequency (HF, 3-30 MHz). Moreover, we introduce circuit implementations that provide efficient operation in this range. The proposed approach is demonstrated for an LED driver converter operating at a (variable) HF switching frequency (3-10 MHz) from 120 Vac, and supplying a 35 Vdc output at up to 30 W. The prototype converter achieves high efficiency (92%) and power factor (0.89), and maintains a good performance over a wide load range. Owing to the architecture and HF operation, the prototype achieves a high “box” power density of 50 W/in3 (“displacement” power density of 130 W/in3), with miniaturized inductors, ceramic energy buffer capacitors, and a small-volume EMI filter.",
"title": ""
},
{
"docid": "21d9828d0851b4ded34e13f8552f3e24",
"text": "Light field cameras have been recently shown to be very effective in applications such as digital refocusing and 3D reconstruction. In a single snapshot these cameras provide a sample of the light field of a scene by trading off spatial resolution with angular resolution. Current methods produce images at a resolution that is much lower than that of traditional imaging devices. However, by explicitly modeling the image formation process and incorporating priors such as Lambertianity and texture statistics, these types of images can be reconstructed at a higher resolution. We formulate this method in a variational Bayesian framework and perform the reconstruction of both the surface of the scene and the (superresolved) light field. The method is demonstrated on both synthetic and real images captured with our light-field camera prototype.",
"title": ""
},
{
"docid": "8c575ae46ac2969c19a841c7d9a8cb5a",
"text": "Constrained Local Models (CLMs) are a well-established family of methods for facial landmark detection. However, they have recently fallen out of favor to cascaded regressionbased approaches. This is in part due to the inability of existing CLM local detectors to model the very complex individual landmark appearance that is affected by expression, illumination, facial hair, makeup, and accessories. In our work, we present a novel local detector – Convolutional Experts Network (CEN) – that brings together the advantages of neural architectures and mixtures of experts in an end-toend framework. We further propose a Convolutional Experts Constrained Local Model (CE-CLM) algorithm that uses CEN as a local detector. We demonstrate that our proposed CE-CLM algorithm outperforms competitive state-of-the-art baselines for facial landmark detection by a large margin, especially on challenging profile images.",
"title": ""
},
{
"docid": "e035233d3787ea79c446d1716553d41e",
"text": "In this paper, we propose a method of detecting and classifying web application attacks. In contrast to current signature-based security methods, our solution is an ontology based technique. It specifies web application attacks by using semantic rules, the context of consequence and the specifications of application protocols. The system is capable of detecting sophisticated attacks effectively and efficiently by analyzing the specified portion of a user request where attacks are possible. Semantic rules help to capture the context of the application, possible attacks and the protocol that was used. These rules also allow inference to run over the ontological models in order to detect, the often complex polymorphic variations of web application attacks. The ontological model was developed using Description Logic that was based on the Web Ontology Language (OWL). The inference rules are Horn Logic statements and are implemented using the Apache JENA framework. The system is therefore platform and technology independent. Prior to the evaluation of the system the knowledge model was validated by using OntoClean to remove inconsistency, incompleteness and redundancy in the specification of ontological concepts. The experimental results show that the detection capability and performance of our system is significantly better than existing state of the art solutions. The system successfully detects web application attacks whilst generating few false positives. The examples that are presented demonstrate that a semantic approach can be used to effectively detect zero day and more sophisticated attacks in a real-world environment. 2013 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "8e80d35cd01bde9b34651ca14e715171",
"text": "A complementary metal-oxide semiconductor (CMOS) single-stage cascode low-noise amplifier (LNA) is presented in this paper. The microwave monolithic integrated circuit (MMIC) is fabricated using digital 90-nm silicon-on-insulator (SOI) technology. All impedance matching and bias elements are implemented on the compact chip, which has a size of 0.6 mm /spl times/ 0.3 mm. The supply voltage and supply current are 2.4 V and 17 mA, respectively. At 35 GHz and 50 /spl Omega/ source/load impedances, a gain of 11.9 dB, a noise figure of 3.6 dB, an output compression point of 4 dBm, an input return loss of 6 dB, and an output return loss of 18 dB are measured. The -3-dB frequency bandwidth ranges from 26 to 42 GHz. All results include the pad parasitics. To the knowledge of the author, the results are by far the best for a silicon-based millimeter-wave LNA reported to date. The LNA is well suited for systems operating in accordance to the local multipoint distribution service (LMDS) standards at 28 and 38 GHz and the multipoint video distribution system (MVDS) standard at 42 GHz.",
"title": ""
},
{
"docid": "fccadc4b9c930c06bced5272f6791af1",
"text": "BACKGROUND\nA sustained virological response (SVR) rate of 41% has been achieved with interferon alfa-2b plus ribavirin therapy of chronic hepatitis C. In this randomised trial, peginterferon alfa-2b plus ribavirin was compared with interferon alfa-2b plus ribavirin.\n\n\nMETHODS\n1530 patients with chronic hepatitis C were assigned interferon alfa-2b (3 MU subcutaneously three times per week) plus ribavirin 1000-1200 mg/day orally, peginterferon alfa-2b 1.5 microg/kg each week plus 800 mg/day ribavirin, or peginterferon alfa-2b 1.5 microg/kg per week for 4 weeks then 0.5 microg/kg per week plus ribavirin 1000-1200 mg/day for 48 weeks. The primary endpoint was the SVR rate (undetectable hepatitis C virus [HCV] RNA in serum at 24-week follow-up). Analyses were based on patients who received at least one dose of study medication.\n\n\nFINDINGS\nThe SVR rate was significantly higher (p=0.01 for both comparisons) in the higher-dose peginterferon group (274/511 [54%]) than in the lower-dose peginterferon (244/514 [47%]) or interferon (235/505 [47%]) groups. Among patients with HCV genotype 1 infection, the corresponding SVR rates were 42% (145/348), 34% (118/349), and 33% (114/343). The rate for patients with genotype 2 and 3 infections was about 80% for all treatment groups. Secondary analyses identified bodyweight as an important predictor of SVR, prompting comparison of the interferon regimens after adjusting ribavirin for bodyweight (mg/kg). Side-effect profiles were similar between the treatment groups.\n\n\nINTERPRETATION\nIn patients with chronic hepatitis C, the most effective therapy is the combination of peginterferon alfa-2b 1.5 microg/kg per week plus ribavirin. The benefit is mostly achieved in patients with HCV genotype 1 infections.",
"title": ""
},
{
"docid": "b6bd84055f8d04781897cb761bf3d5c1",
"text": "Content-centric network (CCN) has been considered for VANETs, due to its scalability, flexibility and security. In this paper, an extended CCN architecture is proposed to reduce critical data access time in vehicular content centric networks by storing them in the number of vehicles around the source. The critical data include sensitive data such as accidents, traffic that require quick and reliable access. The aim is to reduce access time for this type of data in the discovery phase. Each vehicle stores or updates critical data in cooperation with other vehicles in the preparation phase. During the preparation phase, the critical data spread around the source by a combination of physical movement and radio propagation. This gives the requesting vehicle a better chance to find the data in a few hops, with minimal network overhead and lower delay, during the discovery phase. The efficiency of this architecture is evaluated by simulation and shown that interest packet delay and number of duplicated data packets are generally lower than the existing methods.",
"title": ""
},
{
"docid": "bfabda524f84f5451e54767c51b05efd",
"text": "Given the scarcity of spectral resources in traditional wireless networks, it has become popular to construct visible light communication (VLC) systems. They exhibit high energy efficiency, wide unlicensed communication bandwidth as well as innate security; hence, they may become part of future wireless systems. However, considering the limited coverage and dense deployment of light-emitting diode (LED) lamps, traditional network association strategies are not readily applicable to VLC networks. Hence, by exploiting the power of online learning algorithms, we focus our attention on sophisticated multi-LED access point selection strategies conceived for hybrid indoor LiFi-WiFi communication systems. We formulate a multi-armed bandit model for supporting the decisions on beneficially selecting LED access points. Moreover, the ‘exponential weights for exploration and exploitation’ algorithm and the ‘exponentially weighted algorithm with linear programming’ algorithm are invoked for updating the decision probability distribution, followed by determining the upper bound of the associated accumulation reward function. Significant throughput gains can be achieved by the proposed network association strategies.",
"title": ""
},
{
"docid": "8a4f5f0c7dd247d5022ba401217b82ec",
"text": "Purpose: To assess the potential of Azolla filiculoides, total body collected from a rice farm in northern Iran as source for biodiesel production. Methods: Solvent extraction using Soxhlet apparatus with chloroform-methanol (2:1 v/v) solvent blend was used to obtain crude oil from freeze-dried the Azolla plant. Acid-catalyzed transesterification was used to convert fatty acids (FA), monoglycerides (MG), diglycerides (DG) and triglycerides (TG) in the extracts to fatty acid methyl esters (FAMEs) by acid-catalyzed methylation. Gas chromatography–mass spectrometry (GC–MS) was employed to analyze the FAMEs in the macroalgae biodiesel. Results: The presence of myristic acid (C14:0), palmitic acid (C16:0), palmitoleic acid (C16:1), myristic acid (C14:0), stearic acid (C18:3), oleic acid (C18:1) and linoleic acid 9C18:2), eicosenoic acid (C20:1), eicosapentaenoic acid (C20:5), erucic acid (C22:1) and docosahexaenoic acid (C22:6) in the macroalgae biodiesel was confirmed. Conclusion: The results indicate that biodiesel can be produced from macroalgae and that water fern is potentially an economical source of biodiesel due its ready availability and probable low cost.",
"title": ""
},
{
"docid": "290b56471b64e150e40211f7a51c1237",
"text": "Industrial robots are flexible machines that can be equipped with various sensors and tools to perform complex tasks. However, current robot programming languages are reaching their limits. They are not flexible and powerful enough to master the challenges posed by the intended future application areas. In the research project SoftRobot, a consortium of science and industry partners developed a software architecture that enables object-oriented software development for industrial robot systems using general-purpose programming languages. The requirements of current and future applications of industrial robots have been analysed and are reflected in the developed architecture. In this paper, an overview is given about this architecture as well as the goals that guided its development. A special focus is put on the design of the object-oriented Robotics API, which serves as a framework for developing complex robotic applications. It allows specifying real-time critical operations of robots and tools, including advanced concepts like sensor-based motions and multi-robot synchronization. The power and usefulness of the architecture is illustrated by several application examples. Its extensibility and reusability is evaluated and a comparison to other robotics frameworks is drawn.",
"title": ""
},
{
"docid": "90cbb02beb09695320d7ab72d709b70e",
"text": "Domain adaptation learning aims to solve the classification problems of unlabeled target domain by using rich labeled samples in source domain, but there are three main problems: negative transfer, under adaptation and under fitting. Aiming at these problems, a domain adaptation network based on hypergraph regularized denoising autoencoder (DAHDA) is proposed in this paper. To better fit the data distribution, the network is built with denoising autoencoder which can extract more robust feature representation. In the last feature and classification layers, the marginal and conditional distribution matching terms between domains are obtained via maximum mean discrepancy measurement to solve the under adaptation problem. To avoid negative transfer, the hypergraph regularization term is introduced to explore the high-order relationships among data. The classification performance of the model can be improved by preserving the statistical property and geometric structure simultaneously. Experimental results of 16 cross-domain transfer tasks verify that DAHDA outperforms other state-of-the-art methods.",
"title": ""
},
{
"docid": "fb00acfa20d32b727241e27bd4e3fcab",
"text": "Interest in 3D video applications and systems is growing rapidly and technology is maturating. It is expected that multiview autostereoscopic displays will play an important role in home user environments, since they support multiuser 3D sensation and motion parallax impression. The tremendous data rate cannot be handled efficiently by representation and coding formats such as MVC or MPEG-C Part 3. Multiview video plus depth (MVD) is a new format that efficiently supports such advanced 3DV systems, but this requires high-quality intermediate view synthesis. For this, a new approach is presented that separates unreliable image regions along depth discontinuities from reliable image regions, which are treated separately and fused to the final interpolated view. In contrast to previous layered approaches, our algorithm uses two boundary layers and one reliable layer, performs image-based 3D warping only, and was generically implemented, that is, does not necessarily rely on 3D graphics support. Furthermore, different hole-filling and filtering methods are added to provide high-quality intermediate views. As a result, highquality intermediate views for an existing 9-view auto-stereoscopic display as well as other stereoand multiscopic displays are presented, which prove the suitability of our approach for advanced 3DV systems.",
"title": ""
},
{
"docid": "f87e64901ede5cc11dbb14f59cd95e80",
"text": "This paper presents a methodology to develop a dimensional data warehouse by integrating all three development approaches such as supply-driven, goal-driven and demand-driven. By having the combination of all three approaches, the final design will ensure that user requirements, company interest and existing source of data are included in the model. We proposed an automatic system using ontology as the knowledge domain. Starting from operational ER-D (Entity Relationship-Diagram), the selection of facts table, verification of terms and consistency checking will utilize domain ontology. The model will also be verified against user and company requirements. Any discrepancy in the final design requires designer and user intervention. The proposed methodology is supported by a prototype using a business data warehouse example.",
"title": ""
},
{
"docid": "851de4b014dfeb6f470876896b0416b3",
"text": "The design of bioinspired systems for chemical sensing is an engaging line of research in machine olfaction. Developments in this line could increase the lifetime and sensitivity of artificial chemo-sensory systems. Such approach is based on the sensory systems known in live organisms, and the resulting developed artificial systems are targeted to reproduce the biological mechanisms to some extent. Sniffing behaviour, sampling odours actively, has been studied recently in neuroscience, and it has been suggested that the respiration frequency is an important parameter of the olfactory system, since the odour perception, especially in complex scenarios such as novel odourants exploration, depends on both the stimulus identity and the sampling method. In this work we propose a chemical sensing system based on an array of 16 metal-oxide gas sensors that we combined with an external mechanical ventilator to simulate the biological respiration cycle. The tested gas classes formed a relatively broad combination of two analytes, acetone and ethanol, in binary mixtures. Two sets of lowfrequency and high-frequency features were extracted from the acquired signals to show that the high-frequency features contain information related to the gas class. In addition, such information is available at early stages of the measurement, which could make the technique ∗Corresponding author. Email address: andrey.ziyatdinov@upc.edu (Andrey Ziyatdinov) Preprint submitted to Sensors and Actuators B: Chemical August 15, 2014 suitable in early detection scenarios. The full data set is made publicly available to the community.",
"title": ""
},
{
"docid": "ba4d30e7ea09d84f8f7d96c426e50f34",
"text": "Submission instructions: These questions require thought but do not require long answers. Please be as concise as possible. You should submit your answers as a writeup in PDF format via GradeScope and code via the Snap submission site. Submitting writeup: Prepare answers to the homework questions into a single PDF file and submit it via http://gradescope.com. Make sure that the answer to each question is on a separate page. On top of each page write the number of the question you are answering. Please find the cover sheet and the recommended templates located here: Not including the cover sheet in your submission will result in a 2-point penalty. It is also important to tag your answers correctly on Gradescope. We will deduct 5/N points for each incorrectly tagged subproblem (where N is the number of subproblems). This means you can lose up to 5 points for incorrect tagging. Put all the code for a single question into a single file and upload it. Consider a user-item bipartite graph where each edge in the graph between user U to item I, indicates that user U likes item I. We also represent the ratings matrix for this set of users and items as R, where each row in R corresponds to a user and each column corresponds to an item. If user i likes item j, then R i,j = 1, otherwise R i,j = 0. Also assume we have m users and n items, so matrix R is m × n.",
"title": ""
},
{
"docid": "66ea2ae7ba5cecbd72f118127ceeebac",
"text": "Text in scene images can provide very useful as well as vital information and hence, its detection and recognition is an important task. We propose an adaptive edge-based connected-component method for text-detection in natural scene images. The approach is based on three reasonable assumptions - (i) characters of a particular word are locally aligned in a certain direction (ii) each character is of uniform color ( iii) stroke width is almost constant for most of the characters in a particular word. We apply color quantization and use the luminance to obtain the intensity values. An improved edge-detection technique that performs adaptive thresholding is used to capture all possible text components with some non-text components initially. Then, we remove obvious non-text component based on a few heuristics. Further, we classify those components as text for which we successfully obtain two consecutive nearest-neighbors that are aligned in a direction and satisfy certain constraints based on size and inter component distance. Finally, we estimate stroke width and foreground color for each component and those having a fairly uniform value of the same are classified as text. Results on ICDAR 2003 Robust Reading Competition data show that the method is competitive for text-detection. The main advantage of our method is that it is robust to font-size, degraded intensities and complex backgrounds. Also, the use of stroke width and color in this manner for text detection is novel to the best of the authorpsilas knowledge.",
"title": ""
},
{
"docid": "d58f60013b507b286fcfc9f19304fea6",
"text": "The outcome of patients suffering from spondyloarthritis is determined by chronic inflammation and new bone formation leading to ankylosis. The latter process manifests by new cartilage and bone formation leading to joint or spine fusion. This article discusses the main mechanisms of new bone formation in spondyloarthritis. It reviews the key molecules and concepts of new bone formation and ankylosis in animal models of disease and translates these findings to human disease. In addition, proposed biomarkers of new bone formation are evaluated and the translational current and future challenges are discussed with regards to new bone formation in spondyloarthritis.",
"title": ""
}
] |
scidocsrr
|
0bcd00faa87766473fd170cb9cbb508a
|
Integrating StockTwits with sentiment analysis for better prediction of stock price movement
|
[
{
"docid": "51d3f19d0fef40b0be4f0ca94a3e834a",
"text": "Stock trend prediction based on text has gained much attention from researchers in recent years. According to investment theories, investors' behaviors will influence the stock market, and the way people invest their money is based on the history trend and information they hold. On account of this indirectly influential relationship between information of stock and stock trend, stock trend prediction based on text has been done by many researchers. However, due to the serious feature sparse problem in tweets and unreliability of using average sentiment score to indicate one day's sentiment, this work proposed a text-sentiment based stock trend prediction model with a hybrid feature selection method. Instead of applying sentiment analysis to add sentiment related features, this paper uses SentiWordNet to give an additional weight to the selected features. Besides, this work also compares the results with those of other learning algorithms. SVM linear algorithm based on leave-one-out cross validation yields the best performance of 90.34%.",
"title": ""
},
{
"docid": "f3b63508393c1daf0dab6773ac2cee38",
"text": "In this paper we investigate the complex relationship between tweet board literature (like bullishness, volume, agreement etc) with the financial market instruments (like volatility, trading volume and stock prices). We have analyzed sentiments for more than 4 million tweets between June 2010 to July 2011 for DJIA, NASDAQ-100 and 13 other big cap technological stocks. Our results show high correlation (upto 0.88 for returns) between stock prices and twitter sentiments. Further, using Granger’s Causality Analysis, we have validated that the movement of stock prices and indices are greatly affected in the short term by Twitter discussions. Finally, we have implemented Expert Model Mining System (EMMS) to demonstrate that our forecasted returns give a high value of Rsquare (0.952) with low Maximum Absolute Percentage Error (MaxAPE) of 1.76% for Dow Jones Industrial Average (DJIA). Keywords-Stock market ; sentiment analysis ; Twitter ; microblogging ; social network analysis",
"title": ""
}
] |
[
{
"docid": "357bf4403684149577f7110810046a94",
"text": "With the development of high-speed integrated circuit, the Ultra Wideband (UWB) communication system has been developed toward the direction of miniaturization, integration, which will necessarily promote UWB antenna also be developed toward the direction of miniaturization, integration, etc. This paper proposes an improved structure of Vivaldi antenna, which loads resistance at the bottom of exponential type antenna to improve Voltage Standing Wave Ratio (VSWR) at low frequency, and which opens three symmetrical unequal rectangular slots in the antenna radiation part to increase the gain. The improved Vivaldi antenna size is 150 mm * 150 mm, and the working frequency is 0.8-3.8 GHz (measured VSWR<;2).The experimental results show that the antenna has good directional radiation characteristics within the scope of bandwidth.",
"title": ""
},
{
"docid": "67443edf18a5a34ce32389b146244856",
"text": "Abstract:Round Robin Scheduling algorithm is designed especially for time sharing Operating system (OS).It is a preemptive CPU scheduling algorithm which switches between the processes when static time Quantum expires. The Round Robin Scheduling algorithm has its disadvantages that is its longer average waiting time, higher context switches, higher turnaround time .In this paper a new algorithm is presented called Average Max Round Robin (AMRR) scheduling algorithm .In this scheduling algorithm the main idea is to adjust the time Quantum dynamically so that (AMRR) perform better performance than simple Round Robin scheduling algorithm. Keywords-Operating System, Round Robin, Average Max Round Robin, Turnaround time, Waiting time, Context Switch.",
"title": ""
},
{
"docid": "5e453defd762bb4ecfae5dcd13182b4a",
"text": "We present a comprehensive lifetime prediction methodology for both intrinsic and extrinsic Time-Dependent Dielectric Breakdown (TDDB) failures to provide adequate Design-for-Reliability. For intrinsic failures, we propose applying the √E model and estimating the Weibull slope using dedicated single-via test structures. This effectively prevents lifetime underestimation, and thus relaxes design restrictions. For extrinsic failures, we propose applying the thinning model and Critical Area Analysis (CAA). In the thinning model, random defects reduce effective spaces between interconnects, causing TDDB failures. We can quantify the failure probabilities by using CAA for any design layouts of various LSI products.",
"title": ""
},
{
"docid": "28439c317c1b7f94527db6c2e0edcbd0",
"text": "AnswerBus1 is an open-domain question answering system based on sentence level Web information retrieval. It accepts users’ natural-language questions in English, German, French, Spanish, Italian and Portuguese and provides answers in English. Five search engines and directories are used to retrieve Web pages that are relevant to user questions. From the Web pages, AnswerBus extracts sentences that are determined to contain answers. Its current rate of correct answers to TREC-8’s 200 questions is 70.5% with the average response time to the questions being seven seconds. The performance of AnswerBus in terms of accuracy and response time is better than other similar systems.",
"title": ""
},
{
"docid": "8b947250873921478dd7798c47314979",
"text": "In this letter, an ultra-wideband (UWB) bandpass filter (BPF) using stepped-impedance stub-loaded resonator (SISLR) is presented. Characterized by theoretical analysis, the proposed SISLR is found to have the advantage of providing more degrees of freedom to adjust the resonant frequencies. Besides, two transmission zeros can be created at both lower and upper sides of the passband. Benefiting from these features, a UWB BPF is then investigated by incorporating this SISLR and two aperture-backed interdigital coupled-lines. Finally, this filter is built and tested. The simulated and measured results are in good agreement with each other, showing good wideband filtering performance with sharp rejection skirts outside the passband.",
"title": ""
},
{
"docid": "df0381c129339b1131897708fc00a96c",
"text": "We present a novel congestion control algorithm suitable for use with cumulative, layered data streams in the MBone. Our algorithm behaves similarly to TCP congestion control algorithms, and shares bandwidth fairly with other instances of the protocol and with TCP flows. It is entirely receiver driven and requires no per-receiver status at the sender, in order to scale to large numbers of receivers. It relies on standard functionalities of multicast routers, and is suitable for continuous stream and reliable bulk data transfer. In the paper we illustrate the algorithm, characterize its response to losses both analytically and by simulations, and analyse its behaviour using simulations and experiments in real networks. We also show how error recovery can be dealt with independently from congestion control by using FEC techniques, so as to provide reliable bulk data transfer.",
"title": ""
},
{
"docid": "327bdee6cd94def49456bdd50a207836",
"text": "A new model for perceptual evaluation of speech quality (PESQ) was recently standardised by the ITU-T as recommendation P.862. Unlike previous codec assessment models, such as PSQM and MNB (ITU-T P.861), PESQ is able to predict subjective quality with good correlation in a very wide range of conditions, that may include coding distortions, errors, noise, filtering, delay and variable delay. This paper introduces time delay identification techniques, and outlines some causes of variable delay, before describing the processes that are integrated into PESQ and specified in P.862. More information on the structure of PESQ, and performance results, can be found in the accompanying paper on the PESQ psychoacoustic model.",
"title": ""
},
{
"docid": "226d8e68f0519ddfc9e288c9151b65f0",
"text": "Vector space embeddings can be used as a tool for learning semantic relationships from unstructured text documents. Among others, earlier work has shown how in a vector space of entities (e.g. different movies) fine-grained semantic relationships can be identified with directions (e.g. more violent than). In this paper, we use stacked denoising auto-encoders to obtain a sequence of entity embeddings that model increasingly abstract relationships. After identifying directions that model salient properties of entities in each of these vector spaces, we induce symbolic rules that relate specific properties to more general ones. We provide illustrative examples to demonstrate the potential of this ap-",
"title": ""
},
{
"docid": "1242b663aa025f7041d4dda527f9de56",
"text": "Automatic forecasting of time series data is a challenging problem in many industries. Current forecast models adopted by businesses do not provide adequate means for including data representing external factors that may have a significant impact on the time series, such as weather, national events, local events, social media trends, promotions, etc. This paper introduces a novel neural network attention mechanism that naturally incorporates data from multiple external sources without the feature engineering needed to get other techniques to work. We demonstrate empirically that the proposed model achieves superior performance for predicting the demand of 20 commodities across 107 stores of one of America’s largest retailers when compared to other baseline models, including neural networks, linear models, certain kernel methods, Bayesian regression, and decision trees. Our method ultimately accounts for a 23.9% relative improvement as a result of the incorporation of external data sources, and provides an unprecedented level of descriptive ability for a neural network forecasting model.",
"title": ""
},
{
"docid": "f698eb36fb75c6eae220cf02e41bdc44",
"text": "In this paper, an enhanced hierarchical control structure with multiple current loop damping schemes for voltage unbalance and harmonics compensation (UHC) in ac islanded microgrid is proposed to address unequal power sharing problems. The distributed generation (DG) is properly controlled to autonomously compensate voltage unbalance and harmonics while sharing the compensation effort for the real power, reactive power, and unbalance and harmonic powers. The proposed control system of the microgrid mainly consists of the positive sequence real and reactive power droop controllers, voltage and current controllers, the selective virtual impedance loop, the unbalance and harmonics compensators, the secondary control for voltage amplitude and frequency restoration, and the auxiliary control to achieve a high-voltage quality at the point of common coupling. By using the proposed unbalance and harmonics compensation, the auxiliary control, and the virtual positive/negative-sequence impedance loops at fundamental frequency, and the virtual variable harmonic impedance loop at harmonic frequencies, an accurate power sharing is achieved. Moreover, the low bandwidth communication (LBC) technique is adopted to send the compensation command of the secondary control and auxiliary control from the microgrid control center to the local controllers of DG unit. Finally, the hardware-in-the-loop results using dSPACE 1006 platform are presented to demonstrate the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "01e5485dc7801f2497a03a6666970e03",
"text": "KinectFusion is a method for real-time capture of dense 3D geometry of the physical environment using a depth sensor. The system allows capture of a large dataset of 3D scene reconstructions at very low cost. In this paper we discuss the properties of the generated data and evaluate in which situations the method is accurate enough to provide ground truth models for low-level image processing tasks like stereo and optical flow estimation. The results suggest that the method is suitable for the fast acquisition of medium scale scenes (a few meters across), filling a gap between structured light and LiDAR scanners. For these scenes e.g. ground truth optical flow fields with accuracies of approximately 0.1 pixel can be created. We reveal an initial, high-quality dataset consisting of 57 scenes which can be used by researchers today, as well as a new, interactive tool implementing the KinectFusion method. Such datasets can then also be used as training data, e.g. for 3D recognition and depth inpainting.",
"title": ""
},
{
"docid": "43a7e7241f1ce7967cee750eb481ca2b",
"text": "This paper proposes and analyzes the performance of the multihop free-space optical (FSO) communication links using a heterodyne differential phase-shift keying modulation scheme operating over a turbulence induced fading channel. A novel statistical fading channel model for multihop FSO systems using channel-state-information-assisted and fixed-gain relays is developed incorporating the atmospheric turbulence, pointing errors, and path-loss effects. The closed-form expressions for the moment generating function, probability density function, and cumulative distribution function of the multihop FSO channel are derived using Meijer's G-function. They are then used to derive the fundamental limits of the outage probability and average symbol error rate. Results confirm the performance loss as a function of the number of hops. Effects of the turbulence strength varying from weak-to-moderate and moderate-to-strong turbulence, geometric loss, and pointing errors are studied. The pointing errors can be mitigated by widening the beam at the expense of the received power level, whereas narrowing the beam can reduce the geometric loss at the cost of increased misalignment effects.",
"title": ""
},
{
"docid": "c19f986d747f4d6a3448607f76d961ab",
"text": "We propose Stochastic Neural Architecture Search (SNAS), an economical endto-end solution to Neural Architecture Search (NAS) that trains neural operation parameters and architecture distribution parameters in same round of backpropagation, while maintaining the completeness and differentiability of the NAS pipeline. In this work, NAS is reformulated as an optimization problem on parameters of a joint distribution for the search space in a cell. To leverage the gradient information in generic differentiable loss for architecture search, a novel search gradient is proposed. We prove that this search gradient optimizes the same objective as reinforcement-learning-based NAS, but assigns credits to structural decisions more efficiently. This credit assignment is further augmented with locally decomposable reward to enforce a resource-efficient constraint. In experiments on CIFAR-10, SNAS takes fewer epochs to find a cell architecture with state-of-theart accuracy than non-differentiable evolution-based and reinforcement-learningbased NAS, which is also transferable to ImageNet. It is also shown that child networks of SNAS can maintain the validation accuracy in searching, with which attention-based NAS requires parameter retraining to compete, exhibiting potentials to stride towards efficient NAS on big datasets.",
"title": ""
},
{
"docid": "a0279756831dcba1dc1dee634e1d7e8b",
"text": "Join order selection plays a significant role in query performance. Many modern database engine query optimizers use join order enumerators, cost models, and cardinality estimators to choose join orderings, each of which is based on painstakingly hand-tuned heuristics and formulae. Additionally, these systems typically employ static algorithms that ignore the end result (they do not “learn from their mistakes”). In this paper, we argue that existing deep reinforcement learning techniques can be applied to query planning. These techniques can automatically tune themselves, alleviating a massive human effort. Further, deep reinforcement learning techniques naturally take advantage of feedback, learning from their successes and failures. Towards this goal, we present ReJOIN, a proof-of-concept join enumerator. We show preliminary results indicating that ReJOIN can match or outperform the Postgres optimizer.",
"title": ""
},
{
"docid": "aef25b8bc64bb624fb22ce39ad7cad89",
"text": "Depth estimation and semantic segmentation are two fundamental problems in image understanding. While the two tasks are strongly correlated and mutually beneficial, they are usually solved separately or sequentially. Motivated by the complementary properties of the two tasks, we propose a unified framework for joint depth and semantic prediction. Given an image, we first use a trained Convolutional Neural Network (CNN) to jointly predict a global layout composed of pixel-wise depth values and semantic labels. By allowing for interactions between the depth and semantic information, the joint network provides more accurate depth prediction than a state-of-the-art CNN trained solely for depth prediction [6]. To further obtain fine-level details, the image is decomposed into local segments for region-level depth and semantic prediction under the guidance of global layout. Utilizing the pixel-wise global prediction and region-wise local prediction, we formulate the inference problem in a two-layer Hierarchical Conditional Random Field (HCRF) to produce the final depth and semantic map. As demonstrated in the experiments, our approach effectively leverages the advantages of both tasks and provides the state-of-the-art results.",
"title": ""
},
{
"docid": "234bbff2601ecd555997821fb00e30fb",
"text": "We establish analytically the interactions of electromagnetic wave with a general class of spherical cloaks based on a full wave Mie scattering model. We show that for an ideal cloak the total scattering cross section is absolutely zero, but for a cloak with a specific type of loss, only the backscattering is exactly zero, which indicates the cloak can still be rendered invisible with a monostatic (transmitter and receiver in the same location) detection. Furthermore, we show that for a cloak with imperfect parameters the bistatic (transmitter and receiver in different locations) scattering performance is more sensitive to eta(t)=square root micro(t)/epsilon(t) than n(t)=square root micro(t)epsilon(t).",
"title": ""
},
{
"docid": "08faae46f98a8eab45049c9d3d7aa48e",
"text": "One of the assumptions of attachment theory is that individual differences in adult attachment styles emerge from individuals' developmental histories. To examine this assumption empirically, the authors report data from an age 18 follow-up (Booth-LaForce & Roisman, 2012) of the National Institute of Child Health and Human Development Study of Early Child Care and Youth Development, a longitudinal investigation that tracked a cohort of children and their parents from birth to age 15. Analyses indicate that individual differences in adult attachment can be traced to variations in the quality of individuals' caregiving environments, their emerging social competence, and the quality of their best friendship. Analyses also indicate that assessments of temperament and most of the specific genetic polymorphisms thus far examined in the literature on genetic correlates of attachment styles are essentially uncorrelated with adult attachment, with the exception of a polymorphism in the serotonin receptor gene (HTR2A rs6313), which modestly predicted higher attachment anxiety and which revealed a Gene × Environment interaction such that changes in maternal sensitivity across time predicted attachment-related avoidance. The implications of these data for contemporary perspectives and debates concerning adult attachment theory are discussed.",
"title": ""
},
{
"docid": "d6abc85e62c28755ed6118257d9c25c3",
"text": "MOTIVATION\nIn a previous paper, we presented a polynomial time dynamic programming algorithm for predicting optimal RNA secondary structure including pseudoknots. However, a formal grammatical representation for RNA secondary structure with pseudoknots was still lacking.\n\n\nRESULTS\nHere we show a one-to-one correspondence between that algorithm and a formal transformational grammar. This grammar class encompasses the context-free grammars and goes beyond to generate pseudoknotted structures. The pseudoknot grammar avoids the use of general context-sensitive rules by introducing a small number of auxiliary symbols used to reorder the strings generated by an otherwise context-free grammar. This formal representation of the residue correlations in RNA structure is important because it means we can build full probabilistic models of RNA secondary structure, including pseudoknots, and use them to optimally parse sequences in polynomial time.",
"title": ""
},
{
"docid": "d16e579aadf2e9c871c76a201fa5cc29",
"text": "Worldwide, buildings account for ca. 40% of the total energy consumption and ca. 20% of the total CO2 emissions. While most of the energy goes into primary building use, a significant amount of energy is wasted due to malfunctioning building system equipment and wrongly configured Building Management Systems (BMS). For example, wrongly configured setpoints or building equipment, or misplaced sensors and actuators, can contribute to deviations of the real energy consumption from the predicted one. Our paper is motivated by these posed challenges and aims at pinpointing the types of problems in the BMS components that can affect the energy efficiency of a building, as well as review the methods that can be utilized for their discovery and diagnosis. The goal of the paper is to highlight the challenges that lie in this problem domain, as well as provide a strategy how to counterfeit them.",
"title": ""
}
] |
scidocsrr
|
8506cef3444a3ec0076b5956d62bfa3e
|
Evaluating Visual Aesthetics in Photographic Portraiture
|
[
{
"docid": "c8977fe68b265b735ad4261f5fe1ec25",
"text": "We present ACQUINE - Aesthetic Quality Inference Engine, a publicly accessible system which allows users to upload their photographs and have them rated automatically for aesthetic quality. The system integrates a support vector machine based classifier which extracts visual features on the fly and performs real-time classification and prediction. As the first publicly available tool for automatically determining the aesthetic value of an image, this work is a significant first step in recognizing human emotional reaction to visual stimulus. In this paper, we discuss fundamentals behind this system, and some of the challenges faced while creating it. We report statistics generated from over 140,000 images uploaded by Web users. The system is demonstrated at http://acquine.alipr.com.",
"title": ""
},
{
"docid": "44ff9580f0ad6321827cf3f391a61151",
"text": "This paper aims to evaluate the aesthetic visual quality of a special type of visual media: digital images of paintings. Assessing the aesthetic visual quality of paintings can be considered a highly subjective task. However, to some extent, certain paintings are believed, by consensus, to have higher aesthetic quality than others. In this paper, we treat this challenge as a machine learning problem, in order to evaluate the aesthetic quality of paintings based on their visual content. We design a group of methods to extract features to represent both the global characteristics and local characteristics of a painting. Inspiration for these features comes from our prior knowledge in art and a questionnaire survey we conducted to study factors that affect human's judgments. We collect painting images and ask human subjects to score them. These paintings are then used for both training and testing in our experiments. Experimental results show that the proposed work can classify high-quality and low-quality paintings with performance comparable to humans. This work provides a machine learning scheme for the research of exploring the relationship between aesthetic perceptions of human and the computational visual features extracted from paintings.",
"title": ""
}
] |
[
{
"docid": "2f7990443281ed98189abb65a23b0838",
"text": "In recent years, there has been a tendency to correlate the origin of modern culture and language with that of anatomically modern humans. Here we discuss this correlation in the light of results provided by our first hand analysis of ancient and recently discovered relevant archaeological and paleontological material from Africa and Europe. We focus in particular on the evolutionary significance of lithic and bone technology, the emergence of symbolism, Neandertal behavioral patterns, the identification of early mortuary practices, the anatomical evidence for the acquisition of language, the",
"title": ""
},
{
"docid": "d93795318775df2c451eaf8c04a764cf",
"text": "The queries issued to search engines are often ambiguous or multifaceted, which requires search engines to return diverse results that can fulfill as many different information needs as possible; this is called search result diversification. Recently, the relational learning to rank model, which designs a learnable ranking function following the criterion of maximal marginal relevance, has shown effectiveness in search result diversification [Zhu et al. 2014]. The goodness of a diverse ranking model is usually evaluated with diversity evaluation measures such as α-NDCG [Clarke et al. 2008], ERR-IA [Chapelle et al. 2009], and D#-NDCG [Sakai and Song 2011]. Ideally the learning algorithm would train a ranking model that could directly optimize the diversity evaluation measures with respect to the training data. Existing relational learning to rank algorithms, however, only train the ranking models by optimizing loss functions that loosely relate to the evaluation measures. To deal with the problem, we propose a general framework for learning relational ranking models via directly optimizing any diversity evaluation measure. In learning, the loss function upper-bounding the basic loss function defined on a diverse ranking measure is minimized. We can derive new diverse ranking algorithms under the framework, and several diverse ranking algorithms are created based on different upper bounds over the basic loss function. We conducted comparisons between the proposed algorithms with conventional diverse ranking methods using the TREC benchmark datasets. Experimental results show that the algorithms derived under the diverse learning to rank framework always significantly outperform the state-of-the-art baselines.",
"title": ""
},
{
"docid": "7d3642cc1714951ccd9ec1928a340d81",
"text": "Electrical fuse (eFUSE) has become a popular choice to enable memory redundancy, chip identification and authentication, analog device trimming, and other applications. We will review the evolution and applications of electrical fuse solutions for 180 nm to 45 nm technologies at IBM, and provide some insight into future uses in 32 nm technology and beyond with the eFUSE as a building block for the autonomic chip of the future.",
"title": ""
},
{
"docid": "43ec6774e1352443f41faf8d3780059b",
"text": "Cloud computing is currently one of the most hyped information technology fields and it has become one of the fastest growing segments of IT. Cloud computing allows us to scale our servers in magnitude and availability in order to provide services to a greater number of end users. Moreover, adopters of the cloud service model are charged based on a pay-per-use basis of the cloud's server and network resources, aka utility computing. With this model, a conventional DDoS attack on server and network resources is transformed in a cloud environment to a new breed of attack that targets the cloud adopter's economic resource, namely Economic Denial of Sustainability attack (EDoS). In this paper, we advocate a novel solution, named EDoS-Shield, to mitigate the Economic Denial of Sustainability (EDoS) attack in the cloud computing systems. We design a discrete simulation experiment to evaluate its performance and the results show that it is a promising solution to mitigate the EDoS.",
"title": ""
},
{
"docid": "f8082d18f73bee4938ab81633ff02391",
"text": "Against the background of Moreno’s “cognitive-affective theory of learning with media” (CATLM) (Moreno, 2006), three papers on cognitive and affective processes in learning with multimedia are discussed in this commentary. The papers provide valuable insights in how cognitive processing and learning results can be affected by constructs such as “situational interest”, “positive emotions”, or “confusion”, and they suggest questions for further research in this field. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e94cc8dbf257878ea9b78eceb990cb3b",
"text": "The past two decades have seen extensive growth of sexual selection research. Theoretical and empirical work has clarified many components of pre- and postcopulatory sexual selection, such as aggressive competition, mate choice, sperm utilization and sexual conflict. Genetic mechanisms of mate choice evolution have been less amenable to empirical testing, but molecular genetic analyses can now be used for incisive experimentation. Here, we highlight some of the currently debated areas in pre- and postcopulatory sexual selection. We identify where new techniques can help estimate the relative roles of the various selection mechanisms that might work together in the evolution of mating preferences and attractive traits, and in sperm-egg interactions.",
"title": ""
},
{
"docid": "c6260516384c43d561610f52ff56aa25",
"text": "Successful monetization of user-generated-content (UGC) business calls for attracting enough users, and the right users. The defining characteristic of UGC is users are also content contributors. In this study, we analyze the impact of a UGC firm’s quality control decision on user community composition. We model two UGC firms in competition, with one permitting only high quality content while the other not controlling quality. Users differ in their valuations and the content quality they contribute. Through analyzing various equilibrium situations, we find that higher reward value generally benefits the firm without quality control. However, when the intrinsic value of contribution is low, higher reward value may surprisingly drive high valuation users away from that firm. Also somewhat interestingly, we find that higher cost of contribution may benefit the firm that does not control quality. Our work is among the first to study the business impact of quality control of UGC.",
"title": ""
},
{
"docid": "0bd0af757a365de97db204e8c5b377ca",
"text": "Mobile communications are used by more than two thirds of the world population who expect security and privacy guarantees. The 3rd Generation Partnership Project (3GPP) responsible for the worldwide standardization of mobile communication has designed and mandated the use of the AKA protocol to protect the subscribers’ mobile services. Even though privacy was a requirement, numerous subscriber location attacks have been demonstrated against AKA, some of which have been fixed or mitigated in the enhanced AKA protocol designed for 5G. In this paper, we reveal a new privacy attack against all variants of the AKA protocol, including 5G AKA, that breaches subscriber privacy more severely than known location privacy attacks do. Our attack exploits a new logical vulnerability we uncovered that would require dedicated fixes. We demonstrate the practical feasibility of our attack using low cost and widely available setups. Finally we conduct a security analysis of the vulnerability and discuss countermeasures to remedy our attack.",
"title": ""
},
{
"docid": "002bd283bd76ac47f39ea001877b4402",
"text": "Low-Power Wide-Area Network (LPWAN) heralds a promising class of technology to overcome the range limits and scalability challenges in traditional wireless sensor networks. Recently proposed Sensor Network over White Spaces (SNOW) technology is particularly attractive due to the availability and advantages of TV spectrum in long-range communication. This paper proposes a new design of SNOW that is asynchronous, reliable, and robust. It represents the first highly scalable LPWAN over TV white spaces to support reliable, asynchronous, bi-directional, and concurrent communication between numerous sensors and a base station. This is achieved through a set of novel techniques. This new design of SNOW has an OFDM based physical layer that adopts robust modulation scheme and allows the base station using a single antenna-radio (1) to send different data to different nodes concurrently and (2) to receive concurrent transmissions made by the sensor nodes asynchronously. It has a lightweight MAC protocol that (1) efficiently implements per-transmission acknowledgments of the asynchronous transmissions by exploiting the adopted OFDM design; (2) combines CSMA/CA and location-aware spectrum allocation for mitigating hidden terminal effects, thus enhancing the flexibility of the nodes in transmitting asynchronously. Hardware experiments through deployments in three radio environments - in a large metropolitan city, in a rural area, and in an indoor environment - as well as large-scale simulations demonstrated that the new SNOW design drastically outperforms other LPWAN technologies in terms of scalability, energy, and latency.",
"title": ""
},
{
"docid": "805ea1349c046008a5efd67382ff82aa",
"text": "Agent architectures need to organize themselves and adapt dynamically to changing circumstances without top-down control from a system operator. Some researchers provide this capability with complex agents that emulate human intelligence and reason explicitly about their coordination, reintroducing many of the problems of complex system design and implementation that motivated increasing software localization in the first place. Naturally occurring systems of simple agents (such as populations of insects or other animals) suggest that this retreat is not necessary. This paper summarizes several studies of such systems, and derives from them a set of general principles that artificial multiagent systems can use to support overall system behavior significantly more complex than the behavior of the individuals agents.",
"title": ""
},
{
"docid": "c5f1d5fc5c5161bc9795cdc0362b8ca7",
"text": "Bayesian optimization has become a successful tool for optimizing the hyperparameters of machine learning algorithms, such as support vector machines or deep neural networks. Despite its success, for large datasets, training and validating a single configuration often takes hours, days, or even weeks, which limits the achievable performance. To accelerate hyperparameter optimization, we propose a generative model for the validation error as a function of training set size, which is learned during the optimization process and allows exploration of preliminary configurations on small subsets, by extrapolating to the full dataset. We construct a Bayesian optimization procedure, dubbed Fabolas, which models loss and training time as a function of dataset size and automatically trades off high information gain about the global optimum against computational cost. Experiments optimizing support vector machines and deep neural networks show that Fabolas often finds high-quality solutions 10 to 100 times faster than other state-of-the-art Bayesian optimization methods or the recently proposed bandit strategy Hyperband.",
"title": ""
},
{
"docid": "8f9e3bb85b4a2fcff3374fd700ac3261",
"text": "Vehicle theft has become a pervasive problem in metropolitan cities. The aim of our work is to reduce the vehicle and fuel theft with an alert given by commonly used smart phones. The modern vehicles are interconnected with computer systems so that the information can be obtained from vehicular sources and Internet services. This provides space for tracking the vehicle through smart phones. In our work, an Advanced Encryption Standard (AES) algorithm is implemented which integrates a smart phone with classical embedded systems to avoid vehicle theft.",
"title": ""
},
{
"docid": "b858f8c81a282fbb1444ee813f47797a",
"text": "In conventional neural networks (NN) based parametric text-tospeech (TTS) synthesis frameworks, text analysis and acoustic modeling are typically processed separately, leading to some limitations. On one hand, much significant human expertise is normally required in text analysis, which presents a laborious task for researchers; on the other hand, training of the NN-based acoustic models still relies on the hidden Markov model (HMM) to obtain frame-level alignments. This acquisition process normally goes through multiple complicated stages. The complex pipeline makes constructing a NN-based parametric TTS system a challenging task. This paper attempts to bypass these limitations using a novel end-to-end parametric TTS synthesis framework, i.e. the text analysis and acoustic modeling are integrated together employing an attention-based recurrent neural network. Thus the alignments can be learned automatically. Preliminary experimental results show that the proposed system can generate moderately smooth spectral parameters and synthesize fairly intelligible speech on short utterances (less than 8 Chinese characters).",
"title": ""
},
{
"docid": "59f2822d69ffb59fafabefa16c57f6c3",
"text": "Timely and accurate detection of anomalies in massive data streams have important applications in preventing machine failures, intrusion detection, and dynamic load balancing, etc. In this paper, we introduce a new anomaly detection algorithm, which can detect anomalies in a streaming fashion by making only one pass over the data while utilizing limited storage. The algorithm uses ideas from matrix sketching and randomized low-rank matrix approximations to maintain an approximate low-rank orthogonal basis of the data in a streaming model. Using this constructed orthogonal basis, anomalies in new incoming data are detected based on a simple reconstruction error test. We theoretically prove that our algorithm compares favorably with an offline approach based on global singular value decomposition updates. The experimental results show the effectiveness and efficiency of our approach over other popular fast anomaly detection methods.",
"title": ""
},
{
"docid": "2d8baa9a78e5e20fd20ace55724e2aec",
"text": "To determine the relationship between fatigue and post-activation potentiation, we examined the effects of sub-maximal continuous running on neuromuscular function tests, as well as on the squat jump and counter movement jump in endurance athletes. The height of the squat jump and counter movement jump and the estimate of the fast twitch fiber recruiting capabilities were assessed in seven male middle distance runners before and after 40 min of continuous running at an intensity corresponding to the individual lactate threshold. The same test was then repeated after three weeks of specific aerobic training. Since the three variables were strongly correlated, only the estimate of the fast twitch fiber was considered for the results. The subjects showed a significant improvement in the fast twitch fiber recruitment percentage after the 40 min run. Our data show that submaximal physical exercise determined a change in fast twitch muscle fiber recruitment patterns observed when subjects performed vertical jumps; however, this recruitment capacity was proportional to the subjects' individual fast twitch muscle fiber profiles measured before the 40 min run. The results of the jump tests did not change significantly after the three-week training period. These results suggest that pre-fatigue methods, through sub-maximal exercises, could be used to take advantage of explosive capacity in middle-distance runners.",
"title": ""
},
{
"docid": "08084de7a702b87bd8ffc1d36dbf67ea",
"text": "In recent years, the mobile data traffic is increasing and many more frequency bands have been employed in cellular handsets. A simple π type tunable band elimination filter (BEF) with switching function has been developed using a wideband tunable surface acoustic wave (SAW) resonator circuit. The frequency of BEF is tuned approximately 31% by variable capacitors without spurious. In LTE low band, the arrangement of TX and RX frequencies is to be reversed in Band 13, 14 and 20 compared with the other bands. The steep edge slopes of the developed filter can be exchanged according to the resonance condition and switching. With combining the TX and RX tunable BEFs and the small sized broadband circulator, a new tunable duplexer has been fabricated, and its TX-RX isolation is proved to be more than 50dB in LTE low band operations.",
"title": ""
},
{
"docid": "adc9e237e2ca2467a85f54011b688378",
"text": "Quadrotors are rapidly emerging as a popular platform for unmanned aerial vehicle (UAV) research, due to the simplicity of their construction and maintenance, their ability to hover, and their vertical take off and landing (VTOL) capability. Current designs have often considered only nominal operating conditions for vehicle control design. This work seeks to address issues that arise when deviating significantly from the hover flight regime. Aided by well established research for helicopter flight control, four separate aerodynamic effects are investigated as they pertain to quadrotor flight. The effects result from either translational or vertical vehicular velocity components, and cause both moments that affect attitude control and thrust variation that affects altitude control. Where possible, a theoretical development is first presented, and is then validated through both thrust test stand measurements and vehicle flight tests using the Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC) quadrotor helicopter. The results have enabled improved controller tracking throughout the flight envelope, including at higher speeds and in gusting winds.",
"title": ""
},
{
"docid": "a497cb84141c7db35cd9a835b11f33d2",
"text": "Ubiquitous nature of online social media and ever expending usage of short text messages becomes a potential source of crowd wisdom extraction especially in terms of sentiments therefore sentiment classification and analysis is a significant task of current research purview. Major challenge in this area is to tame the data in terms of noise, relevance, emoticons, folksonomies and slangs. This works is an effort to see the effect of pre-processing on twitter data for the fortification of sentiment classification especially in terms of slang word. The proposed method of pre-processing relies on the bindings of slang words on other coexisting words to check the significance and sentiment translation of the slang word. We have used n-gram to find the bindings and conditional random fields to check the significance of slang word. Experiments were carried out to observe the effect of proposed method on sentiment classification which clearly indicates the improvements in accuracy of classification. © 2016 The Authors. Published by Elsevier B.V. Peer-review under responsibility of organizing committee of the Twelfth International Multi-Conference on Information Processing-2016 (IMCIP-2016).",
"title": ""
},
{
"docid": "8255146164ff42f8755d8e74fd24cfa1",
"text": "We present a named-entity recognition (NER) system for parallel multilingual text. Our system handles three languages (i.e., English, French, and Spanish) and is tailored to the biomedical domain. For each language, we design a supervised knowledge-based CRF model with rich biomedical and general domain information. We use the sentence alignment of the parallel corpora, the word alignment generated by the GIZA++[8] tool, and Wikipedia-based word alignment in order to transfer system predictions made by individual language models to the remaining parallel languages. We re-train each individual language system using the transferred predictions and generate a final enriched NER model for each language. The enriched system performs better than the initial system based on the predictions transferred from the other language systems. Each language model benefits from the external knowledge extracted from biomedical and general domain resources.",
"title": ""
},
{
"docid": "6d0aba91efbe627d8d98c7f49c34fe3d",
"text": "The R language, from the point of view of language design and implementation, is a unique combination of various programming language concepts. It has functional characteristics like lazy evaluation of arguments, but also allows expressions to have arbitrary side effects. Many runtime data structures, for example variable scopes and functions, are accessible and can be modified while a program executes. Several different object models allow for structured programming, but the object models can interact in surprising ways with each other and with the base operations of R. \n R works well in practice, but it is complex, and it is a challenge for language developers trying to improve on the current state-of-the-art, which is the reference implementation -- GNU R. The goal of this work is to demonstrate that, given the right approach and the right set of tools, it is possible to create an implementation of the R language that provides significantly better performance while keeping compatibility with the original implementation. \n In this paper we describe novel optimizations backed up by aggressive speculation techniques and implemented within FastR, an alternative R language implementation, utilizing Truffle -- a JVM-based language development framework developed at Oracle Labs. We also provide experimental evidence demonstrating effectiveness of these optimizations in comparison with GNU R, as well as Renjin and TERR implementations of the R language.",
"title": ""
}
] |
scidocsrr
|
bae7de726c9d85cc3191dc97a6097b51
|
Supervised Hashing for Image Retrieval via Image Representation Learning
|
[
{
"docid": "f70ff7f71ff2424fbcfea69d63a19de0",
"text": "We propose a method for learning similaritypreserving hash functions that map highdimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "98cc792a4fdc23819c877634489d7298",
"text": "This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The euclidean distance between two vectors can be efficiently estimated from their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. Experimental results show that our approach searches for nearest neighbors efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy, outperforming three state-of-the-art approaches. The scalability of our approach is validated on a data set of two billion vectors.",
"title": ""
}
] |
[
{
"docid": "d405fc2bcbdc8f65584b7977b2442d56",
"text": "Financial Industry Studies is published by the Federal Reserve Bank of Dallas. The views expressed are those of the authors and should not be attributed to the Federal Reserve Bank of Dallas or the Federal Reserve System. Articles may be reprinted on the condition that the source is credited and a copy of the publication containing the reprinted article is provided to the Financial Industry Studies Department of the Federal Reserve Bank of Dallas.",
"title": ""
},
{
"docid": "218ef054603cf5955015946f0606a614",
"text": "The purpose of this work was to obtain a componentwise breakdown of the power consumption a modern laptop. We measured the power usage of the key components in an IBM ThinkPad R40 laptop using an Agilent Oscilloscope and current probes. We obtained the power consumption for the CPU, optical drive, hard disk, display, graphics card, memory, and wireless card subsystems--either through direct measurement or subtractive measurement and calculation. Moreover, we measured the power consumption of each component for a variety of workloads. We found that total system power consumption varies a lot (8 W to 30 W) depending on the workload, and moreover that the distribution of power consumption among the components varies even more widely. We also found that though power saving techniques such as DVS can reduce CPU power considerably, the total system power is still dominated by CPU power in the case of CPU intensive workloads. The display is the other main source of power consumption in a laptop; it dominates when the CPU is idle. We also found that reducing the backlight brightness can reduce the system power significantly, more than any other display power saving techniques. Finally, we observed OS differences in the power consumption.",
"title": ""
},
{
"docid": "bae76f4857e39619f975f3db687d6223",
"text": "Athletes in any sports can greatly benefit from feedback systems for improving the quality of their training. In this paper, we present a golf swing training system which incorporates wearable motion sensors to obtain inertial information and provide feedback on the quality of movements. The sensors are placed on a golf club and athlete’s body at positions which capture the unique movements of a golf swing. We introduce a quantitative model which takes into consideration signal processing techniques on the collected data and quantifies the correctness of the performed actions. We evaluate the effectiveness of our framework on data obtained from four subjects and discuss ongoing research.",
"title": ""
},
{
"docid": "2c57323734ae69cbfa764f4f8e579489",
"text": "The increasing number of cameras, their availability to the end user and the social media platforms gave rise to the massive repositories of today’s Big Data. The largest portion of this data corresponds to unstructured image and video collections. This fact motivates the development of algorithms that would help efficient management and organization of the Big Data. This processing usually involves high level Computer Vision tasks such as object detection and recognition whose accuracy and complexity are therefore crucial. Salient object detection, which can be defined as highlighting the regions that visually stand out from the rest of the environment, can both reduce the complexity and improve the accuracy of object detection and recognition. Thus, recently there has been a growing interest in this topic. This interest is also due to many other applications of salient object detection such as media compression and summarization. This thesis focuses on this crucial problem and presents novel approaches and methods for salient object detection in digital media, using the principles of Quantum Mechanics. The contributions of this thesis can be categorized chronologically into three parts. First part is constituted of a direct application of ideas originally proposed for describing the wave nature of particles in Quantum Mechanics and expressed through Schrödinger’s Equation, to salient object detection in images. The significance of this contribution is the fact that, to the best of our knowledge, this is the first study that proposes a realizable quantum mechanical system for salient object proposals yielding an instantaneous speed in a possible physical implementation in the quantum scale. The second and main contribution of this thesis, is a spectral graph based salient object detection method, namely Quantum-Cuts. Despite the success of spectral graph based methods in many Computer Vision tasks, traditional approaches on applications of spectral graph partitioning methods offer little for the salient object detection problem which can be mapped as a foreground segmentation problem using graphs. Thus, Quantum-Cuts adopts a novel approach to spectral graph partitioning by integrating quantum mechanical concepts to Spectral Graph Theory. In particular, the probabilistic interpretation of quantum mechanical wave-functions and the unary potential fields in Quantum Mechanics when combined with the pairwise graph affinities that are widely used in Spectral Graph Theory, results into a unique optimization problem that formulates salient object detection. The optimal solution of a relaxed version of this problem is obtained via Quantum-Cuts and is proven to efficiently represent salient object regions in images. The third part of the contributions cover improvements on Quantum-Cuts by analyzing the main factors that affect its performance in salient object detection. Particularly, both unsupervised and supervised approaches are adopted in improving the exploited graph representation. The extensions on Quantum-Cuts led to computationally efficient algorithms that perform superior to the state-of-the-art in salient object detection.",
"title": ""
},
{
"docid": "aa9bfea9c679cfef5c3ad6d810873578",
"text": "-The paper deals with moment invariants, which are invariant under general affine transformation and may be used for recognition of affine-deformed objects. Our approach is based on the theory of algebraic invariants. The invariants from secondand third-order moments are derived and shown to be complete. The paper is a significant extension and generalization of recent works. Several numerical experiments dealing with pattern recognition by means of the affine moment invariants as the features are described. Feature extraction Affine transform Algebraic invariants Moment invariants Pattern recognition Image matching I. I N T R O D U C T I O N A feature-based recognition of objects or patterns independent of their position, size, orientation and other variations has been the goal of much recent research. Finding efficient invariant features is the key to solving this problem. There have been several kinds of features used for recognition. These may be divided into four groups as follows: (1) visual features (edges, textures and contours); (2) transform coefficient features (Fourier descriptors, ~''2~ Hadamard coefficients; ~3~ (3) algebraic features (based on matrix decomposition of image, see reference (4) for details); and (4) statistical features (moment invariants). In this paper, attention is paid to statistical features. Moment invariants are very useful tools for pattern recognition. They were derived by Hu tsl and they were successfully used in aircraft identification, ~61 remotely sensed data matching ~7~ and character recognition. ~s~ Further studies were made by Maitra ~m and Hsia \"°~ in order to reach higher reliability. Several effective algorithms for fast computat ion of moment invariants were recently described in references (11-13). All the above-mentioned features are invariant only under translation, rotation and scaling of the object. In this paper, our aim is to find features which are invariant under general affine transformations and which may be used for recognition of affine-deformed objects. Our approach is based on the theory of algebraic invariants. \"4~ The first attempt to find affine invariants in this way was made by Hu, ~s~ but his affine moment invariants were derived incorrectly. Several correct affine moment invariants are derived in Section 2, and their use for object recognition and scene matching is experimentally proved in Section 3. 2. A F F I N E M O M E N T INVARIANTS The affine moment invariants are derived by means of the theory of algebraic invariants. They are invariant under general affine transformation u = a 0 + a ] x + a2), v = bo + b l x + b2y. (1) The general two-dimensional (p + q)th order moments of a density distribution function p ( x , y ) are defined as: mpq = fS xPyqP(X'y) dx dy p,q = 0, !, 2 . . . . (2) For simplicity we deal only with binary objects in this paper, then p is a characteristic function of object G, and mpq=S~xPk ,qdxd) , , p,q =0 , 1,2 . . . . 13) G It is possible to generalize all the following relations and results for grey-level objects. The affine transformation (1) can be decomposed into six one-parameter transformations: 1. u = x + 7 2. u = x 3. u = ~ ' x v = y v = y + / ~ v = co.y 4. u = f i ' x 5. u = x + t ' y 6. u = x v = y t , = y v = t \" x + y. Any function F of moments which is invariant under these six transformations will be invariant under the general affine transformation (1). 
From the requirement of invariantness under these transformations we can derive the type and parameters of the function F. If we use central moments instead of general moments (2) or (3), any function of them will be invariant",
"title": ""
},
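Since the entry above is about computing affine moment invariants from central moments, a small worked sketch may help. The snippet below computes the standard first affine invariant, I1 = (mu20*mu02 - mu11^2)/mu00^4, for a point-sampled binary object and checks its invariance under a linear map; the point-cloud approximation and the |det A| weighting are my own illustrative choices, not the paper's experimental setup:

```python
import numpy as np

def central_moments(pts, w):
    # pts: (N, 2) point coordinates approximating a binary object; w: per-point weights.
    m00 = w.sum()
    cx, cy = (w @ pts) / m00
    dx, dy = pts[:, 0] - cx, pts[:, 1] - cy
    mu = lambda p, q: np.sum(w * dx**p * dy**q)
    return m00, mu(2, 0), mu(1, 1), mu(0, 2)

def affine_invariant_I1(pts, w):
    # First affine moment invariant: (mu20*mu02 - mu11^2) / mu00^4.
    m00, mu20, mu11, mu02 = central_moments(pts, w)
    return (mu20 * mu02 - mu11**2) / m00**4

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.random((500, 2))                  # stand-in for object pixel coordinates
    A = np.array([[1.3, 0.4], [-0.2, 0.9]])     # an arbitrary linear (affine) map
    warped = pts @ A.T
    w = np.ones(len(pts))
    # Weight warped points by |det A| so the discrete sum mimics the area integral.
    print(affine_invariant_I1(pts, w),
          affine_invariant_I1(warped, w * abs(np.linalg.det(A))))   # equal up to rounding
```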
{
"docid": "bc70137062d6e9739b0956e806fb85c9",
"text": "Energy disaggregation or NILM is the best solution to reduce our consumption of electricity. Many algorithms in machine learning are applied to this field. However, the classification results from those algorithms are not as well as expected. In this paper, we propose a new approach to construct a classifier for energy disaggregation with deep learning field. We apply Gated Recurrent Unit (GRU) based on Recurrent Neural Network (RNN) to train our model using UK DALE dataset on this field. Besides, we compare our approach to original RNN on energy disaggregation. By applying GRU RRN, we achieve accuracy and F-measure for energy disaggregation with the ranges [89%–98%] and [81%–98%] respectively. Through these results of the experiment, we confirm that the deep learning approach is really effective for NILM.",
"title": ""
},
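The NILM entry above trains a GRU-based RNN for energy disaggregation. A hedged PyTorch sketch of such a model is given below; the window length, layer sizes, bidirectionality, and the synthetic stand-in for UK-DALE data are assumptions, not the authors' exact architecture or training setup:

```python
import torch
import torch.nn as nn

class GRUDisaggregator(nn.Module):
    """Maps a window of aggregate power readings to one appliance's power trace."""
    def __init__(self, hidden=64, layers=2):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden,
                          num_layers=layers, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # per-time-step appliance estimate

    def forward(self, x):                      # x: (batch, seq_len, 1) aggregate power
        out, _ = self.gru(x)
        return self.head(out)                  # (batch, seq_len, 1)

if __name__ == "__main__":
    model = GRUDisaggregator()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    mains = torch.rand(8, 600, 1)              # toy stand-in for UK-DALE mains windows
    appliance = 0.3 * mains                    # toy target; real labels come from sub-meters
    for _ in range(3):                         # a few illustrative training steps
        opt.zero_grad()
        loss = loss_fn(model(mains), appliance)
        loss.backward()
        opt.step()
    print(float(loss))
```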
{
"docid": "7eb278200f80d5827b94cada79e54ac2",
"text": "Thanks to the development of Mobile mapping systems (MMS), street object recognition, classification, modelling and related studies have become hot topics recently. There has been increasing interest in detecting changes between mobile laser scanning (MLS) point clouds in complex urban areas. A method based on the consistency between the occupancies of space computed from different datasets is proposed. First occupancy of scan rays (empty, occupied, unknown) are defined while considering the accuracy of measurement and registration. Then the occupancy of scan rays are fused using the Weighted Dempster–Shafer theory (WDST). Finally, the consistency between different datasets is obtained by comparing the occupancy at points from one dataset with the fused occupancy of neighbouring rays from the other dataset. Change detection results are compared with a conventional point to triangle (PTT) distance method. Changes at point level are detected fully automatically. The proposed approach allows to detect changes at large scales in urban scenes with fine detail and more importantly, distinguish real changes from occlusions.",
"title": ""
},
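The change-detection entry above fuses ray occupancies with a weighted Dempster–Shafer scheme. As a simplified illustration of the underlying combination rule only (plain Dempster's rule over {empty, occupied, unknown}; the paper's weighting and ray geometry are not reproduced), a Python sketch could be:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over {'empty', 'occupied', 'unknown'} (unknown = full set)."""
    conflict = m1["empty"] * m2["occupied"] + m1["occupied"] * m2["empty"]
    norm = 1.0 - conflict
    out = {}
    for h in ("empty", "occupied"):
        out[h] = (m1[h] * m2[h] + m1[h] * m2["unknown"] + m1["unknown"] * m2[h]) / norm
    out["unknown"] = m1["unknown"] * m2["unknown"] / norm
    return out

if __name__ == "__main__":
    scan_a = {"empty": 0.6, "occupied": 0.1, "unknown": 0.3}
    scan_b = {"empty": 0.2, "occupied": 0.5, "unknown": 0.3}
    # Masses are renormalized after the conflicting combinations are removed.
    print(dempster_combine(scan_a, scan_b))
```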
{
"docid": "3150741173abdb725a4d35ded866b2e3",
"text": "BACKGROUND AND PURPOSE\nAcute-onset dysphagia after stroke is frequently associated with an increased risk of aspiration pneumonia. Because most screening tools are complex and biased toward fluid swallowing, we developed a simple, stepwise bedside screen that allows a graded rating with separate evaluations for nonfluid and fluid nutrition starting with nonfluid textures. The Gugging Swallowing Screen (GUSS) aims at reducing the risk of aspiration during the test to a minimum; it assesses the severity of aspiration risk and recommends a special diet accordingly.\n\n\nMETHODS\nFifty acute-stroke patients were assessed prospectively. The validity of the GUSS was established by fiberoptic endoscopic evaluation of swallowing. For interrater reliability, 2 independent therapists evaluated 20 patients within a 2-hour period. For external validity, another group of 30 patients was tested by stroke nurses. For content validity, the liquid score of the fiberoptic endoscopic evaluation of swallowing was compared with the semisolid score.\n\n\nRESULTS\nInterrater reliability yielded excellent agreement between both raters (kappa=0.835, P<0.001). In both groups, GUSS predicted aspiration risk well (area under the curve=0.77; 95% CI, 0.53 to 1.02 in the 20-patient sample; area under the curve=0.933; 95% CI, 0.833 to 1.033 in the 30-patient sample). The cutoff value of 14 points resulted in 100% sensitivity, 50% specificity, and a negative predictive value of 100% in the 20-patient sample and of 100%, 69%, and 100%, respectively, in the 30-patient sample. Content validity showed a significantly higher aspiration risk with liquids compared with semisolid textures (P=0.001), therefore confirming the subtest sequence of GUSS.\n\n\nCONCLUSIONS\nThe GUSS offers a quick and reliable method to identify stroke patients with dysphagia and aspiration risk. Such a graded assessment considers the pathophysiology of voluntary swallowing in a more differentiated fashion and provides less discomfort for those patients who can continue with their oral feeding routine for semisolid food while refraining from drinking fluids.",
"title": ""
},
{
"docid": "61b02ae1994637115e3baec128f05bd8",
"text": "Ensuring reliability as the electrical grid morphs into the “smart grid” will require innovations in how we assess the state of the grid, for the purpose of proactive maintenance, rather than reactive maintenance – in the future, we will not only react to failures, but also try to anticipate and avoid them using predictive modeling (machine learning) techniques. To help in meeting this challenge, we present the Neutral Online Visualization-aided Autonomic evaluation framework (NOVA) for evaluating machine learning algorithms for preventive maintenance on the electrical grid. NOVA has three stages provided through a unified user interface: evaluation of input data quality, evaluation of machine learning results, and evaluation of the reliability improvement of the power grid. A prototype version of NOVA has been deployed for the power grid in New York City, and it is able to evaluate machine learning systems effectively and efficiently. Appearing in the ICML 2011 Workshop on Machine Learning for Global Challenges, Bellevue, WA, USA, 2011. Copyright 2011 by the author(s)/owner(s).",
"title": ""
},
{
"docid": "6fdd0c7d239417234cfc4706a82b5a0f",
"text": "We propose a method of generating teaching policies for use in intelligent tutoring systems (ITS) for concept learning tasks <xref ref-type=\"bibr\" rid=\"ref1\">[1]</xref> , e.g., teaching students the meanings of words by showing images that exemplify their meanings à la Rosetta Stone <xref ref-type=\"bibr\" rid=\"ref2\">[2]</xref> and Duo Lingo <xref ref-type=\"bibr\" rid=\"ref3\">[3]</xref> . The approach is grounded in control theory and capitalizes on recent work by <xref ref-type=\"bibr\" rid=\"ref4\">[4] </xref> , <xref ref-type=\"bibr\" rid=\"ref5\">[5]</xref> that frames the “teaching” problem as that of finding approximately optimal teaching policies for approximately optimal learners (AOTAOL). Our work expands on <xref ref-type=\"bibr\" rid=\"ref4\">[4]</xref> , <xref ref-type=\"bibr\" rid=\"ref5\">[5]</xref> in several ways: (1) We develop a novel student model in which the teacher's actions can <italic>partially </italic> eliminate hypotheses about the curriculum. (2) With our student model, inference can be conducted <italic> analytically</italic> rather than numerically, thus allowing computationally efficient planning to optimize learning. (3) We develop a reinforcement learning-based hierarchical control technique that allows the teaching policy to search through <italic>deeper</italic> learning trajectories. We demonstrate our approach in a novel ITS for foreign language learning similar to Rosetta Stone and show that the automatically generated AOTAOL teaching policy performs favorably compared to two hand-crafted teaching policies.",
"title": ""
},
{
"docid": "a4e9d39a3ab7339e40958ad6df97adac",
"text": "Machine Learning is a research field with substantial relevance for many applications in different areas. Because of technical improvements in sensor technology, its value for real life applications has even increased within the last years. Nowadays, it is possible to gather massive amounts of data at any time with comparatively little costs. While this availability of data could be used to develop complex models, its implementation is often narrowed because of limitations in computing power. In order to overcome performance problems, developers have several options, such as improving their hardware, optimizing their code, or use parallelization techniques like the MapReduce framework. Anyhow, these options might be too cost intensive, not suitable, or even too time expensive to learn and realize. Following the premise that developers usually are not SQL experts we would like to discuss another approach in this paper: using transparent database support for Big Data Analytics. Our aim is to automatically transform Machine Learning algorithms to parallel SQL database systems. In this paper, we especially show how a Hidden Markov Model, given in the analytics language R, can be transformed to a sequence of SQL statements. These SQL statements will be the basis for a (inter-operator and intra-operator) parallel execution on parallel DBMS as a second step of our research, not being part of this paper. TYPE OF PAPER AND",
"title": ""
},
{
"docid": "4afdb551efb88711ffe3564763c3806a",
"text": "This article applied GARCH model instead AR or ARMA model to compare with the standard BP and SVM in forecasting of the four international including two Asian stock markets indices.These models were evaluated on five performance metrics or criteria. Our experimental results showed the superiority of SVM and GARCH models, compared to the standard BP in forecasting of the four international stock markets indices.",
"title": ""
},
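The forecasting entry above compares GARCH, BP and SVM models. A hedged sketch of the two stronger model families on synthetic returns is shown below; it assumes the third-party `arch` package and scikit-learn are installed, and the indices, lags and metrics are illustrative, not those of the article:

```python
import numpy as np
from arch import arch_model            # GARCH-family volatility models
from sklearn.svm import SVR

rng = np.random.default_rng(0)
returns = rng.standard_normal(500) * 0.01      # stand-in for daily index returns

# GARCH(1,1): model the conditional variance, then forecast next-day volatility.
garch = arch_model(returns * 100, vol="Garch", p=1, q=1)   # scale up for optimizer stability
garch_fit = garch.fit(disp="off")
next_var = garch_fit.forecast(horizon=1).variance.values[-1, 0]
print("GARCH next-day volatility:", np.sqrt(next_var) / 100)

# SVR: predict the next return directly from a window of lagged returns.
lags = 5
X = np.stack([returns[i:i + lags] for i in range(len(returns) - lags)])
y = returns[lags:]
svr = SVR(kernel="rbf", C=1.0, epsilon=1e-4).fit(X[:-1], y[:-1])
print("SVR next-day return:", svr.predict(X[-1:])[0])
```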
{
"docid": "f7792dbc29356711c2170d5140030142",
"text": "A C-Ku band GaN monolithic microwave integrated circuit (MMIC) transmitter/receiver (T/R) frontend module with a novel RF interface structure has been successfully developed by using multilayer ceramics technology. This interface improves the insertion loss with wideband characteristics operating up to 40 GHz. The module contains a GaN power amplifier (PA) with output power higher than 10 W over 6–18 GHz and a GaN low-noise amplifier (LNA) with a gain of 15.9 dB over 3.2–20.4 GHz and noise figure (NF) of 2.3–3.7 dB over 4–18 GHz. A fabricated T/R module occupying only 12 × 30 mm2 delivers an output power of 10 W up to the Ku-band. To our knowledge, this is the first demonstration of a C-Ku band T/R frontend module using GaN MMICs with wide bandwidth, 10W output power, and small size operating up to the Ku-band.",
"title": ""
},
{
"docid": "22650cb6c1470a076fc1dda7779606ec",
"text": "This paper addresses the problem of handling spatial misalignments due to camera-view changes or human-pose variations in person re-identification. We first introduce a boosting-based approach to learn a correspondence structure which indicates the patch-wise matching probabilities between images from a target camera pair. The learned correspondence structure can not only capture the spatial correspondence pattern between cameras but also handle the viewpoint or human-pose variation in individual images. We further introduce a global-based matching process. It integrates a global matching constraint over the learned correspondence structure to exclude cross-view misalignments during the image patch matching process, hence achieving a more reliable matching score between images. Experimental results on various datasets demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "a27a05cb00d350f9021b5c4f609d772c",
"text": "Traffic light detection from a moving vehicle is an important technology both for new safety driver assistance functions as well as for autonomous driving in the city. In this paper we present a machine learning framework for detection of traffic lights that can handle in realtime both day and night situations in a unified manner. A semantic segmentation method is employed to generate traffic light candidates, which are then confirmed and classified by a geometric and color features based classifier. Temporal consistency is enforced by using a tracking by detection method. We evaluate our method on a publicly available dataset recorded at daytime in order to compare to existing methods and we show similar performance. We also present an evaluation on two additional datasets containing more than 50 intersections with multiple traffic lights recorded both at day and during nighttime and we show that our method performs consistently in those situations.",
"title": ""
},
{
"docid": "ab1b4a5694e17772b01a2156afc08f55",
"text": "Clunealgia is caused by neuropathy of inferior cluneal branches of the posterior femoral cutaneous nerve resulting in pain in the inferior gluteal region. Image-guided anesthetic nerve injections are a viable and safe therapeutic option in sensory peripheral neuropathies that provides significant pain relief when conservative therapy fails and surgery is not desired or contemplated. The authors describe two cases of clunealgia, where computed-tomography-guided technique for nerve blocks of the posterior femoral cutaneous nerve and its branches was used as a cheaper, more convenient, and faster alternative with similar face validity as the previously described magnetic-resonance-guided injection.",
"title": ""
},
{
"docid": "0cccb226bb72be281ead8c614bd46293",
"text": "We introduce a model for incorporating contextual information (such as geography) in learning vector-space representations of situated language. In contrast to approaches to multimodal representation learning that have used properties of the object being described (such as its color), our model includes information about the subject (i.e., the speaker), allowing us to learn the contours of a word’s meaning that are shaped by the context in which it is uttered. In a quantitative evaluation on the task of judging geographically informed semantic similarity between representations learned from 1.1 billion words of geo-located tweets, our joint model outperforms comparable independent models that learn meaning in isolation.",
"title": ""
},
{
"docid": "b15a12d0421227b01047dbe962070aae",
"text": "This paper investigates the behaviour of small and medium sized enterprises (SMEs) within the heritage tourism supply chain (HTSC), in two emerging heritage regions. SMEs are conceptualised as implementers, working within the constraints of government level tourism structures and the heritage tourism supply chain. The research employs a case study approach, focusing on two emerging regions in Northern Ireland. In-depth interviews were carried out with small business owners and community associations operating within the regions. The research identifies SME dissatisfaction with the supply chain and the processes in place for the delivery of the tourism product. To overcome the perceived inadequacies of the heritage tourism supply chain SMEs engage in entrepreneurial behaviour by attempting to deliver specific products and services to meet the need of tourists. The challenge for tourism organisations is how they can integrate the entrepreneurial, innovative activities of SMEs into the heritage tourism system. © 2016 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "9b5b10031ab67dfd664993f727f1bce8",
"text": "PURPOSE\nWe propose a single network trained by pixel-to-label deep learning to address the general issue of automatic multiple organ segmentation in three-dimensional (3D) computed tomography (CT) images. Our method can be described as a voxel-wise multiple-class classification scheme for automatically assigning labels to each pixel/voxel in a 2D/3D CT image.\n\n\nMETHODS\nWe simplify the segmentation algorithms of anatomical structures (including multiple organs) in a CT image (generally in 3D) to a majority voting scheme over the semantic segmentation of multiple 2D slices drawn from different viewpoints with redundancy. The proposed method inherits the spirit of fully convolutional networks (FCNs) that consist of \"convolution\" and \"deconvolution\" layers for 2D semantic image segmentation, and expands the core structure with 3D-2D-3D transformations to adapt to 3D CT image segmentation. All parameters in the proposed network are trained pixel-to-label from a small number of CT cases with human annotations as the ground truth. The proposed network naturally fulfills the requirements of multiple organ segmentations in CT cases of different sizes that cover arbitrary scan regions without any adjustment.\n\n\nRESULTS\nThe proposed network was trained and validated using the simultaneous segmentation of 19 anatomical structures in the human torso, including 17 major organs and two special regions (lumen and content inside of stomach). Some of these structures have never been reported in previous research on CT segmentation. A database consisting of 240 (95% for training and 5% for testing) 3D CT scans, together with their manually annotated ground-truth segmentations, was used in our experiments. The results show that the 19 structures of interest were segmented with acceptable accuracy (88.1% and 87.9% voxels in the training and testing datasets, respectively, were labeled correctly) against the ground truth.\n\n\nCONCLUSIONS\nWe propose a single network based on pixel-to-label deep learning to address the challenging issue of anatomical structure segmentation in 3D CT cases. The novelty of this work is the policy of deep learning of the different 2D sectional appearances of 3D anatomical structures for CT cases and the majority voting of the 3D segmentation results from multiple crossed 2D sections to achieve availability and reliability with better efficiency, generality, and flexibility than conventional segmentation methods, which must be guided by human expertise.",
"title": ""
},
{
"docid": "93dd0ad4eb100d4124452e2f6626371d",
"text": "The role of background music in audience responses to commercials (and other marketing elements) has received increasing attention in recent years. This article extends the discussion of music’s influence in two ways: (1) by using music theory to analyze and investigate the effects of music’s structural profiles on consumers’ moods and emotions and (2) by examining the relationship between music’s evoked moods that are congruent versus incongruent with the purchase occasion and the resulting effect on purchase intentions. The study reported provides empirical support for the notion that when music is used to evoke emotions congruent with the symbolic meaning of product purchase, the likelihood of purchasing is enhanced. D 2003 Elsevier Inc. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
89bbac6444839551489f4f55a66d03bc
|
Impact of Frequent Interruption on Nurses' Patient-Controlled Analgesia Programming Performance
|
[
{
"docid": "37fcf6201c168e87d6ef218ecb71c211",
"text": "NASA-TLX is a multi-dimensional scale designed to obtain workload estimates from one or more operators while they are performing a task or immediately afterwards. The years of research that preceded subscale selection and the weighted averaging approach resulted in a tool that has proven to be reasonably easy to use and reliably sensitive to experimentally important manipulations over the past 20 years. Its use has spread far beyond its original application (aviation), focus (crew complement), and language (English). This survey of 550 studies in which NASA-TLX was used or reviewed was undertaken to provide a resource for a new generation of users. The goal was to summarize the environments in which it has been applied, the types of activities the raters performed, other variables that were measured that did (or did not) covary, methodological issues, and lessons learned",
"title": ""
},
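The NASA-TLX entry above refers to the weighted-averaging approach over six subscales. A minimal sketch of that standard scoring (ratings weighted by the 15 pairwise-comparison tallies) is given below; the numbers are made up and the administration protocol is not reproduced:

```python
SUBSCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def tlx_weighted_score(ratings, tally):
    """ratings: 0-100 per subscale; tally: times each subscale won the 15 pairwise comparisons."""
    assert sum(tally.values()) == 15, "15 pairwise comparisons among 6 subscales"
    return sum(ratings[s] * tally[s] for s in SUBSCALES) / 15.0

if __name__ == "__main__":
    ratings = {"mental": 70, "physical": 20, "temporal": 55,
               "performance": 40, "effort": 65, "frustration": 30}
    tally = {"mental": 5, "physical": 0, "temporal": 3,
             "performance": 2, "effort": 4, "frustration": 1}
    print(tlx_weighted_score(ratings, tally))   # overall workload on a 0-100 scale
```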
{
"docid": "c9dd964f5421171d4302d1b159c2b415",
"text": "The results of a multi-year research program to identify the factors associated with variations in subjective workload within and between different types of tasks are reviewed. Subjective evaluations of 10 workload-related factors were obtained from 16 different experiments. The experimental tasks included simple cognitive and manual control tasks, complex laboratory and supervisory control tasks, and aircraft simulation. Task-, behavior-, and subject-related correlates of subjective workload experiences varied as a function of difficulty manipulations within experiments, different sources of workload between experiments, and individual differences in workload definition. A multi-dimensional rating scale is proposed in which information about the magnitude and sources of six workload-related factors are combined to derive a sensitive and reliable estimate of workload. .",
"title": ""
}
] |
[
{
"docid": "91c5ad5a327026a424454779f96da601",
"text": "We present high performance implementations of the QR and the singular value decomposition of a batch of small matrices hosted on the GPU with applications in the compression of hierarchical matrices. The one-sided Jacobi algorithm is used for its simplicity and inherent parallelism as a building block for the SVD of low rank blocks using randomized methods. We implement multiple kernels based on the level of the GPU memory hierarchy in which the matrices can reside and show substantial speedups against streamed cuSOLVER SVDs. The resulting batched routine is a key component of hierarchical matrix compression, opening up opportunities to perform H-matrix arithmetic efficiently on GPUs.",
"title": ""
},
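The batched-SVD entry above builds on the one-sided Jacobi algorithm. A plain NumPy sketch of one-sided Jacobi for a single small matrix is shown below for reference; batching, GPU memory-hierarchy kernels and the randomized low-rank path from the paper are not shown:

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """Thin SVD of A (m >= n) by one-sided Jacobi: rotate column pairs until orthogonal."""
    A = A.astype(float).copy()
    n = A.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        converged = True
        for i in range(n - 1):
            for j in range(i + 1, n):
                a, b = A[:, i] @ A[:, i], A[:, j] @ A[:, j]
                g = A[:, i] @ A[:, j]
                if abs(g) <= tol * np.sqrt(a * b):
                    continue                         # pair already (numerically) orthogonal
                converged = False
                tau = (b - a) / (2.0 * g)
                t = (1.0 if tau >= 0 else -1.0) / (abs(tau) + np.hypot(1.0, tau))
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                R = np.array([[c, s], [-s, c]])      # plane rotation applied on the right
                A[:, [i, j]] = A[:, [i, j]] @ R
                V[:, [i, j]] = V[:, [i, j]] @ R
        if converged:
            break
    sigma = np.linalg.norm(A, axis=0)                # singular values = column norms
    order = np.argsort(sigma)[::-1]
    U = A[:, order] / sigma[order]
    return U, sigma[order], V[:, order]

if __name__ == "__main__":
    M = np.random.default_rng(0).standard_normal((8, 5))
    U, s, V = one_sided_jacobi_svd(M)
    print(np.allclose(U * s @ V.T, M), np.allclose(s, np.linalg.svd(M, compute_uv=False)))
```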
{
"docid": "3bf3546e686763259b953b31674e3cdc",
"text": "In this paper, we concentrate on the automatic recognition of Egyptian Arabic speech using syllables. Arabic spoken digits were described by showing their constructing phonemes, triphones, syllables and words. Speaker-independent hidden markov models (HMMs)-based speech recognition system was designed using Hidden markov model toolkit (HTK). The database used for both training and testing consists from forty-four Egyptian speakers. Experiments show that the recognition rate using syllables outperformed the rate obtained using monophones, triphones and words by 2.68%, 1.19% and 1.79% respectively. A syllable unit spans a longer time frame, typically three phones, thereby offering a more parsimonious framework for modeling pronunciation variation in spontaneous speech. Moreover, syllable-based recognition has relatively smaller number of used units and runs faster than word-based recognition. Key-Words: Speech recognition, syllables, Arabic language, HMMs.",
"title": ""
},
{
"docid": "e6d260653b0aeed8c9e5124e9025d622",
"text": "Background. Primary ovarian carcinoma with metastasis to the breast is rare, with only 39 cases reported in the current literature. Ovarian metastasis to the breast presenting as inflammatory breast carcinoma is even more infrequent, with only 6 cases reported.Case. We present a patient who developed metastatic inflammatory cancer of the breast from a stage IIIC papillary serous ovarian adenocarcinoma approximately 1 year after the original diagnosis. Pathologic analysis confirmed the origin of the tumor: a high-grade adenocarcinoma morphologically similar to the previously diagnosed ovarian cancer. In addition, the tumor was strongly positive on immunohistochemistry for CA-125, identical to the ovarian primary. The patient died of diffuse metastasis 5 months after the breast tumor was noted.Conclusion. Although ovarian metastasis to the breast presenting as inflammatory breast cancer is rare, it should be included in the differential diagnosis for any patient with a personal history of ovarian cancer. Accurate differentiation is necessary because treatment differs significantly for patients with ovarian metastasis to the breast, as compared with patients with primary inflammatory breast cancer. Ovarian metastasis to the breast confers a poor prognosis: patient survival ranged from 3 to 18 months, with a median survival of 6 months after the diagnosis of the breast metastasis.",
"title": ""
},
{
"docid": "a6c8c5a1cf0e014860e8cd04f38532f3",
"text": "How to train a binary neural network (BinaryNet) with both high compression rate and high accuracy on large scale datasets? We answer this question through a careful analysis of previous work on BinaryNets, in terms of training strategies, regularization, and activation approximation. Our findings first reveal that a low learning rate is highly preferred to avoid frequent sign changes of the weights, which often makes the learning of BinaryNets unstable. Secondly, we propose to use PReLU instead of ReLU in a BinaryNet to conveniently absorb the scale factor for weights to the activation function, which enjoys high computation efficiency for binarized layers while maintains high approximation accuracy. Thirdly, we reveal that instead of imposing L2 regularization, driving all weights to zero which contradicts with the setting of BinaryNets, we introduce a regularization term that encourages the weights to be bipolar. Fourthly, we discover that the failure of binarizing the last layer, which is essential for high compression rate, is due to the improper output range. We propose to use a scale layer to bring it to normal. Last but not least, we propose multiple binarizations to improve the approximation of the activations. The composition of all these enables us to train BinaryNets with both high compression rate and high accuracy, which is strongly supported by our extensive empirical study.",
"title": ""
},
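The BinaryNet entry above proposes a regularizer that encourages bipolar weights instead of L2 decay toward zero. The abstract does not give the exact term, so the PyTorch sketch below uses an assumed (|w| - 1)^2 penalty purely to illustrate the idea, together with the PReLU activation the entry mentions:

```python
import torch
import torch.nn as nn

def bipolar_penalty(model, coeff=1e-4):
    # Assumed regularizer: pull weights toward +/-1 rather than 0 (contrast with L2 decay).
    return coeff * sum(((w.abs() - 1.0) ** 2).sum()
                       for w in model.parameters() if w.dim() > 1)

if __name__ == "__main__":
    net = nn.Sequential(nn.Linear(16, 32), nn.PReLU(), nn.Linear(32, 10))
    x, y = torch.randn(4, 16), torch.randint(0, 10, (4,))
    loss = nn.functional.cross_entropy(net(x), y) + bipolar_penalty(net)
    loss.backward()
    print(float(loss))
```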
{
"docid": "3acc77360d13c47d16dadc886a34f51e",
"text": "Background: Because of the high-speed development of e-commerce, online group buying has become a new popular pattern of consumption for Chinese consumers. Previous research has studied online group-buying (OGB) purchase intention in some specific areas such as Taiwan, but in mainland China. Purpose: The purpose of this study is to contribute to the Technology Acceptance Model, incorporating other potential driving factors to address how they influence Chinese consumers' online group-buying purchase intentions. Method: The study uses two steps to achieve its purpose. The first step is that I use the focus group interview technique to collect primary data. The results combining the Technology Acceptance model help me propose hypotheses. The second step is that the questionnaire method is applied for empirical data collection. The constructs are validated with exploratory factor analysis and reliability analysis, and then the model is tested with Linear multiple regression. Findings: The results have shown that the adapted research model has been successfully tested in this study. The seven factors (perceived usefulness, perceived ease of use, price, e-trust, Word of Mouth, website quality and perceived risk) have significant effects on Chinese consumers' online group-buying purchase intentions. This study suggests that managers of group-buying websites need to design easy-to-use platform for users. Moreover, group-buying website companies need to propose some rules or regulations to protect consumers' rights. When conflicts occur, evendors can follow these rules to provide solutions that are reasonable and satisfying for consumers.",
"title": ""
},
{
"docid": "64ae19aaf2f6212b01a1dfc251d253d8",
"text": "OBJECTIVE\nTo reevaluate the current criteria for diagnosing allergic fungal sinusitis (AFS) and determine the incidence of AFS in patients with chronic rhinosinusitis (CRS).\n\n\nMETHODS\nThis prospective study evaluated the incidence of AFS in 210 consecutive patients with CRS with or without polyposis, of whom 101 were treated surgically. Collecting and culturing fungi from nasal mucus require special handling, and novel methods are described. Surgical specimen handling emphasizes histologic examination to visualize fungi and eosinophils in the mucin. The value of allergy testing in the diagnosis of AFS is examined.\n\n\nRESULTS\nFungal cultures of nasal secretions were positive in 202 (96%) of 210 consecutive CRS patients. Allergic mucin was found in 97 (96%) of 101 consecutive surgical cases of CRS. Allergic fungal sinusitis was diagnosed in 94 (93%) of 101 consecutive surgical cases with CRS, based on histopathologic findings and culture results. Immunoglobulin E-mediated hypersensitivity to fungal allergens was not evident in the majority of AFS patients.\n\n\nCONCLUSION\nThe data presented indicate that the diagnostic criteria for AFS are present in the majority of patients with CRS with or without polyposis. Since the presence of eosinophils in the allergic mucin, and not a type I hypersensitivity, is likely the common denominator in the pathophysiology of AFS, we propose a change in terminology from AFS to eosinophilic fungal rhinosinusitis.",
"title": ""
},
{
"docid": "3fea90fc8c69bd3bc6416cb296a715fe",
"text": "Although indoor localization is a key topic for mobile computing, it is still very difficult for the mobile sensing community to compare state-of-art localization algorithms due to the scarcity of databases. Thus, a multi-building and multi-floor localization database based on WLAN fingerprinting is presented in this work, being its public access granted for the research community. The here proposed database not only is the biggest database in the literature but it is also the first publicly available database. Among other comprehensively described features, full raw information taken by more than 20 users and by means of 25 devices is provided.",
"title": ""
},
{
"docid": "caa63861eabe7919a14301dfa8321a15",
"text": "As CPU cores become both faster and more numerous, the limiting factor for most programs is now, and will be for some time, memory access. Hardware designers have come up with ever more sophisticated memory handling and acceleration techniques–such as CPU caches–but these cannot work optimally without some help from the programmer. Unfortunately, neither the structure nor the cost of using the memory subsystem of a computer or the caches on CPUs is well understood by most programmers. This paper explains the structure of memory subsystems in use on modern commodity hardware, illustrating why CPU caches were developed, how they work, and what programs should do to achieve optimal performance by utilizing them.",
"title": ""
},
{
"docid": "a5447f6bf7dbbab55d93794b47d46d12",
"text": "The proposed multilevel framework of discourse comprehension includes the surface code, the textbase, the situation model, the genre and rhetorical structure, and the pragmatic communication level. We describe these five levels when comprehension succeeds and also when there are communication misalignments and comprehension breakdowns. A computer tool has been developed, called Coh-Metrix, that scales discourse (oral or print) on dozens of measures associated with the first four discourse levels. The measurement of these levels with an automated tool helps researchers track and better understand multilevel discourse comprehension. Two sets of analyses illustrate the utility of Coh-Metrix in discourse theory and educational practice. First, Coh-Metrix was used to measure the cohesion of the text base and situation model, as well as potential extraneous variables, in a sample of published studies that manipulated text cohesion. This analysis helped us better understand what was precisely manipulated in these studies and the implications for discourse comprehension mechanisms. Second, Coh-Metrix analyses are reported for samples of narrative and science texts in order to advance the argument that traditional text difficulty measures are limited because they fail to accommodate most of the levels of the multilevel discourse comprehension framework.",
"title": ""
},
{
"docid": "08c430ba9fe93c226cfba6309db72542",
"text": "Technology constraints have increasingly led to the adoption of specialized coprocessors, i.e. hardware accelerators. The first challenge that computer architects encounter is identifying \"what to specialize in the program\". We demonstrate that this requires precise enumeration of program paths based on dynamic program behavior. We hypothesize that path-based [4] accelerator offloading leads to good coverage of dynamic instructions and improve energy efficiency. Unfortunately, hot paths across programs demonstrate diverse control flow behavior. Accelerators (typically based on dataflow execution), often lack an energy-efficient, complexity effective, and high performance (eg. branch prediction) support for control flow. We have developed NEEDLE, an LLVM based compiler framework that leverages dynamic profile information to identify, merge, and offload acceleratable paths from whole applications. NEEDLE derives insight into what code coverage (and consequently energy reduction) an accelerator can achieve. We also develop a novel program abstraction for offload calledBraid, that merges common code regions across different paths to improve coverage of the accelerator while trading off the increase in dataflow size. This enables coarse grained offloading, reducing interaction with the host CPU core. To prepare the Braids and paths for acceleration, NEEDLE generates software frames. Software frames enable energy efficient speculative execution on accelerators. They are accelerator microarchitecture independent support speculative execution including memory operations. NEEDLE is automated and has been used to analyze 225K paths across 29 workloads. It filtered and ranked 154K paths for acceleration across unmodified SPEC, PARSEC and PERFECT workload suites. We target NEEDLE's offload regions toward a CGRA and demonstrate 34% performance and 20% energy improvement.",
"title": ""
},
{
"docid": "17e087f27a3178e46dbe14fb25027641",
"text": "Social media has become an important tool for the business of marketers. Increasing exposure and traffics are the main two benefits of social media marketing. Most marketers are using social media to develop loyal fans and gain marketplace intelligence. Marketers reported increased benefits across all categories since 2013 and trademarks increased the number of loyal fans and sales [1]. Therefore, 2013 was a significant year for social media. Feeling the power of Instagram may be one of the most interesting cases. Social media is an effective key for fashion brands as they allow them to communicate directly with their consumers, promote various events and initiatives, and build brand awareness. As the increasing use of visual info graphic and marketing practices in social media, trademarks has begun to show more interest in Instagram. There is also no language barriers in Instagram and provides visuals which are very crucial for fashion industry. The purpose of this study is to determine and contrast the content sharing types of 10 well-known fashion brands (5 Turkish brands and 5 international brands), and to explain their attitude in Instagram. Hence, the content of Instagram accounts of those brands were examined according to post type (photo/video), content type (9 elements), number of likes and reviews, photo type (amateur/professional), shooting place (studio/outdoor/shops/etc.), and brand comments on their posts. This study provides a snapshot of how fashion brands utilize Instagram in their efforts of marketing.",
"title": ""
},
{
"docid": "750c67fe63611248e8d8798a42ac282c",
"text": "Chaos and its drive-response synchronization for a fractional-order cellular neural networks (CNN) are studied. It is found that chaos exists in the fractional-order system with six-cell. The phase synchronisation of drive and response chaotic trajectories is investigated after that. These works based on Lyapunov exponents (LE), Lyapunov stability theory and numerical solving fractional-order system in Matlab environment.",
"title": ""
},
{
"docid": "e131e4d4bb59b4d0b513cc7c5dd017f2",
"text": "Although touch is one of the most neglected modalities of communication, several lines of research bear on the important communicative functions served by the modality. The authors highlighted the importance of touch by reviewing and synthesizing the literatures pertaining to the communicative functions served by touch among humans, nonhuman primates, and rats. In humans, the authors focused on the role that touch plays in emotional communication, attachment, bonding, compliance, power, intimacy, hedonics, and liking. In nonhuman primates, the authors examined the relations among touch and status, stress, reconciliation, sexual relations, and attachment. In rats, the authors focused on the role that touch plays in emotion, learning and memory, novelty seeking, stress, and attachment. The authors also highlighted the potential phylogenetic and ontogenetic continuities and discussed suggestions for future research.",
"title": ""
},
{
"docid": "ebfa889f9ba51267823aac9b92b0ee66",
"text": "8 9 10 A Synthetic Aperture Radar (SAR) is an active sensor transmitting pulses of polarized 11 electromagnetic waves and receiving the backscattered radiation. SAR sensors at different 12 wavelengths and with different polarimetric capabilities are being used in remote sensing of 13 the Earth. The value of an analysis of backscattered energy alone is limited due to ambiguities 14 in the possible ecological factor configurations causing the signal. From two SAR images 15 taken from similar viewing positions with a short time-lag, interference between the two 16 waves can be observed. By subtracting the two phases of the signals, it is feasible to eliminate 17 the random contribution of the scatterers to the phase. The interferometric correlation and the 18 interferometric phase contain additional information on the three-dimensional structure of the 19 scattering elements in the imaged area. 20 A brief review of SAR sensors is given, followed by an outline of the physical foundations of 21 SAR interferometry and the practical data processing steps involved. An overview of 22 applications of InSAR to forest mapping and monitoring is given, covering tree bole volume 23 and biomass, forest types and land cover, fire scars, forest thermal state and forest canopy 24",
"title": ""
},
{
"docid": "ae9e21aaf1d2c5af314d4ab9b9266d4c",
"text": "Today's scientific advances in water desalination dramatically increase our ability to transform seawater into fresh water. As an important source of renewable energy, solar power holds great potential to drive the desalination of seawater. Previously, solar assisted evaporation systems usually relied on highly concentrated sunlight or were not suitable to treat seawater or wastewater, severely limiting the large scale application of solar evaporation technology. Thus, a new strategy is urgently required in order to overcome these problems. In this study, we developed a solar thermal evaporation system based on reduced graphene oxide (rGO) decorated with magnetic nanoparticles (MNPs). Because this material can absorb over 95% of sunlight, we achieved high evaporation efficiency up to 70% under only 1 kW m(-2) irradiation. Moreover, it could be separated from seawater under the action of magnetic force by decorated with MNPs. Thus, this system provides an advantage of recyclability, which can significantly reduce the material consumptions. Additionally, by using photoabsorbing bulk or layer materials, the deposition of solutes offen occurs in pores of materials during seawater desalination, leading to the decrease of efficiency. However, this problem can be easily solved by using MNPs, which suggests this system can be used in not only pure water system but also high-salinity wastewater system. This study shows good prospects of graphene-based materials for seawater desalination and high-salinity wastewater treatment.",
"title": ""
},
{
"docid": "91c4658ea032a55bd5c3d7cf1b093446",
"text": "BACKGROUND\nMany of today's treatments associated with 'thinning hair', such as female pattern hair loss and telogen effluvium, are focused on two of the key aspects of the condition. Over-the-counter or prescription medications are often focused on improving scalp hair density while high-quality cosmetic products work to prevent further hair damage and minimize mid-fibre breakage. Fibre diameter is another key contributor to thinning hair, but it is less often the focus of medical or cosmetic treatments.\n\n\nOBJECTIVES\nTo examine the ability of a novel leave-on technology combination [caffeine, niacinamide, panthenol, dimethicone and an acrylate polymer (CNPDA)] to affect the diameter and behaviour of individual terminal scalp hair fibres as a new approach to counteract decreasing fibre diameters.\n\n\nMETHODS\nTesting methodology included fibre diameter measures via laser scan micrometer, assessment of fibre mechanical and behavioural properties via tensile break stress and torsion pendulum testing, and mechanistic studies including cryoscanning electron microscopy and autoradiographic analysis.\n\n\nRESULTS\nCNPDA significantly increased the diameter of individual, existing terminal scalp hair fibres by 2-5 μm, which yields an increase in the cross-sectional area of approximately 10%. Beyond the diameter increase, the CNPDA-thickened fibres demonstrated the altered mechanical properties characteristic of thicker fibres: increased suppleness/pliability (decreased shear modulus) and better ability to withstand force without breaking (increased break stress).\n\n\nCONCLUSIONS\nAlthough cosmetic treatments will not reverse the condition, this new approach may help to mitigate the effects of thinning hair.",
"title": ""
},
{
"docid": "ed39af901c58a8289229550084bc9508",
"text": "Digital elevation maps are simple yet powerful representations of complex 3-D environments. These maps can be built and updated using various sensors and sensorial data processing algorithms. This paper describes a novel approach for modeling the dynamic 3-D driving environment, the particle-based dynamic elevation map, each cell in this map having, in addition to height, a probability distribution of speed in order to correctly describe moving obstacles. The dynamic elevation map is represented by a population of particles, each particle having a position, a height, and a speed. Particles move from one cell to another based on their speed vectors, and they are created, multiplied, or destroyed using an importance resampling mechanism. The importance resampling mechanism is driven by the measurement data provided by a stereovision sensor. The proposed model is highly descriptive for the driving environment, as it can easily provide an estimation of the height, speed, and occupancy of each cell in the grid. The system was proven robust and accurate in real driving scenarios, by comparison with ground truth data.",
"title": ""
},
{
"docid": "cfdc217170410e60fb9323cc39d51aff",
"text": "Malware, i.e., malicious software, represents one of the main cyber security threats today. Over the last decade malware has been evolving in terms of the complexity of malicious software and the diversity of attack vectors. As a result modern malware is characterized by sophisticated obfuscation techniques, which hinder the classical static analysis approach. Furthermore, the increased amount of malware that emerges every day, renders a manual approach inefficient. This study tackles the problem of analyzing, detecting and classifying the vast amount of malware in a scalable, efficient and accurate manner. We propose a novel approach for detecting malware and classifying it to either known or novel, i.e., previously unseen malware family. The approach relies on Random Forests classifier for performing both malware detection and family classification. Furthermore, the proposed approach employs novel feature representations for malware classification, that significantly reduces the feature space, while achieving encouraging predictive performance. The approach was evaluated using behavioral traces of over 270,000 malware samples and 837 samples of benign software. The behavioral traces were obtained using a modified version of Cuckoo sandbox, that was able to harvest behavioral traces of the analyzed samples in a time-efficient manner. The proposed system achieves high malware detection rate and promising predictive performance in the family classification, opening the possibility of coping with the use of obfuscation and the growing number of malware.",
"title": ""
}
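The malware entry above detects malware and assigns families with Random Forests over behavioral traces. A heavily simplified scikit-learn sketch of that two-classifier idea is below; the toy API-call strings, hashed features and labels are illustrative stand-ins for real Cuckoo-sandbox reports, and the known-vs-novel family logic is omitted:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import HashingVectorizer

# Toy behavioral traces: space-separated API-call sequences (real ones come from a sandbox).
traces = ["CreateFile RegSetValue Connect Send",
          "OpenProcess WriteProcessMemory CreateRemoteThread",
          "CreateFile ReadFile CloseHandle",
          "Connect Send Recv RegSetValue"]
is_malware = [1, 1, 0, 1]
family     = ["spybot", "injector", "benign", "spybot"]

vec = HashingVectorizer(n_features=2**12, ngram_range=(1, 2))   # fixed-size feature space
X = vec.transform(traces)

detector   = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, is_malware)
classifier = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, family)

new = vec.transform(["OpenProcess WriteProcessMemory Send"])
print(detector.predict(new), classifier.predict(new))
```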
] |
scidocsrr
|
124ec33acc1bd844942557295ba6576f
|
Statistical Models for Frame-Semantic Parsing
|
[
{
"docid": "33b2c5abe122a66b73840506aa3b443e",
"text": "Semantic role labeling, the computational identification and labeling of arguments in text, has become a leading task in computational linguistics today. Although the issues for this task have been studied for decades, the availability of large resources and the development of statistical machine learning methods have heightened the amount of effort in this field. This special issue presents selected and representative work in the field. This overview describes linguistic background of the problem, the movement from linguistic theories to computational practice, the major resources that are being used, an overview of steps taken in computational systems, and a description of the key issues and results in semantic role labeling (as revealed in several international evaluations). We assess weaknesses in semantic role labeling and identify important challenges facing the field. Overall, the opportunities and the potential for useful further research in semantic role labeling are considerable.",
"title": ""
},
{
"docid": "dcd08522ff90cd634ccdeec9db3929bf",
"text": "This paper contributes a formalization of frame-semantic parsing as a structure prediction problem and describes an implemented parser that transforms an English sentence into a frame-semantic representation. It finds words that evoke FrameNet frames, selects frames for them, and locates the arguments for each frame. The system uses two featurebased, discriminative probabilistic (log-linear) models, one with latent variables to permit disambiguation of new predicate words. The parser is demonstrated to significantly outperform previously published results.",
"title": ""
}
] |
[
{
"docid": "d2c8328984a05e56e449d6558b1e73fb",
"text": "MOTIVATION\nThe development of chemoinformatics has been hampered by the lack of large, publicly available, comprehensive repositories of molecules, in particular of small molecules. Small molecules play a fundamental role in organic chemistry and biology. They can be used as combinatorial building blocks for chemical synthesis, as molecular probes in chemical genomics and systems biology, and for the screening and discovery of new drugs and other useful compounds.\n\n\nRESULTS\nWe describe ChemDB, a public database of small molecules available on the Web. ChemDB is built using the digital catalogs of over a hundred vendors and other public sources and is annotated with information derived from these sources as well as from computational methods, such as predicted solubility and three-dimensional structure. It supports multiple molecular formats and is periodically updated, automatically whenever possible. The current version of the database contains approximately 4.1 million commercially available compounds and 8.2 million counting isomers. The database includes a user-friendly graphical interface, chemical reactions capabilities, as well as unique search capabilities.\n\n\nAVAILABILITY\nDatabase and datasets are available on http://cdb.ics.uci.edu.",
"title": ""
},
{
"docid": "705eb873cffb2a8ccfde63361d1bba96",
"text": "Data Warehouse is a collection of large amount of data which is used by the management for making strategic decisions. The data in a data warehouse is gathered from heterogeneous sources and then populated and queried for carrying out the analysis. The data warehouse design must support the queries for which it is being used for. The design is often an iterative process and must be modified a number of times before any model can be stabilized. The design life cycle of any product includes various stages wherein, testing being the most important one. Data warehouse design has received considerable attention whereas data warehouse testing is being explored now by various researchers. This paper discusses about various categories of testing activities being carried out in a data warehouse at different levels.",
"title": ""
},
{
"docid": "a7187fe4496db8a5ea4a5c550c9167a3",
"text": "We study the point-to-point shortest path problem in a setting where preprocessing is allowed. We improve the reach-based approach of Gutman [17] in several ways. In particular, we introduce a bidirectional version of the algorithm that uses implicit lower bounds and we add shortcut arcs to reduce vertex reaches. Our modifications greatly improve both preprocessing and query times. The resulting algorithm is as fast as the best previous method, due to Sanders and Schultes [28]. However, our algorithm is simpler and combines in a natural way with A search, which yields significantly better query times.",
"title": ""
},
{
"docid": "bab606f99e64c7fd5ce3c04376fbd632",
"text": "Diagnostic reasoning is a key component of many professions. To improve students’ diagnostic reasoning skills, educational psychologists analyse and give feedback on epistemic activities used by these students while diagnosing, in particular, hypothesis generation, evidence generation, evidence evaluation, and drawing conclusions. However, this manual analysis is highly time-consuming. We aim to enable the large-scale adoption of diagnostic reasoning analysis and feedback by automating the epistemic activity identification. We create the first corpus for this task, comprising diagnostic reasoning selfexplanations of students from two domains annotated with epistemic activities. Based on insights from the corpus creation and the task’s characteristics, we discuss three challenges for the automatic identification of epistemic activities using AI methods: the correct identification of epistemic activity spans, the reliable distinction of similar epistemic activities, and the detection of overlapping epistemic activities. We propose a separate performance metric for each challenge and thus provide an evaluation framework for future research. Indeed, our evaluation of various state-of-the-art recurrent neural network architectures reveals that current techniques fail to address some of these challenges.",
"title": ""
},
{
"docid": "52b5fa0494733f2f6b72df0cdfad01f4",
"text": "Requirements engineering encompasses many difficult, overarching problems inherent to its subareas of process, elicitation, specification, analysis, and validation. Requirements engineering researchers seek innovative, effective means of addressing these problems. One powerful tool that can be added to the researcher toolkit is that of machine learning. Some researchers have been experimenting with their own implementations of machine learning algorithms or with those available as part of the Weka machine learning software suite. There are some shortcomings to using “one off” solutions. It is the position of the authors that many problems exist in requirements engineering that can be supported by Weka's machine learning algorithms, specifically by classification trees. Further, the authors posit that adoption will be boosted if machine learning is easy to use and is integrated into requirements research tools, such as TraceLab. Toward that end, an initial concept validation of a component in TraceLab is presented that applies the Weka classification trees. The component is demonstrated on two different requirements engineering problems. Finally, insights gained on using the TraceLab Weka component on these two problems are offered.",
"title": ""
},
{
"docid": "9edfe5895b369c0bab8d83838661ea0a",
"text": "(57) Data collected from devices and human condition may be used to forewarn of critical events such as machine/structural failure or events from brain/heart wave data stroke. By moni toring the data, and determining what values are indicative of a failure forewarning, one can provide adequate notice of the impending failure in order to take preventive measures. This disclosure teaches a computer-based method to convert dynamical numeric data representing physical objects (un structured data) into discrete-phase-space states, and hence into a graph (Structured data) for extraction of condition change. ABSTRACT",
"title": ""
},
{
"docid": "378f33b14b499c65d75a0f83bda17438",
"text": "We present the design of a soft wearable robotic device composed of elastomeric artificial muscle actuators and soft fabric sleeves, for active assistance of knee motions. A key feature of the device is the two-dimensional design of the elastomer muscles that not only allows the compactness of the device, but also significantly simplifies the manufacturing process. In addition, the fabric sleeves make the device lightweight and easily wearable. The elastomer muscles were characterized and demonstrated an initial contraction force of 38N and maximum contraction of 18mm with 104kPa input pressure, approximately. Four elastomer muscles were employed for assisted knee extension and flexion. The robotic device was tested on a 3D printed leg model with an articulated knee joint. Experiments were conducted to examine the relation between systematic change in air pressure and knee extension-flexion. The results showed maximum extension and flexion angles of 95° and 37°, respectively. However, these angles are highly dependent on underlying leg mechanics and positions. The device was also able to generate maximum extension and flexion forces of 3.5N and 7N, respectively.",
"title": ""
},
{
"docid": "0ff27e119ec045674b9111bb5a9e5d29",
"text": "Description: This book provides an introduction to the complex field of ubiquitous computing Ubiquitous Computing (also commonly referred to as Pervasive Computing) describes the ways in which current technological models, based upon three base designs: smart (mobile, wireless, service) devices, smart environments (of embedded system devices) and smart interaction (between devices), relate to and support a computing vision for a greater range of computer devices, used in a greater range of (human, ICT and physical) environments and activities. The author details the rich potential of ubiquitous computing, the challenges involved in making it a reality, and the prerequisite technological infrastructure. Additionally, the book discusses the application and convergence of several current major and future computing trends.-Provides an introduction to the complex field of ubiquitous computing-Describes how current technology models based upon six different technology form factors which have varying degrees of mobility wireless connectivity and service volatility: tabs, pads, boards, dust, skins and clay, enable the vision of ubiquitous computing-Describes and explores how the three core designs (smart devices, environments and interaction) based upon current technology models can be applied to, and can evolve to, support a vision of ubiquitous computing and computing for the future-Covers the principles of the following current technology models, including mobile wireless networks, service-oriented computing, human computer interaction, artificial intelligence, context-awareness, autonomous systems, micro-electromechanical systems, sensors, embedded controllers and robots-Covers a range of interactions, between two or more UbiCom devices, between devices and people (HCI), between devices and the physical world.-Includes an accompanying website with PowerPoint slides, problems and solutions, exercises, bibliography and further reading Graduate students in computer science, electrical engineering and telecommunications courses will find this a fascinating and useful introduction to the subject. It will also be of interest to ICT professionals, software and network developers and others interested in future trends and models of computing and interaction over the next decades.",
"title": ""
},
{
"docid": "8ab9f1be0a8ed182137c9a8a9c9e71d0",
"text": "PURPOSE OF REVIEW\nTo document recent evidence regarding the role of nutrition as an intervention for sarcopenia.\n\n\nRECENT FINDINGS\nA review of seven randomized controlled trials (RCTs) on beta-hydroxy-beta-methylbutyrate (HMB) alone on muscle loss in 147 adults showed greater muscle mass gain in the intervention group, but no benefit in muscle strength and physical performance measures. Three other review articles examined nutrition and exercise as combined intervention, and suggest enhancement of benefits of exercise by nutrition supplements (energy, protein, vitamin D). Four trials reported on nutrition alone as intervention, mainly consisting of whey protein, leucine, HMB and vitamin D, with variable results on muscle mass and function. Four trials examined the combined effects of nutrition combined with exercise, showing improvements in muscle mass and function.\n\n\nSUMMARY\nTo date, evidence suggests that nutrition intervention alone does have benefit, and certainly enhances the impact of exercise. Nutrients include high-quality protein, leucine, HMB and vitamin D. Long-lasting impact may depend on baseline nutritional status, baseline severity of sarcopenia, and long-lasting adherence to the intervention regime. Future large-scale multicentered RCTs using standardized protocols may provide evidence for formulating guidelines on nutritional intervention for sarcopenia. There is a paucity of data for nursing home populations.",
"title": ""
},
{
"docid": "7fef9bfd0e71a08d5574affb91d0c9ed",
"text": "This paper presents a novel 3D indoor Laser-aided Inertial Navigation System (L-INS) for the visually impaired. An Extended Kalman Filter (EKF) fuses information from an Inertial Measurement Unit (IMU) and a 2D laser scanner, to concurrently estimate the six degree-of-freedom (d.o.f.) position and orientation (pose) of the person and a 3D map of the environment. The IMU measurements are integrated to obtain pose estimates, which are subsequently corrected using line-to-plane correspondences between linear segments in the laser-scan data and orthogonal structural planes of the building. Exploiting the orthogonal building planes ensures fast and efficient initialization and estimation of the map features while providing human-interpretable layout of the environment. The L-INS is experimentally validated by a person traversing a multistory building, and the results demonstrate the reliability and accuracy of the proposed method for indoor localization and mapping.",
"title": ""
},
{
"docid": "137449952a30730185552ed6fca4d8ba",
"text": "BACKGROUND\nPoor sleep quality and depression negatively impact the health-related quality of life of patients with type 2 diabetes, but the combined effect of the two factors is unknown. This study aimed to assess the interactive effects of poor sleep quality and depression on the quality of life in patients with type 2 diabetes.\n\n\nMETHODS\nPatients with type 2 diabetes (n = 944) completed the Diabetes Specificity Quality of Life scale (DSQL) and questionnaires on sleep quality and depression. The products of poor sleep quality and depression were added to the logistic regression model to evaluate their multiplicative interactions, which were expressed as the relative excess risk of interaction (RERI), the attributable proportion (AP) of interaction, and the synergy index (S).\n\n\nRESULTS\nPoor sleep quality and depressive symptoms both increased DSQL scores. The co-presence of poor sleep quality and depressive symptoms significantly reduced DSQL scores by a factor of 3.96 on biological interaction measures. The relative excess risk of interaction was 1.08. The combined effect of poor sleep quality and depressive symptoms was observed only in women.\n\n\nCONCLUSIONS\nPatients with both depressive symptoms and poor sleep quality are at an increased risk of reduction in diabetes-related quality of life, and this risk is particularly high for women due to the interaction effect. Clinicians should screen for and treat sleep difficulties and depressive symptoms in patients with type 2 diabetes.",
"title": ""
},
{
"docid": "342a0f651fcced29849319eda07bd43c",
"text": "To test web applications, developers currently write test cases in frameworks such as Selenium. On the other hand, most web test generation techniques rely on a crawler to explore the dynamic states of the application. The first approach requires much manual effort, but benefits from the domain knowledge of the developer writing the test cases. The second one is automated and systematic, but lacks the domain knowledge required to be as effective. We believe combining the two can be advantageous. In this paper, we propose to (1) mine the human knowledge present in the form of input values, event sequences, and assertions, in the human-written test suites, (2) combine that inferred knowledge with the power of automated crawling, and (3) extend the test suite for uncovered/unchecked portions of the web application under test. Our approach is implemented in a tool called Testilizer. An evaluation of our approach indicates that Testilizer (1) outperforms a random test generator, and (2) on average, can generate test suites with improvements of up to 150% in fault detection rate and up to 30% in code coverage, compared to the original test suite.",
"title": ""
},
{
"docid": "04d955d4a65c491a1e414bc21f5ac0d0",
"text": "The mining process in blockchain requires solving a proof-of-work puzzle, which is resource expensive to implement in mobile devices due to the high computing power and energy needed. In this paper, we, for the first time, consider edge computing as an enabler for mobile blockchain. In particular, we study edge computing resource management and pricing to support mobile blockchain applications in which the mining process of miners can be offloaded to an edge computing service provider. We formulate a two-stage Stackelberg game to jointly maximize the profit of the edge computing service provider and the individual utilities of the miners. In the first stage, the service provider sets the price of edge computing nodes. In the second stage, the miners decide on the service demand to purchase based on the observed prices. We apply the backward induction to analyze the sub-game perfect equilibrium in each stage for both uniform and discriminatory pricing schemes. For the uniform pricing where the same price is applied to all miners, the existence and uniqueness of Stackelberg equilibrium are validated by identifying the best response strategies of the miners. For the discriminatory pricing where the different prices are applied to different miners, the Stackelberg equilibrium is proved to exist and be unique by capitalizing on the Variational Inequality theory. Further, the real experimental results are employed to justify our proposed model.",
"title": ""
},
{
"docid": "7b6d68ef91e61a701380bfcb2d859771",
"text": "This review provides an overview of how women adjust emotionally to the various phases of IVF treatment in terms of anxiety, depression or general distress before, during and after different treatment cycles. A systematic scrutiny of the literature yielded 706 articles that paid attention to emotional aspects of IVF treatment of which 27 investigated the women's emotional adjustment with standardized measures in relation to norm or control groups. Most studies involved concurrent comparisons between women in different treatment phases and different types of control groups. The findings indicated that women starting IVF were only slightly different emotionally from the norm groups. Unsuccessful treatment raised the women's levels of negative emotions, which continued after consecutive unsuccessful cycles. In general, most women proved to adjust well to unsuccessful IVF, although a considerable group showed subclinical emotional problems. When IVF resulted in pregnancy, the negative emotions disappeared, indicating that treatment-induced stress is considerably related to threats of failure. The concurrent research reviewed, should now be underpinned by longitudinal studies to provide more information about women's long-term emotional adjustment to unsuccessful IVF and about indicators of risk factors for problematic emotional adjustment after unsuccessful treatment, to foster focused psychological support for women at risk.",
"title": ""
},
{
"docid": "5d4b74e38e6ab2ebef7ed7f3cc53773a",
"text": "Recent incidents of unauthorized computer intrusion have brought about discussion of the ethics of breaking into computers. Some individuals have argued that as long as no significant damage results, break-ins may serve a useful purpose. Others counter with the expression that the break-ins are almost always harmful and wrong. This article lists and refutes many of the reasons given to justify computer intrusions. It is the author’s contention that break-ins are ethical only in extreme situations, such as a lifecritical emergency. The article also discusses why no break-in is “harmless.”",
"title": ""
},
{
"docid": "0cb944545afbd19d1441433c621a6d66",
"text": "In this paper, we propose a fine-grained image categorization system with easy deployment. We do not use any object/part annotation (weakly supervised) in the training or in the testing stage, but only class labels for training images. Fine-grained image categorization aims to classify objects with only subtle distinctions (e.g., two breeds of dogs that look alike). Most existing works heavily rely on object/part detectors to build the correspondence between object parts, which require accurate object or object part annotations at least for training images. The need for expensive object annotations prevents the wide usage of these methods. Instead, we propose to generate multi-scale part proposals from object proposals, select useful part proposals, and use them to compute a global image representation for categorization. This is specially designed for the weakly supervised fine-grained categorization task, because useful parts have been shown to play a critical role in existing annotation-dependent works, but accurate part detectors are hard to acquire. With the proposed image representation, we can further detect and visualize the key (most discriminative) parts in objects of different classes. In the experiments, the proposed weakly supervised method achieves comparable or better accuracy than the state-of-the-art weakly supervised methods and most existing annotation-dependent methods on three challenging datasets. Its success suggests that it is not always necessary to learn expensive object/part detectors in fine-grained image categorization.",
"title": ""
},
{
"docid": "03bbb04b76ab4b8bcc794739f7bd854c",
"text": "In this paper, we propose TunnelSlice, which enables natural acquisition of subspace in an augmented scene from an egocentric view, even for scenarios involving ambiguous center objects or object occlusion. In wearable augmented reality (AR), approaching a three-dimensional (3-D) region including the objects of interest has become more important than approaching distant objects one by one. However, existing ray-based volumetric selection through a head worn display accompanies difficulties in defining a desired 3-D region due to obstacles by occlusion and depth perception. The proposed TunnelSlice effectively determines a cuboid transform, excluding unnecessary areas of a user-defined tunnel via two-handed pinch-based procedural slicing from an egocentric view. Through six scenarios involving central object status and different occlusion levels, we conducted a user study of TunnelSlice. Compared with two existing approaches, TunnelSlice was preferred by the subjects and showed greater stability for all scenarios, and outperformed the other approaches in a scenario involving strong occlusion without a central object. TunnelSlice is thus expected to serve as a key technology for spatial protocol and interaction using a subspace in wearable AR.",
"title": ""
},
{
"docid": "5d934dd45e812336ad12cee90d1e8cdf",
"text": "As research on the connection between narcissism and social networking site (SNS) use grows, definitions of SNS and measurements of their use continue to vary, leading to conflicting results. To improve understanding of the relationship between narcissism and SNS use, as well as the implications of differences in definition and measurement, we examine two ways of measuring Facebook and Twitter use by testing the hypothesis that SNS use is positively associated with narcissism. We also explore the relation between these types of SNS use and different components of narcissism within college students and general adult samples. Our findings suggest that for college students, posting on Twitter is associated with the Superiority component of narcissistic personality while Facebook posting is associated with the Exhibitionism component. Conversely, adults high in Superiority post on Facebook more rather than Twitter. For adults, Facebook and Twitter are both used more by those focused on their own appearances but not as a means of showing off, as is the case with college students. Given these differences, it is essential for future studies of SNS use and personality traits to distinguish between different types of SNS, different populations, and different types of use. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a2d7d418aa7710cc3b063a62b8490d6d",
"text": "This paper proposes a robot balanced on a ball. In contrast to an inverted pendulum with two wheels, such as the Segway Human Transporter, an inverted pendulum using a ball can traverse in any direction without changing its orientation, thereby enabling stable motion. Such robots can be used in place of the two-wheeled robots. The robot proposed in this paper is equipped with three omnidirectional wheels with stepping motors that drive the ball and two sets of rate gyroscopes and accelerometers as attitude sensors. The robot has a simple design; it is controlled with a 16-bit microcontroller and runs on Ni-MH batteries. It can not only stand still but also traverse on floor and pivot around its vertical axis. Inverted pendulum control is applied in two axes for attitude control, and commanded motions are converted into velocity commands for the three wheels. The mechanism, control method, and experimental results are described in this paper.",
"title": ""
}
] |
scidocsrr
|
c5bcc8caa8c82e5f5f99c17145d1c5fd
|
Securing Virtual Machines from Anomalies Using Program-Behavior Analysis in Cloud Environment
|
[
{
"docid": "b6e67047ac710fa619c809839412231c",
"text": "An essential goal of Virtual Machine Introspection (VMI) is assuring security policy enforcement and overall functionality in the presence of an untrustworthy OS. A fundamental obstacle to this goal is the difficulty in accurately extracting semantic meaning from the hypervisor's hardware level view of a guest OS, called the semantic gap. Over the twelve years since the semantic gap was identified, immense progress has been made in developing powerful VMI tools. Unfortunately, much of this progress has been made at the cost of reintroducing trust into the guest OS, often in direct contradiction to the underlying threat model motivating the introspection. Although this choice is reasonable in some contexts and has facilitated progress, the ultimate goal of reducing the trusted computing base of software systems is best served by a fresh look at the VMI design space. This paper organizes previous work based on the essential design considerations when building a VMI system, and then explains how these design choices dictate the trust model and security properties of the overall system. The paper then observes portions of the VMI design space which have been under-explored, as well as potential adaptations of existing techniques to bridge the semantic gap without trusting the guest OS. Overall, this paper aims to create an essential checkpoint in the broader quest for meaningful trust in virtualized environments through VM introspection.",
"title": ""
}
] |
[
{
"docid": "71da47c6837022a80dccabb0a1f5c00e",
"text": "The treatment of obesity and cardiovascular diseases is one of the most difficult and important challenges nowadays. Weight loss is frequently offered as a therapy and is aimed at improving some of the components of the metabolic syndrome. Among various diets, ketogenic diets, which are very low in carbohydrates and usually high in fats and/or proteins, have gained in popularity. Results regarding the impact of such diets on cardiovascular risk factors are controversial, both in animals and humans, but some improvements notably in obesity and type 2 diabetes have been described. Unfortunately, these effects seem to be limited in time. Moreover, these diets are not totally safe and can be associated with some adverse events. Notably, in rodents, development of nonalcoholic fatty liver disease (NAFLD) and insulin resistance have been described. The aim of this review is to discuss the role of ketogenic diets on different cardiovascular risk factors in both animals and humans based on available evidence.",
"title": ""
},
{
"docid": "4f58d355a60eb61b1c2ee71a457cf5fe",
"text": "Real-world videos often have complex dynamics, methods for generating open-domain video descriptions should be sensitive to temporal structure and allow both input (sequence of frames) and output (sequence of words) of variable length. To approach this problem we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames to a sequence of words in order to generate a description of the event in the video clip. Our model naturally is able to learn the temporal structure of the sequence of frames as well as the sequence model of the generated sentences, i.e. a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets (M-VAD and MPII-MD).",
"title": ""
},
{
"docid": "6ac55a914a3159d42bdb11618ebf54c2",
"text": "BACKGROUND\nChildhood obesity is becoming more common as Malaysia experiences rapid nutrition transition. Current evidence related to parental influences on child dietary intake and body weight status is limited. The present study aimed to report, among Malay families, the prevalence of energy mis-reporting and dietary relationships within family dyads.\n\n\nMETHODS\nThe cross-sectional Family Diet Study (n = 236) was conducted at five primary schools in central of Peninsular Malaysia. Each family consisted of a Malay child, aged 8-12 years, and their main caregiver(s). Information on socio-demographics, dietary intake and anthropometry were collected. Correlations and regression analyses were used to assess dietary relationships within family dyads.\n\n\nRESULTS\nApproximately 29.6% of the children and 75.0% parents were categorised as being overweight or obese. Intakes of nutrients and food groups were below the national recommended targets for majority of children and adults. A large proportion of energy intake mis-reporters were identified: mothers (55.5%), fathers (40.2%) and children (40.2%). Children's body mass index (BMI) was positively associated with parental BMI (fathers, r = 0.37; mothers, r = 0.34; P < 0.01). For dietary intakes, moderate-to-strong (0.35-0.72) and weak-to-moderate (0.16-0.35) correlations were found between mother-father and child-parent dyads, respectively. Multiple regression revealed that maternal percentage energy from fat (β = 0.09, P < 0.01) explained 81% of the variation in children's fat intake.\n\n\nCONCLUSIONS\nClear parental dietary relationships, especially child-mother dyads, were found. Despite a significant proportion of families with members who were overweight or obese, the majority reported dietary intakes below recommended levels, distorted by energy mis-reporting. The findings of the present study can inform interventions targeting parent-child relationships to improve family dietary patterns in Malaysia.",
"title": ""
},
{
"docid": "deccbb39b92e01611de6d0749f550726",
"text": "As product prices become increasingly available on the World Wide Web, consumers attempt to understand how corporations vary these prices over time. However, corporations change prices based on proprietary algorithms and hidden variables (e.g., the number of unsold seats on a flight). Is it possible to develop data mining techniques that will enable consumers to predict price changes under these conditions?This paper reports on a pilot study in the domain of airline ticket prices where we recorded over 12,000 price observations over a 41 day period. When trained on this data, Hamlet --- our multi-strategy data mining algorithm --- generated a predictive model that saved 341 simulated passengers $198,074 by advising them when to buy and when to postpone ticket purchases. Remarkably, a clairvoyant algorithm with complete knowledge of future prices could save at most $320,572 in our simulation, thus HAMLET's savings were 61.8% of optimal. The algorithm's savings of $198,074 represents an average savings of 23.8% for the 341 passengers for whom savings are possible. Overall, HAMLET saved 4.4% of the ticket price averaged over the entire set of 4,488 simulated passengers. Our pilot study suggests that mining of price data available over the web has the potential to save consumers substantial sums of money per annum.",
"title": ""
},
{
"docid": "7b552767a37a7d63591471195b2e002b",
"text": "Point-of-interest (POI) recommendation, which helps mobile users explore new places, has become an important location-based service. Existing approaches for POI recommendation have been mainly focused on exploiting the information about user preferences, social influence, and geographical influence. However, these approaches cannot handle the scenario where users are expecting to have POI recommendation for a specific time period. To this end, in this paper, we propose a unified recommender system, named the 'Where and When to gO' (WWO) recommender system, to integrate the user interests and their evolving sequential preferences with temporal interval assessment. As a result, the WWO system can make recommendations dynamically for a specific time period and the traditional POI recommender system can be treated as the special case of the WWO system by setting this time period long enough. Specifically, to quantify users' sequential preferences, we consider the distributions of the temporal intervals between dependent POIs in the historical check-in sequences. Then, to estimate the distributions with only sparse observations, we develop the low-rank graph construction model, which identifies a set of bi-weighted graph bases so as to learn the static user preferences and the dynamic sequential preferences in a coherent way. Finally, we evaluate the proposed approach using real-world data sets from several location-based social networks (LBSNs). The experimental results show that our method outperforms the state-of-the-art approaches for POI recommendation in terms of various metrics, such as F-measure and NDCG, with a significant margin.",
"title": ""
},
{
"docid": "013e96c212f7f58698acdae0adfcf374",
"text": "Since our ability to engineer biological systems is directly related to our ability to control gene expression, a central focus of synthetic biology has been to develop programmable genetic regulatory systems. Researchers are increasingly turning to RNA regulators for this task because of their versatility, and the emergence of new powerful RNA design principles. Here we review advances that are transforming the way we use RNAs to engineer biological systems. First, we examine new designable RNA mechanisms that are enabling large libraries of regulators with protein-like dynamic ranges. Next, we review emerging applications, from RNA genetic circuits to molecular diagnostics. Finally, we describe new experimental and computational tools that promise to accelerate our understanding of RNA folding, function and design.",
"title": ""
},
{
"docid": "7c82a4aa866d57dd6f592d848f727cff",
"text": "A novel printed diversity monopole antenna is presented for WiFi/WiMAX applications. The antenna comprises two crescent shaped radiators placed symmetrically with respect to a defected ground plane and a neutralization lines is connected between them to achieve good impedance matching and low mutual coupling. Theoretical and experimental characteristics are illustrated for this antenna, which achieves an impedance bandwidth of 54.5% (over 2.4-4.2 GHz), with a reflection coefficient <;-10 dB and mutual coupling <;-17 dB. An acceptable agreement is obtained for the computed and measured gain, radiation patterns, envelope correlation coefficient, and channel capacity loss. These characteristics demonstrate that the proposed antenna is an attractive candidate for multiple-input multiple-output portable or mobile devices.",
"title": ""
},
{
"docid": "cbc04fde0873e0aff630388ee63b53bd",
"text": "Recent works in speech recognition rely either on connectionist temporal classification (CTC) or sequence-to-sequence models for character-level recognition. CTC assumes conditional independence of individual characters, whereas attention-based models can provide nonsequential alignments. Therefore, we could use a CTC loss in combination with an attention-based model in order to force monotonic alignments and at the same time get rid of the conditional independence assumption. In this paper, we use the recently proposed hybrid CTC/attention architecture for audio-visual recognition of speech in-the-wild. To the best of our knowledge, this is the first time that such a hybrid architecture architecture is used for audio-visual recognition of speech. We use the LRS2 database and show that the proposed audio-visual model leads to an 1.3% absolute decrease in word error rate over the audio-only model and achieves the new state-of-the-art performance on LRS2 database (7% word error rate). We also observe that the audio-visual model significantly outperforms the audio-based model (up to 32.9% absolute improvement in word error rate) for several different types of noise as the signal-to-noise ratio decreases.",
"title": ""
},
{
"docid": "4507c71798a856be64381d7098f30bf4",
"text": "Adversarial examples are intentionally crafted data with the purpose of deceiving neural networks into misclassification. When we talk about strategies to create such examples, we usually refer to perturbation-based methods that fabricate adversarial examples by applying invisible perturbations onto normal data. The resulting data reserve their visual appearance to human observers, yet can be totally unrecognizable to DNN models, which in turn leads to completely misleading predictions. In this paper, however, we consider crafting adversarial examples from existing data as a limitation to example diversity. We propose a non-perturbationbased framework that generates native adversarial examples from class-conditional generative adversarial networks. As such, the generated data will not resemble any existing data and thus expand example diversity, raising the difficulty in adversarial defense. We then extend this framework to pre-trained conditional GANs, in which we turn an existing generator into an \"adversarial-example generator\". We conduct experiments on our approach for MNIST and CIFAR10 datasets and have satisfactory results, showing that this approach can be a potential alternative to previous attack strategies.",
"title": ""
},
{
"docid": "8e8dd48e3655009e74387c7bf0513b6d",
"text": "We give a unified account of boosting and logistic regression in which each learning problem is cast in terms of optimization of Bregman distances. The striking similarity of the two problems in this framework allows us to design and analyze algorithms for both simultaneously, and to easily adapt algorithms designed for one problem to the other. For both problems, we give new algorithms and explain their potential advantages over existing methods. These algorithms are iterative and can be divided into two types based on whether the parameters are updated sequentially (one at a time) or in parallel (all at once). We also describe a parameterized family of algorithms that includes both a sequential- and a parallel-update algorithm as special cases, thus showing how the sequential and parallel approaches can themselves be unified. For all of the algorithms, we give convergence proofs using a general formalization of the auxiliary-function proof technique. As one of our sequential-update algorithms is equivalent to AdaBoost, this provides the first general proof of convergence for AdaBoost. We show that all of our algorithms generalize easily to the multiclass case, and we contrast the new algorithms with the iterative scaling algorithm. We conclude with a few experimental results with synthetic data that highlight the behavior of the old and newly proposed algorithms in different settings.",
"title": ""
},
{
"docid": "f1cbd60e1bd721e185bbbd12c133ad91",
"text": "Defect prediction models are a well-known technique for identifying defect-prone files or packages such that practitioners can allocate their quality assurance efforts (e.g., testing and code reviews). However, once the critical files or packages have been identified, developers still need to spend considerable time drilling down to the functions or even code snippets that should be reviewed or tested. This makes the approach too time consuming and impractical for large software systems. Instead, we consider defect prediction models that focus on identifying defect-prone (“risky”) software changes instead of files or packages. We refer to this type of quality assurance activity as “Just-In-Time Quality Assurance,” because developers can review and test these risky changes while they are still fresh in their minds (i.e., at check-in time). To build a change risk model, we use a wide range of factors based on the characteristics of a software change, such as the number of added lines, and developer experience. A large-scale study of six open source and five commercial projects from multiple domains shows that our models can predict whether or not a change will lead to a defect with an average accuracy of 68 percent and an average recall of 64 percent. Furthermore, when considering the effort needed to review changes, we find that using only 20 percent of the effort it would take to inspect all changes, we can identify 35 percent of all defect-inducing changes. Our findings indicate that “Just-In-Time Quality Assurance” may provide an effort-reducing way to focus on the most risky changes and thus reduce the costs of developing high-quality software.",
"title": ""
},
{
"docid": "f8bc67d88bdd9409e2f3dfdc89f6d93c",
"text": "A millimeter-wave CMOS on-chip stacked Marchand balun is presented in this paper. The balun is fabricated using a top pad metal layer as the single-ended port and is stacked above two metal conductors at the next highest metal layer in order to achieve sufficient coupling to function as the differential ports. Strip metal shields are placed underneath the structure to reduce substrate losses. An amplitude imbalance of 0.5 dB is measured with attenuations below 6.5 dB at the differential output ports at 30 GHz. The corresponding phase imbalance is below 5 degrees. The area occupied is 229μm × 229μm.",
"title": ""
},
{
"docid": "65685bafe88b596530d4280e7e75d1c4",
"text": "The supernodal method for sparse Cholesky factorization represents the factor L as a set of supernodes, each consisting of a contiguous set of columns of L with identical nonzero pattern. A conventional supernode is stored as a dense submatrix. While this is suitable for sparse Cholesky factorization where the nonzero pattern of L does not change, it is not suitable for methods that modify a sparse Cholesky factorization after a low-rank change to A (an update/downdate, Ā = A ± WWT). Supernodes merge and split apart during an update/downdate. Dynamic supernodes are introduced which allow a sparse Cholesky update/downdate to obtain performance competitive with conventional supernodal methods. A dynamic supernodal solver is shown to exceed the performance of the conventional (BLAS-based) supernodal method for solving triangular systems. These methods are incorporated into CHOLMOD, a sparse Cholesky factorization and update/downdate package which forms the basis of x = A\\b MATLAB when A is sparse and symmetric positive definite.",
"title": ""
},
{
"docid": "9e804b49534bedcde2611d70c40b255d",
"text": "PURPOSE\nScreening tool of older people's prescriptions (STOPP) and screening tool to alert to right treatment (START) criteria were first published in 2008. Due to an expanding therapeutics evidence base, updating of the criteria was required.\n\n\nMETHODS\nWe reviewed the 2008 STOPP/START criteria to add new evidence-based criteria and remove any obsolete criteria. A thorough literature review was performed to reassess the evidence base of the 2008 criteria and the proposed new criteria. Nineteen experts from 13 European countries reviewed a new draft of STOPP & START criteria including proposed new criteria. These experts were also asked to propose additional criteria they considered important to include in the revised STOPP & START criteria and to highlight any criteria from the 2008 list they considered less important or lacking an evidence base. The revised list of criteria was then validated using the Delphi consensus methodology.\n\n\nRESULTS\nThe expert panel agreed a final list of 114 criteria after two Delphi validation rounds, i.e. 80 STOPP criteria and 34 START criteria. This represents an overall 31% increase in STOPP/START criteria compared with version 1. Several new STOPP categories were created in version 2, namely antiplatelet/anticoagulant drugs, drugs affecting, or affected by, renal function and drugs that increase anticholinergic burden; new START categories include urogenital system drugs, analgesics and vaccines.\n\n\nCONCLUSION\nSTOPP/START version 2 criteria have been expanded and updated for the purpose of minimizing inappropriate prescribing in older people. These criteria are based on an up-to-date literature review and consensus validation among a European panel of experts.",
"title": ""
},
{
"docid": "4d857311f86baca70700bb78c8771f22",
"text": "Randomization is a key element in sequential and distributed computing. Reasoning about randomized algorithms is highly non-trivial. In the 1980s, this initiated first proof methods, logics, and model-checking algorithms. The field of probabilistic verification has developed considerably since then. This paper surveys the algorithmic verification of probabilistic models, in particular probabilistic model checking. We provide an informal account of the main models, the underlying algorithms, applications from reliability and dependability analysis---and beyond---and describe recent developments towards automated parameter synthesis.",
"title": ""
},
{
"docid": "36911701bcf6029eb796bac182e5aa4c",
"text": "In this paper, we describe the approaches taken in the 4WARD project to address the challenges of the network of the future. Our main hypothesis is that the Future Internet must allow for the fast creation of diverse network designs and paradigms, and must also support their co-existence at run-time. We observe that a pure evolutionary path from the current Internet design will not be able to address, in a satisfactory manner, major issues like the handling of mobile users, information access and delivery, wide area sensor network applications, high management complexity, and malicious traffic that hamper network performance already today. Moreover, the Internetpsilas focus on interconnecting hosts and delivering bits has to be replaced by a more holistic vision of a network of information and content. This is a natural evolution of scope requiring nonetheless a re-design of the architecture. We describe how 4WARD directs research on network virtualisation, novel InNetworkManagement, a generic path concept, and an information centric approach, into a single framework for a diversified, but interoperable, network of the future.",
"title": ""
},
{
"docid": "6b031cd4a5b64cbb16c13ecc3c11b034",
"text": "We present a model of the effects of legal protection of minority shareholders and of cash-f low ownership by a controlling shareholder on the valuation of firms. We then test this model using a sample of 539 large firms from 27 wealthy economies. Consistent with the model, we find evidence of higher valuation of firms in countries with better protection of minority shareholders and in firms with higher cashf low ownership by the controlling shareholder. RECENT RESEARCH SUGGESTS THAT THE EXTENT of legal protection of investors in a country is an important determinant of the development of its financial markets. Where laws are protective of outside investors and well enforced, investors are willing to finance firms, and financial markets are both broader and more valuable. In contrast, where laws are unprotective of investors, the development of financial markets is stunted. Moreover, systematic differences among countries in the structure of laws and their enforcement, such as the historical origin of their laws, account for the differences in financial development ~La Porta et al. ~1997, 1998!!. How does better protection of outside investors ~both shareholders and creditors! promote financial market development? When their rights are better protected by the law, outside investors are willing to pay more for financial assets such as equity and debt. They pay more because they recognize that, with better legal protection, more of the firm’s profits would come back to them as interest or dividends as opposed to being expropriated by the entrepreneur who controls the firm. By limiting expropriation, the law raises the price that securities fetch in the marketplace. In turn, this enables more entrepreneurs to finance their investments externally, leading to the expansion of financial markets. Although the ultimate benefit of legal investor protection for financial development has now been well documented, the effect of protection on valuation has received less attention. In this paper, we present a theoretical and empirical analysis of this effect. * La Porta and Shleifer are from Harvard University, Lopez-de-Silanes from Yale University, and Vishny from the University of Chicago. We thank Altan Sert and Ekaterina Trizlova for research assistance, Malcolm Baker, Simeon Djankov, Edward Glaeser, Simon Johnson, René Stulz, Daniel Wolfenzon, Jeff Wurgler, Luigi Zingales, and three anonymous referees for comments, the NSF for support of this research. THE JOURNAL OF FINANCE • VOL. LVII, NO. 3 • JUNE 2002",
"title": ""
},
{
"docid": "e0a08bac6769382c3168922bdee1939d",
"text": "This paper presents the state of art research progress on multilingual multi-document summarization. Our method utilizes hLDA (hierarchical Latent Dirichlet Allocation) algorithm to model the documents firstly. A new feature is proposed from the hLDA modeling results, which can reflect semantic information to some extent. Then it combines this new feature with different other features to perform sentence scoring. According to the results of sentence score, it extracts candidate summary sentences from the documents to generate a summary. We have also attempted to verify the effectiveness and robustness of the new feature through experiments. After the comparison with other summarization methods, our method reveals better performance in some respects.",
"title": ""
},
{
"docid": "3bca1dd8dc1326693f5ebbe0eaf10183",
"text": "This paper presents a novel multi-way multi-stage power divider design method based on the theory of small reflections. Firstly, the application of the theory of small reflections is extended from transmission line to microwave network. Secondly, an explicit closed-form analytical formula of the input reflection coefficient, which consists of the scattering parameters of power divider elements and the lengths of interconnection lines between each element, is derived. Thirdly, the proposed formula is applied to determine the lengths of interconnection lines. A prototype of a 16-way 4-stage power divider working at 4 GHz is designed and fabricated. Both the simulation and measurement results demonstrate the validity of the proposed method.",
"title": ""
}
] |
scidocsrr
|
3a5b6b1ff0b5a75cae386444de6e9a42
|
Analysis of a Wideband Circularly Polarized Cylindrical Dielectric Resonator Antenna With Broadside Radiation Coupled With Simple Microstrip Feeding
|
[
{
"docid": "f767e0a9711522b06b8d023453f42f3a",
"text": "A novel low-cost method for generating circular polarization in a dielectric resonator antenna is proposed. The antenna comprises four rectangular dielectric layers, each one being rotated by an angle of 30 ° relative to its adjacent layers. Utilizing such an approach has provided a circular polarization over a bandwidth of 6% from 9.55 to 10.15 GHz. This has been achieved in conjunction with a 21% impedance-matching bandwidth over the same frequency range. Also, the radiation efficiency of the proposed circularly polarized dielectric resonator antenna is 93% in this frequency band of operation",
"title": ""
},
{
"docid": "69d7ec6fe0f847cebe3d1d0ae721c950",
"text": "Circularly polarized (CP) dielectric resonator antenna (DRA) subarrays have been numerically studied and experimentally verified. Elliptical CP DRA is used as the antenna element, which is excited by either a narrow slot or a probe. The elements are arranged in a 2 by 2 subarray configuration and are excited sequentially. In order to optimize the CP bandwidth, wideband feeding networks have been designed. Three different types of feeding network are studied; they are parallel feeding network, series feeding network and hybrid ring feeding network. For the CP DRA subarray with hybrid ring feeding network, the impedance matching bandwidth (S11<-10 dB) and 3-dB AR bandwidth achieved are 44% and 26% respectively",
"title": ""
}
] |
[
{
"docid": "cf30e30d7683fd2b0dec2bd6cc354620",
"text": "As online courses such as MOOCs become increasingly popular, there has been a dramatic increase for the demand for methods to facilitate this type of organisation. While resources for new courses are often freely available, they are generally not suitably organised into easily manageable units. In this paper, we investigate how state-of-the-art topic segmentation models can be utilised to automatically transform unstructured text into coherent sections, which are suitable for MOOCs content browsing. The suitability of this method with regards to course organisation is confirmed through experiments with a lecture corpus, configured explicitly according to MOOCs settings. Experimental results demonstrate the reliability and scalability of this approach over various academic disciplines. The findings also show that the topic segmentation model which used discourse cues displayed the best results overall.",
"title": ""
},
{
"docid": "5921f0049596d52bd3aea33e4537d026",
"text": "Various lines of evidence indicate that men generally experience greater sexual arousal (SA) to erotic stimuli than women. Yet, little is known regarding the neurobiological processes underlying such a gender difference. To investigate this issue, functional magnetic resonance imaging was used to compare the neural correlates of SA in 20 male and 20 female subjects. Brain activity was measured while male and female subjects were viewing erotic film excerpts. Results showed that the level of perceived SA was significantly higher in male than in female subjects. When compared to viewing emotionally neutral film excerpts, viewing erotic film excerpts was associated, for both genders, with bilateral blood oxygen level dependent (BOLD) signal increases in the anterior cingulate, medial prefrontal, orbitofrontal, insular, and occipitotemporal cortices, as well as in the amygdala and the ventral striatum. Only for the group of male subjects was there evidence of a significant activation of the thalamus and hypothalamus, a sexually dimorphic area of the brain known to play a pivotal role in physiological arousal and sexual behavior. When directly compared between genders, hypothalamic activation was found to be significantly greater in male subjects. Furthermore, for male subjects only, the magnitude of hypothalamic activation was positively correlated with reported levels of SA. These findings reveal the existence of similarities and dissimilarities in the way the brain of both genders responds to erotic stimuli. They further suggest that the greater SA generally experienced by men, when viewing erotica, may be related to the functional gender difference found here with respect to the hypothalamus.",
"title": ""
},
{
"docid": "27a8fa8b0daba64d98a72814d71436a5",
"text": "n engl j med 371;21 nejm.org November 20, 2014 1972 From the Department of Medicine, NUHospital Organization, Uddevalla (M.L., S.D.), Department of Molecular and Clinical Medicine, Institute of Medicine, University of Gothenburg (M.L., S.G., A.R.), Center of Registers in Region Västra Götaland (A.-M.S.), Statistiska Konsultgruppen (A.P.), Nordic School of Public Health (H.W.), and Sahlgrenska University Hospital (A.R.), Gothenburg — all in Sweden; Saint Luke’s Mid America Heart Institute (M.K.), University of Missouri– Kansas City School of Medicine (M.K., M.C.), and Children’s Mercy Hospital (M.C.), Kansas City, MO; and the University of Kansas School of Medicine, Kansas City, KS (M.C.). Address reprint requests to Dr. Lind at the Department of Medicine, Uddevalla Hospital, 451 80 Uddevalla, Uddevalla, Sweden, or at lind . marcus@ telia . com.",
"title": ""
},
{
"docid": "26aecc52cd3e4eaec05011333a9a7814",
"text": "This paper introduces the concept of letting an RDBMS Optimizer optimize its own environment. In our project, we have used the DB2 Optimizer to tackle the index selection problem, a variation of the knapack problem. This paper will discuss our implementation of index recommendation, the user interface, and provide measurements on the quality of the recommended indexes.",
"title": ""
},
{
"docid": "c4363d156be5aecc6034012a0fad9462",
"text": "Previous approaches for scene text detection usually rely on manually defined sliding windows. In this paper, an intuitive regionbased method is presented to detect multi-oriented text without any prior knowledge regarding the textual shape. We first introduce a Cornerbased Region Proposal Network (CRPN) that employs corners to estimate the possible locations of text instances instead of shifting a set of default anchors. The proposals generated by CRPN are geometry adaptive, which makes our method robust to various text aspect ratios and orientations. Moreover, we design a simple embedded data augmentation module inside the region-wise subnetwork, which not only ensures the model utilizes training data more efficiently, but also learns to find the most representative instance of the input images for training. Experimental results on public benchmarks confirm that the proposed method is capable of achieving comparable performance with the stateof-the-art methods. On the ICDAR 2013 and 2015 datasets, it obtains F-measure of 0.876 and 0.845 respectively. The code is publicly available at https://github.com/xhzdeng/crpn.",
"title": ""
},
{
"docid": "c81e420ce3c6d215cdd0da0213cda47d",
"text": "We show inapproximability results concerning minimization of nondeterministic finite automata (nfa’s) as well as regular expressions relative to given nfa’s, regular expressions or deterministic finite automata (dfa’s). We show that it is impossible to efficiently minimize a given nfa or regular expression with n states, transitions, resp. symbols within the factor o(n), unless P = PSPACE. Our inapproximability results for a given dfa with n states are based on cryptographic assumptions and we show that any efficient algorithm will have an approximation factor of at least n poly(log n) . Our setup also allows us to analyze the minimum consistent dfa problem. Classification: Automata and Formal Languages, Computational Complexity, Approximability",
"title": ""
},
{
"docid": "e494296ace10bb17fa2e4c6170cce351",
"text": "Methamphetamine (MA) is a highly addictive psychostimulant drug that principally affects the monoamine neurotransmitter systems of the brain and results in feelings of alertness, increased energy and euphoria. The drug is particularly popular with young adults, due to its wide availability, relatively low cost, and long duration of psychoactive effects. Extended use of MA is associated with many health problems that are not limited to the central nervous system, and contribute to increased morbidity and mortality in drug users. Numerous studies, using complementary techniques, have provided evidence that chronic MA use is associated with substantial neurotoxicity and cognitive impairment. These pathoeywords:",
"title": ""
},
{
"docid": "c87112a95e41fccd9fc33bedf45e2bb5",
"text": "Smart grid introduces a wealth of promising applications for upcoming fifth-generation mobile networks (5G), enabling households and utility companies to establish a two-way digital communications dialogue, which can benefit both of them. The utility can monitor real-time consumption of end users and take proper measures (e.g., real-time pricing) to shape their consumption profile or to plan enough supply to meet the foreseen demand. On the other hand, a smart home can receive real-time electricity prices and adjust its consumption to minimize its daily electricity expenditure, while meeting the energy need and the satisfaction level of the dwellers. Smart Home applications for smart phones are also a promising use case, where users can remotely control their appliances, while they are away at work or on their ways home. Although these emerging services can evidently boost the efficiency of the market and the satisfaction of the consumers, they may also introduce new attack surfaces making the grid vulnerable to financial losses or even physical damages. In this paper, we propose an architecture to secure smart grid communications incorporating an intrusion detection system, composed of distributed components collaborating with each other to detect price integrity or load alteration attacks in different segments of an advanced metering infrastructure.",
"title": ""
},
{
"docid": "fe18b85af942d35b4e4ec1165e2e63c3",
"text": "The retrofitting of existing buildings to resist the seismic loads is very important to avoid losing lives or financial disasters. The aim at retrofitting processes is increasing total structure strength by increasing stiffness or ductility ratio. In addition, the response modification factors (R) have to satisfy the code requirements for suggested retrofitting types. In this study, two types of jackets are used, i.e. full reinforced concrete jackets and surrounding steel plate jackets. The study is carried out on an existing building in Madinah by performing static pushover analysis before and after retrofitting the columns. The selected model building represents nearly all-typical structure lacks structure built before 30 years ago in Madina City, KSA. The comparison of the results indicates a good enhancement of the structure respect to the applied seismic forces. Also, the response modification factor of the RC building is evaluated for the studied cases before and after retrofitting. The design of all vertical elements (columns) is given. The results show that the design of retrofitted columns satisfied the code's design stress requirements. However, for some retrofitting types, the ductility requirements represented by response modification factor do not satisfy KSA design code (SBC301). Keywords—Concrete jackets, steel jackets, RC buildings pushover analysis, non-linear analysis.",
"title": ""
},
{
"docid": "0bcb2fdf59b88fca5760bfe456d74116",
"text": "A good distance metric is crucial for unsupervised learning from high-dimensional data. To learn a metric without any constraint or class label information, most unsupervised metric learning algorithms appeal to projecting observed data onto a low-dimensional manifold, where geometric relationships such as local or global pairwise distances are preserved. However, the projection may not necessarily improve the separability of the data, which is the desirable outcome of clustering. In this paper, we propose a novel unsupervised adaptive metric learning algorithm, called AML, which performs clustering and distance metric learning simultaneously. AML projects the data onto a low-dimensional manifold, where the separability of the data is maximized. We show that the joint clustering and distance metric learning can be formulated as a trace maximization problem, which can be solved via an iterative procedure in the EM framework. Experimental results on a collection of benchmark data sets demonstrated the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "b61985ecdb51982e6e31b19c862f18e2",
"text": "Autonomous indoor navigation of Micro Aerial Vehicles (MAVs) possesses many challenges. One main reason is because GPS has limited precision in indoor environments. The additional fact that MAVs are not able to carry heavy weight or power consuming sensors, such as range finders, makes indoor autonomous navigation a challenging task. In this paper, we propose a practical system in which a quadcopter autonomously navigates indoors and finds a specific target, i.e. a book bag, by using a single camera. A deep learning model, Convolutional Neural Network (ConvNet), is used to learn a controller strategy that mimics an expert pilot’s choice of action. We show our system’s performance through real-time experiments in diverse indoor locations. To understand more about our trained network, we use several visualization techniques.",
"title": ""
},
{
"docid": "6dce88afec3456be343c6a477350aa49",
"text": "In order to capture rich language phenomena, neural machine translation models have to use a large vocabulary size, which requires high computing time and large memory usage. In this paper, we alleviate this issue by introducing a sentence-level or batch-level vocabulary, which is only a very small sub-set of the full output vocabulary. For each sentence or batch, we only predict the target words in its sentencelevel or batch-level vocabulary. Thus, we reduce both the computing time and the memory usage. Our method simply takes into account the translation options of each word or phrase in the source sentence, and picks a very small target vocabulary for each sentence based on a wordto-word translation model or a bilingual phrase library learned from a traditional machine translation model. Experimental results on the large-scale English-toFrench task show that our method achieves better translation performance by 1 BLEU point over the large vocabulary neural machine translation system of Jean et al. (2015).",
"title": ""
},
{
"docid": "2117e3c0cf7854c8878417b7d84491ce",
"text": "We designed a new annotation scheme for formalising relation structures in research papers, through the investigation of computer science papers. The annotation scheme is based on the hypothesis that identifying the role of entities and events that are described in a paper is useful for intelligent information retrieval in academic literature, and the role can be determined by the relationship between the author and the described entities or events, and relationships among them. Using the scheme, we have annotated research abstracts from the IPSJ Journal published in Japanese by the Information Processing Society of Japan. On the basis of the annotated corpus, we have developed a prototype information extraction system which has the facility to classify sentences according to the relationship between entities mentioned, to help find the role of the entity in which the searcher is interested.",
"title": ""
},
{
"docid": "7527cfe075027c9356645419c4fd1094",
"text": "ive Multi-Document Summarization via Phrase Selection and Merging∗ Lidong Bing§ Piji Li Yi Liao Wai Lam Weiwei Guo† Rebecca J. Passonneau‡ §Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA USA Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong †Yahoo Labs, Sunnyvale, CA, USA ‡Center for Computational Learning Systems, Columbia University, New York, NY, USA §lbing@cs.cmu.edu, {pjli, yliao, wlam}@se.cuhk.edu.hk †wguo@yahoo-inc.com, ‡becky@ccls.columbia.edu",
"title": ""
},
{
"docid": "89af4054eb70309acab13bdb283bde3b",
"text": "How to model distribution of sequential data, including but not limited to speech and human motions, is an important ongoing research problem. It has been demonstrated that model capacity can be significantly enhanced by introducing stochastic latent variables in the hidden states of recurrent neural networks. Simultaneously, WaveNet, equipped with dilated convolutions, achieves astonishing empirical performance in natural speech generation task. In this paper, we combine the ideas from both stochastic latent variables and dilated convolutions, and propose a new architecture to model sequential data, termed as Stochastic WaveNet, where stochastic latent variables are injected into the WaveNet structure. We argue that Stochastic WaveNet enjoys powerful distribution modeling capacity and the advantage of parallel training from dilated convolutions. In order to efficiently infer the posterior distribution of the latent variables, a novel inference network structure is designed based on the characteristics of WaveNet architecture. State-of-the-art performances on benchmark datasets are obtained by Stochastic WaveNet on natural speech modeling and high quality human handwriting samples can be generated as well.",
"title": ""
},
{
"docid": "52a01a3bb4122e313c3146363b3fb954",
"text": "We demonstrate how movements of multiple people or objects within a building can be displayed on a network representation of the building, where nodes are rooms and edges are doors. Our representation shows the direction of movements between rooms and the order in which rooms are visited, while avoiding occlusion or overplotting when there are repeated visits or multiple moving people or objects. We further propose the use of a hybrid visualization that mixes geospatial and topological (network-based) representations, enabling focus-in-context and multi-focal visualizations. An experimental comparison found that the topological representation was significantly faster than the purely geospatial representation for three out of four tasks.",
"title": ""
},
{
"docid": "ab05c141b9d334f488cfb08ad9ed2137",
"text": "Cellular communications are undergoing significant evolutions in order to accommodate the load generated by increasingly pervasive smart mobile devices. Dynamic access network adaptation to customers' demands is one of the most promising paths taken by network operators. To that end, one must be able to process large amount of mobile traffic data and outline the network utilization in an automated manner. In this paper, we propose a framework to analyze broad sets of Call Detail Records (CDRs) so as to define categories of mobile call profiles and classify network usages accordingly. We evaluate our framework on a CDR dataset including more than 300 million calls recorded in an urban area over 5 months. We show how our approach allows to classify similar network usage profiles and to tell apart normal and outlying call behaviors.",
"title": ""
},
{
"docid": "ad059332e36849857c9bf1a52d5b0255",
"text": "Interaction Design Beyond Human Computer Interaction instructions guide, service manual guide and maintenance manual guide for the products. Before employing this manual, service or maintenance guide you should know detail regarding your products cause this manual for expert only. We hope ford alternator wiring diagram internal regulator and yet another manual of these lists a good choice for your to repair, fix and solve your product or service or device problems don't try an oversight.",
"title": ""
},
{
"docid": "3c6fad424d710325a64e51f22c1abb65",
"text": "The performance of neural networks, especially the currently popular form of deep 1 neural networks, is often limited by the underlying hardware. Computations in 2 deep neural networks are expensive, have large memory footprint, and are power 3 hungry. Conventional reduced-precision numerical formats, such as fixed-point 4 and floating point, cannot accurately represent deep neural network parameters 5 with a nonlinear distribution and small dynamic range. Recently proposed posit 6 numerical format with tapered precision represents small values more accurately 7 than the other formats. In this work, we propose an ultra-low precision deep neural 8 network, PositNN, that uses posits during inference. The efficacy of PositNN is 9 demonstrated on a deep neural network architecture with two datasets (MNIST 10 and Fashion MNIST), where an 8-bit PositNN outperforms other {5-8}-bit low11 precision neural networks and a 32-bit floating point baseline network. 12",
"title": ""
},
{
"docid": "d8f58ed573a9a719fde7b1817236cdeb",
"text": "In a remarkably short timeframe, developing apps for smartphones has gone from an arcane curiosity to an essential skill set. Employers are scrambling to find developers capable of transforming their ideas into apps. Educators interested in filling that void are likewise trying to keep up, and face difficult decisions in designing a meaningful course. There are a plethora of development platforms, but two stand out because of their popularity and divergent approaches - Apple's iOS, and Google's Android. In this paper, we will compare the two, and address the question: which should faculty teach?",
"title": ""
}
] |
scidocsrr
|
ca9b76b73525ec2ae6144b049ddb873e
|
A New Lane Line Segmentation and Detection Method based on Inverse Perspective Mapping
|
[
{
"docid": "42cfea27f8dcda6c58d2ae0e86f2fb1a",
"text": "Most of the lane marking detection algorithms reported in the literature are suitable for highway scenarios. This paper presents a novel clustered particle filter based approach to lane detection, which is suitable for urban streets in normal traffic conditions. Furthermore, a quality measure for the detection is calculated as a measure of reliability. The core of this approach is the usage of weak models, i.e. the avoidance of strong assumptions about the road geometry. Experiments were carried out in Sydney urban areas with a vehicle mounted laser range scanner and a ccd camera. Through experimentations, we have shown that a clustered particle filter can be used to efficiently extract lane markings.",
"title": ""
},
{
"docid": "261f146b67fd8e13d1ad8c9f6f5a8845",
"text": "Vision based automatic lane tracking system requires information such as lane markings, road curvature and leading vehicle be detected before capturing the next image frame. Placing a camera on the vehicle dashboard and capturing the forward view results in a perspective view of the road image. The perspective view of the captured image somehow distorts the actual shape of the road, which involves the width, height, and depth. Respectively, these parameters represent the x, y and z components. As such, the image needs to go through a pre-processing stage to remedy the distortion using a transformation technique known as an inverse perspective mapping (IPM). This paper outlines the procedures involved.",
"title": ""
}
] |
[
{
"docid": "e3ccebbfb328e525c298816950d135a5",
"text": "It is important for robots to be able to decide whether they can go through a space or not, as they navigate through a dynamic environment. This capability can help them avoid injury or serious damage, e.g., as a result of running into people and obstacles, getting stuck, or falling off an edge. To this end, we propose an unsupervised and a near-unsupervised method based on Generative Adversarial Networks (GAN) to classify scenarios as traversable or not based on visual data. Our method is inspired by the recent success of data-driven approaches on computer vision problems and anomaly detection, and reduces the need for vast amounts of negative examples at training time. Collecting negative data indicating that a robot should not go through a space is typically hard and dangerous because of collisions; whereas collecting positive data can be automated and done safely based on the robot’s own traveling experience. We verify the generality and effectiveness of the proposed approach on a test dataset collected in a previously unseen environment with a mobile robot. Furthermore, we show that our method can be used to build costmaps (we call as ”GoNoGo” costmaps) for robot path planning using visual data only.",
"title": ""
},
{
"docid": "5e5c2619ea525ef77cbdaabb6a21366f",
"text": "Data profiling is an information analysis technique on data stored inside database. Data profiling purpose is to ensure data quality by detecting whether the data in the data source compiles with the established business rules. Profiling could be performed using multiple analysis techniques depending on the data element to be analyzed. The analysis process also influenced by the data profiling tool being used. This paper describes tehniques of profiling analysis using open-source tool OpenRefine. The method used in this paper is case study method, using data retrieved from BPOM Agency website for checking commodity traditional medicine permits. Data attributes that became the main concern of this paper is Nomor Ijin Edar (NIE / distribution permit number) and registrar company name. The result of this research were suggestions to improve data quality on NIE and company name, which consists of data cleansing and improvement to business process and applications.",
"title": ""
},
{
"docid": "d9d68377bb73d7abca39455b49abe8b7",
"text": "A boosting-based method of learning a feed-forward artificial neural network (ANN) with a single layer of hidden neurons and a single output neuron is presented. Initially, an algorithm called Boostron is described that learns a single-layer perceptron using AdaBoost and decision stumps. It is then extended to learn weights of a neural network with a single hidden layer of linear neurons. Finally, a novel method is introduced to incorporate non-linear activation functions in artificial neural network learning. The proposed method uses series representation to approximate non-linearity of activation functions, learns the coefficients of nonlinear terms by AdaBoost. It adapts the network parameters by a layer-wise iterative traversal of neurons and an appropriate reduction of the problem. A detailed performances comparison of various neural network models learned the proposed methods and those learned using the Least Mean Squared learning (LMS) and the resilient back-propagation (RPROP) is provided in this paper. Several favorable results are reported for 17 synthetic and real-world datasets with different degrees of difficulties for both binary and multi-class problems. Email addresses: mubasher.baig@nu.edu.pk, awais@lums.edu.pk (Mirza M. Baig, Mian. M. Awais), alfy@kfupm.edu.sa (El-Sayed M. El-Alfy) Preprint submitted to Neurocomputing March 9, 2017",
"title": ""
},
{
"docid": "9da1449675af42a2fc75ba8259d22525",
"text": "The purpose of the research reported here was to test empirically a conceptualization of brand associations that consists of three dimensions: brand image, brand attitude and perceived quality. A better understanding of brand associations is needed to facilitate further theoretical development and practical measurement of the construct. Three studies were conducted to: test a protocol for developing product category specific measures of brand image; investigate the dimensionality of the brand associations construct; and explore whether the degree of dimensionality of brand associations varies depending upon a brand's familiarity. Findings confirm the efficacy of the brand image protocol and indicate that brand associations differ across brands and product categories. The latter finding supports the conclusion that brand associations for different products should be measured using different items. As predicted, dimensionality of brand associations was found to be influenced by brand familiarity. Research interest in branding continues to be strong in the marketing literature (e.g. Alden et al., 1999; Kirmani et al., 1999; Erdem, 1998). Likewise, marketing managers continue to realize the power of brands, manifest in the recent efforts of many companies to build strong Internet `̀ brands'' such as amazon.com and msn.com (Narisetti, 1998). The way consumers perceive brands is a key determinant of long-term businessconsumer relationships (Fournier, 1998). Hence, building strong brand perceptions is a top priority for many firms today (Morris, 1996). Despite the importance of brands and consumer perceptions of them, marketing researchers have not used a consistent definition or measurement technique to assess consumer perceptions of brands. To address this, two scholars have recently developed extensive conceptual treatments of branding and related issues. Keller (1993; 1998) refers to consumer perceptions of brands as brand knowledge, consisting of brand awareness (recognition and recall) and brand image. Keller defines brand image as `̀ perceptions about a brand as reflected by the brand associations held in consumer memory''. These associations include perceptions of brand quality and attitudes toward the brand. Similarly, Aaker (1991, 1996a) proposes that brand associations are anything linked in memory to a brand. Keller and Aaker both appear to hypothesize that consumer perceptions of brands are The current issue and full text archive of this journal is available at http://www.emerald-library.com The authors thank Paul Herr, Donnie Lichtenstein, Rex Moody, Dave Cravens and Julie Baker for helpful comments on earlier versions of this manuscript. Funding was provided by the Graduate School of the University of Colorado and the Charles Tandy American Enterprise Center at Texas Christian University. Top priority for many firms today 350 JOURNAL OF PRODUCT & BRAND MANAGEMENT, VOL. 9 NO. 6 2000, pp. 350-368, # MCB UNIVERSITY PRESS, 1061-0421 An executive summary for managers and executive readers can be found at the end of this article multi-dimensional, yet many of the dimensions they identify appear to be very similar. Furthermore, Aaker's and Keller's conceptualizations of consumers' psychological representation of brands have not been subjected to empirical validation. 
Consequently, it is difficult to determine if the various constructs they discuss, such as brand attitudes and perceived quality, are separate dimensions of brand associations, (multi-dimensional) as they propose, or if they are simply indicators of brand associations (unidimensional). A number of studies have appeared recently which measure some aspect of consumer brand associations, but these studies do not use consistent measurement techniques and hence, their results are not comparable. They also do not discuss the issue of how to conceptualize brand associations, but focus on empirically identifying factors which enhance or diminish one component of consumer perceptions of brands (e.g. Berthon et al., 1997; Keller and Aaker, 1997; Keller et al., 1998; RoedderJohn et al., 1998; Simonin and Ruth, 1998). Hence, the proposed multidimensional conceptualizations of brand perceptions have not been tested empirically, and the empirical work operationalizes these perceptions as uni-dimensional. Our goal is to provide managers of brands a practical measurement protocol based on a parsimonious conceptual model of brand associations. The specific objectives of the research reported here are to: . test a protocol for developing category-specific measures of brand image; . examine the conceptualization of brand associations as a multidimensional construct by testing brand image, brand attitude, and perceived quality in the same model; and . explore whether the degree of dimensionality of brand associations varies depending on a brand's familiarity. In subsequent sections of this paper we explain the theoretical background of our research, describe three studies we conducted to test our conceptual model, and discuss the theoretical and managerial implications of the results. Conceptual background Brand associations According to Aaker (1991), brand associations are the category of a brand's assets and liabilities that include anything `̀ linked'' in memory to a brand (Aaker, 1991). Keller (1998) defines brand associations as informational nodes linked to the brand node in memory that contain the meaning of the brand for consumers. Brand associations are important to marketers and to consumers. Marketers use brand associations to differentiate, position, and extend brands, to create positive attitudes and feelings toward brands, and to suggest attributes or benefits of purchasing or using a specific brand. Consumers use brand associations to help process, organize, and retrieve information in memory and to aid them in making purchase decisions (Aaker, 1991, pp. 109-13). While several research efforts have explored specific elements of brand associations (Gardner and Levy, 1955; Aaker, 1991; 1996a; 1996b; Aaker and Jacobson, 1994; Aaker, 1997; Keller, 1993), no research has been reported that combined these elements in the same study in order to measure how they are interrelated. Practical measurement protocol Importance to marketers and consumers JOURNAL OF PRODUCT & BRAND MANAGEMENT, VOL. 9 NO. 6 2000 351 Scales to measure partially brand associations have been developed. For example, Park and Srinivasan (1994) developed items to measure one dimension of toothpaste brand associations that included the brand's perceived ability to fight plaque, freshen breath and prevent cavities. This scale is clearly product category specific. Aaker (1997) developed a brand personality scale with five dimensions and 42 items. This scale is not practical to use in some applied studies because of its length. 
Also, the generalizability of the brand personality scale is limited because many brands are not personality brands, and no protocol is given to adapt the scale. As Aaker (1996b, p. 113) notes, `̀ using personality as a general indicator of brand strength will be a distortion for some brands, particularly those that are positioned with respect to functional advantages and value''. Hence, many previously developed scales are too specialized to allow for general use, or are too long to be used in some applied settings. Another important issue that has not been empirically examined in the literature is whether brand associations represent a one-dimensional or multi-dimensional construct. Although this may appear to be an obvious question, we propose later in this section the conditions under which this dimensionality may be more (or less) measurable. As previously noted, Aaker (1991) defines brand associations as anything linked in memory to a brand. Three related constructs that are, by definition, linked in memory to a brand, and which have been researched conceptually and measured empirically, are brand image, brand attitude, and perceived quality. We selected these three constructs as possible dimensions or indicators of brand associations in our conceptual model. Of the many possible components of brand associations we could have chosen, we selected these three constructs because they: (1) are the three most commonly cited consumer brand perceptions in the empirical marketing literature; (2) have established, reliable, published measures in the literature; and (3) are three dimensions discussed frequently in prior conceptual research (Aaker, 1991; 1996; Keller, 1993; 1998). We conceptualize brand image (functional and symbolic perceptions), brand attitude (overall evaluation of a brand), and perceived quality (judgments of overall superiority) as possible dimensions of brand associations (see Figure 1). Brand image, brand attitude, and perceived quality Brand image is defined as the reasoned or emotional perceptions consumers attach to specific brands (Dobni and Zinkhan,1990) and is the first consumer brand perception that was identified in the marketing literature (Gardner and Levy, 1955). Brand image consists of functional and symbolic brand beliefs. A measurement technique using semantic differential items generated for the relevant product category has been suggested for measuring brand image (Dolich, 1969; Fry and Claxton, 1971). Brand image associations are largely product category specific and measures should be customized for the unique characteristics of specific brand categories (Park and Srinivasan, 1994; Bearden and Etzel, 1982). Brand attitude is defined as consumers' overall evaluation of a brand ± whether good or bad (Mitchell and Olson, 1981). Semantic differential scales measuring brand attitude have frequently appeared in the marketing Linked in memory to a brand Reasoned or emotional perceptions 352 JOURNAL OF PRODUCT & BRAND MANAGEMENT, VOL. 9 NO. 6 2000 literature. Bruner and Hensel (1996) reported 66 published studies which measured brand attitud",
"title": ""
},
{
"docid": "a8670bebe828e07111f962d72c5909aa",
"text": "Personalities are general properties of humans and other animals. Different personality traits are phenotypically correlated, and heritabilities of personality traits have been reported in humans and various animals. In great tits, consistent heritable differences have been found in relation to exploration, which is correlated with various other personality traits. In this paper, we investigate whether or not risk-taking behaviour is part of these avian personalities. We found that (i) risk-taking behaviour is repeatable and correlated with exploratory behaviour in wild-caught hand-reared birds, (ii) in a bi-directional selection experiment on 'fast' and 'slow' early exploratory behaviour, bird lines tend to differ in risk-taking behaviour, and (iii) within-nest variation of risk-taking behaviour is smaller than between-nest variation. To show that risk-taking behaviour has a genetic component in a natural bird population, we bred great tits in the laboratory and artificially selected 'high' and 'low' risk-taking behaviour for two generations. Here, we report a realized heritability of 19.3 +/- 3.3% (s.e.m.) for risk-taking behaviour. With these results we show in several ways that risk-taking behaviour is linked to exploratory behaviour, and we therefore have evidence for the existence of avian personalities. Moreover, we prove that there is heritable variation in more than one correlated personality trait in a natural population, which demonstrates the potential for correlated evolution.",
"title": ""
},
{
"docid": "9aa21d2b6ea52e3e1bdd3e2795d1bf03",
"text": "Dining cryptographers networks (or DC-nets) are a privacypreserving primitive devised by Chaum for anonymous message publication. A very attractive feature of the basic DC-net is its non-interactivity. Subsequent to key establishment, players may publish their messages in a single broadcast round, with no player-to-player communication. This feature is not possible in other privacy-preserving tools like mixnets. A drawback to DC-nets, however, is that malicious players can easily jam them, i.e., corrupt or block the transmission of messages from honest parties, and may do so without being traced. Several researchers have proposed valuable methods of detecting cheating players in DC-nets. This is usually at the cost, however, of multiple broadcast rounds, even in the optimistic case, and often of high computational and/or communications overhead, particularly for fault recovery. We present new DC-net constructions that simultaneously achieve noninteractivity and high-probability detection and identification of cheating players. Our proposals are quite efficient, imposing a basic cost that is linear in the number of participating players. Moreover, even in the case of cheating in our proposed system, just one additional broadcast round suffices for full fault recovery. Among other tools, our constructions employ bilinear maps, a recently popular cryptographic technique for reducing communication complexity.",
"title": ""
},
{
"docid": "be8efe56e56bccf1668faa7b9c0a6e57",
"text": "Deep convolutional neural networks (CNNs) have been actively adopted in the field of music information retrieval, e.g. genre classification, mood detection, and chord recognition. However, the process of learning and prediction is little understood, particularly when it is applied to spectrograms. We introduce auralisation of a CNN to understand its underlying mechanism, which is based on a deconvolution procedure introduced in [2]. Auralisation of a CNN is converting the learned convolutional features that are obtained from deconvolution into audio signals. In the experiments and discussions, we explain trained features of a 5-layer CNN based on the deconvolved spectrograms and auralised signals. The pairwise correlations per layers with varying different musical attributes are also investigated to understand the evolution of the learnt features. It is shown that in the deep layers, the features are learnt to capture textures, the patterns of continuous distributions, rather than shapes of lines.",
"title": ""
},
{
"docid": "133a48a5c6c568d33734bd95d4aec0b2",
"text": "The topic information of conversational content is important for continuation with communication, so topic detection and tracking is one of important research. Due to there are many topic transform occurring frequently in long time communication, and the conversation maybe have many topics, so it's important to detect different topics in conversational content. This paper detects topic information by using agglomerative clustering of utterances and Dynamic Latent Dirichlet Allocation topic model, uses proportion of verb and noun to analyze similarity between utterances and cluster all utterances in conversational content by agglomerative clustering algorithm. The topic structure of conversational content is friability, so we use speech act information and gets the hypernym information by E-HowNet that obtains robustness of word categories. Latent Dirichlet Allocation topic model is used to detect topic in file units, it just can detect only one topic if uses it in conversational content, because of there are many topics in conversational content frequently, and also uses speech act information and hypernym information to train the latent Dirichlet allocation models, then uses trained models to detect different topic information in conversational content. For evaluating the proposed method, support vector machine is developed for comparison. According to the experimental results, we can find the proposed method outperforms the approach based on support vector machine in topic detection and tracking in spoken dialogue.",
"title": ""
},
{
"docid": "09985252933e82cf1615dabcf1e6d9a2",
"text": "Facial landmark detection plays a very important role in many facial analysis applications such as identity recognition, facial expression analysis, facial animation, 3D face reconstruction as well as facial beautification. With the recent advance of deep learning, the performance of facial landmark detection, including on unconstrained inthe-wild dataset, has seen considerable improvement. This paper presents a survey of deep facial landmark detection for 2D images and video. A comparative analysis of different face alignment approaches is provided as well as some future research directions.",
"title": ""
},
{
"docid": "f55ac9e319ad8b9782a34251007a5d06",
"text": "The availability in machine-readable form of descriptions of the structure of documents, as well as of the document discourse (e.g. the scientific discourse within scholarly articles), is crucial for facilitating semantic publishing and the overall comprehension of documents by both users and machines. In this paper we introduce DoCO, the Document Components Ontology, an OWL 2 DL ontology that provides a general-purpose structured vocabulary of document elements to describe both structural and rhetorical document components in RDF. In addition to describing the formal description of the ontology, this paper showcases its utility in practice in a variety of our own applications and other activities of the Semantic Publishing community that rely on DoCO to annotate and retrieve document components of scholarly articles.",
"title": ""
},
{
"docid": "b3fc899c49ceb699f62b43bb0808a1b2",
"text": "Social network users publicly share a wide variety of information with their followers and the general public ranging from their opinions, sentiments and personal life activities. There has already been significant advance in analyzing the shared information from both micro (individual user) and macro (community level) perspectives, giving access to actionable insight about user and community behaviors. The identification of personal life events from user’s profiles is a challenging yet important task, which if done appropriately, would facilitate more accurate identification of users’ preferences, interests and attitudes. For instance, a user who has just broken his phone, is likely to be upset and also be looking to purchase a new phone. While there is work that identifies tweets that include mentions of personal life events, our work in this paper goes beyond the state of the art by predicting a future personal life event that a user will be posting about on Twitter solely based on the past tweets. We propose two architectures based on recurrent neural networks, namely the classification and generation architectures, that determine the future personal life event of a user. We evaluate our work based on a gold standard Twitter life event dataset and compare our work with the state of the art baseline technique for life event detection. While presenting performance measures, we also discuss the limitations of our work in this paper.",
"title": ""
},
{
"docid": "c0b000176bba658ef702872f0174b602",
"text": "Distributed Denial of Service (DDoS) attacks represent a major threat to uninterrupted and efficient Internet service. In this paper, we empirically evaluate several major information metrics, namely, Hartley entropy, Shannon entropy, Renyi’s entropy, generalized entropy, Kullback–Leibler divergence and generalized information distance measure in their ability to detect both low-rate and high-rate DDoS attacks. These metrics can be used to describe characteristics of network traffic data and an appropriate metric facilitates building an effective model to detect both low-rate and high-rate DDoS attacks. We use MIT Lincoln Laboratory, CAIDA and TUIDS DDoS datasets to illustrate the efficiency and effectiveness of each metric for DDoS detection. © 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a673945eaa9b5a350f7d7421c45ac238",
"text": "The intention of this study was to identify the bacterial pathogens infecting Oreochromis niloticus (Nile tilapia) and Clarias gariepinus (African catfish), and to establish the antibiotic susceptibility of fish bacteria in Uganda. A total of 288 fish samples from 40 fish farms (ponds, cages, and tanks) and 8 wild water sites were aseptically collected and bacteria isolated from the head kidney, liver, brain and spleen. The isolates were identified by their morphological characteristics, conventional biochemical tests and Analytical Profile Index test kits. Antibiotic susceptibility of selected bacteria was determined by the Kirby-Bauer disc diffusion method. The following well-known fish pathogens were identified at a farm prevalence of; Aeromonas hydrophila (43.8%), Aeromonas sobria (20.8%), Edwardsiella tarda (8.3%), Flavobacterium spp. (4.2%) and Streptococcus spp. (6.3%). Other bacteria with varying significance as fish pathogens were also identified including Plesiomonas shigelloides (25.0%), Chryseobacterium indoligenes (12.5%), Pseudomonas fluorescens (10.4%), Pseudomonas aeruginosa (4.2%), Pseudomonas stutzeri (2.1%), Vibrio cholerae (10.4%), Proteus spp. (6.3%), Citrobacter spp. (4.2%), Klebsiella spp. (4.2%) Serratia marcescens (4.2%), Burkholderia cepacia (2.1%), Comamonas testosteroni (8.3%) and Ralstonia picketti (2.1%). Aeromonas spp., Edwardsiella tarda and Streptococcus spp. were commonly isolated from diseased fish. Aeromonas spp. (n = 82) and Plesiomonas shigelloides (n = 73) were evaluated for antibiotic susceptibility. All isolates tested were susceptible to at-least ten (10) of the fourteen antibiotics evaluated. High levels of resistance were however expressed by all isolates to penicillin, oxacillin and ampicillin. This observed resistance is most probably intrinsic to those bacteria, suggesting minimal levels of acquired antibiotic resistance in fish bacteria from the study area. To our knowledge, this is the first study to establish the occurrence of several bacteria species infecting fish; and to determine antibiotic susceptibility of fish bacteria in Uganda. The current study provides baseline information for future reference and fish disease management in the country.",
"title": ""
},
{
"docid": "b3923d263c230f527f06b85275522f60",
"text": "Cloud computing is a relatively new concept that offers the potential to deliver scalable elastic services to many. The notion of pay-per use is attractive and in the current global recession hit economy it offers an economic solution to an organizations’ IT needs. Computer forensics is a relatively new discipline born out of the increasing use of computing and digital storage devices in criminal acts (both traditional and hi-tech). Computer forensic practices have been around for several decades and early applications of their use can be charted back to law enforcement and military investigations some 30 years ago. In the last decade computer forensics has developed in terms of procedures, practices and tool support to serve the law enforcement community. However, it now faces possibly its greatest challenges in dealing with cloud computing. Through this paper we explore these challenges and suggest some possible solutions.",
"title": ""
},
{
"docid": "169ed8d452a7d0dd9ecf90b9d0e4a828",
"text": "Technology is common in the domain of knowledge distribution, but it rarely enhances the process of knowledge use. Distribution delivers knowledge to the potential user's desktop but cannot dictate what he or she does with it thereafter. It would be interesting to envision technologies that help to manage personal knowledge as it applies to decisions and actions. The viewpoints about knowledge vary from individual, community, society, personnel development or national development. Personal Knowledge Management (PKM) integrates Personal Information Management (PIM), focused on individual skills, with Knowledge Management (KM). KM Software is a subset of Enterprise content management software and which contains a range of software that specialises in the way information is collected, stored and/or accessed. This article focuses on KM skills, PKM and PIM Open Sources Software, Social Personal Management and also highlights the Comparison of knowledge base management software and its use.",
"title": ""
},
{
"docid": "7095bf529a060dd0cd7eeb2910998cf8",
"text": "The proliferation of internet along with the attractiveness of the web in recent years has made web mining as the research area of great magnitude. Web mining essentially has many advantages which makes this technology attractive to researchers. The analysis of web user’s navigational pattern within a web site can provide useful information for applications like, server performance enhancements, restructuring a web site, direct marketing in ecommerce etc. The navigation paths may be explored based on some similarity criteria, in order to get the useful inference about the usage of web. The objective of this paper is to propose an effective clustering technique to group users’ sessions by modifying K-means algorithm and suggest a method to compute the distance between sessions based on similarity of their web access path, which takes care of the issue of the user sessions that are of variable",
"title": ""
},
{
"docid": "409d104fa3e992ac72c65b004beaa963",
"text": "The 19-item Body-Image Questionnaire, developed by our team and first published in this journal in 1987 by Bruchon-Schweitzer, was administered to 1,222 male and female French subjects. A principal component analysis of their responses yielded an axis we interpreted as a general Body Satisfaction dimension. The four-factor structure observed in 1987 was not replicated. Body Satisfaction was associated with sex, health, and with current and future emotional adjustment.",
"title": ""
},
{
"docid": "d6bbec8d1426cacba7f8388231f04add",
"text": "This paper presents a novel multiple-frequency resonant inverter for induction heating (IH) applications. By adopting a center tap transformer, the proposed resonant inverter can give load switching frequency as twice as the isolated-gate bipolar transistor (IGBT) switching frequency. The structure and the operation of the proposed topology are described in order to demonstrate how the output frequency of the proposed resonant inverter is as twice as the switching frequency of IGBTs. In addition to this, the IGBTs in the proposed topology work in zero-voltage switching during turn-on phase of the switches. The new topology is verified by the experimental results using a prototype for IH applications. Moreover, increased efficiency of the proposed inverter is verified by comparison with conventional designs.",
"title": ""
},
{
"docid": "6ec0b302a485b787b3d21b89f79a0110",
"text": "This paper draws on primary and secondary data to propose a taxonomy of strategies, or \"schools.\" for knowledge management. The primary purpose of this fratiiework is to guide executives on choices to initiate knowledge tnanagement projects according to goals, organizational character, and technological, behavioral, or economic biases. It may also be useful to teachers in demonstrating the scope of knowledge management and to researchers in generating propositions for further study.",
"title": ""
},
{
"docid": "945bf7690169b5f2e615324fb133bc19",
"text": "Exponential growth in the number of scientific publications yields the need for effective automatic analysis of rhetorical aspects of scientific writing. Acknowledging the argumentative nature of scientific text, in this work we investigate the link between the argumentative structure of scientific publications and rhetorical aspects such as discourse categories or citation contexts. To this end, we (1) augment a corpus of scientific publications annotated with four layers of rhetoric annotations with argumentation annotations and (2) investigate neural multi-task learning architectures combining argument extraction with a set of rhetorical classification tasks. By coupling rhetorical classifiers with the extraction of argumentative components in a joint multi-task learning setting, we obtain significant performance gains for different rhetorical analysis tasks.",
"title": ""
}
] |
scidocsrr
|
accdad77be6421a27072d5d5d444bc69
|
Deep Learning UI Design Patterns of Mobile Apps
|
[
{
"docid": "3c5feaacf73220d2dec8e319bc0ad929",
"text": "Design plays an important role in adoption of apps. App design, however, is a complex process with multiple design activities. To enable data-driven app design applications, we present interaction mining -- capturing both static (UI layouts, visual details) and dynamic (user flows, motion details) components of an app's design. We present ERICA, a system that takes a scalable, human-computer approach to interaction mining existing Android apps without the need to modify them in any way. As users interact with apps through ERICA, it detects UI changes, seamlessly records multiple data-streams in the background, and unifies them into a user interaction trace. Using ERICA we collected interaction traces from over a thousand popular Android apps. Leveraging this trace data, we built machine learning classifiers to detect elements and layouts indicative of 23 common user flows. User flows are an important component of UX design and consists of a sequence of UI states that represent semantically meaningful tasks such as searching or composing. With these classifiers, we identified and indexed more than 3000 flow examples, and released the largest online search engine of user flows in Android apps.",
"title": ""
}
] |
[
{
"docid": "9832eb4b5d47267d7b99e87bf853d30e",
"text": "Generative Adversarial Networks (GANs) have recently achieved significant improvement on paired/unpaired image-to-image translation, such as photo→ sketch and artist painting style transfer. However, existing models can only be capable of transferring the low-level information (e.g. color or texture changes), but fail to edit high-level semantic meanings (e.g., geometric structure or content) of objects. On the other hand, while some researches can synthesize compelling real-world images given a class label or caption, they cannot condition on arbitrary shapes or structures, which largely limits their application scenarios and interpretive capability of model results. In this work, we focus on a more challenging semantic manipulation task, which aims to modify the semantic meaning of an object while preserving its own characteristics (e.g. viewpoints and shapes), such as cow→sheep, motor→ bicycle, cat→dog. To tackle such large semantic changes, we introduce a contrasting GAN (contrast-GAN) with a novel adversarial contrasting objective. Instead of directly making the synthesized samples close to target data as previous GANs did, our adversarial contrasting objective optimizes over the distance comparisons between samples, that is, enforcing the manipulated data be semantically closer to the real data with target category than the input data. Equipped with the new contrasting objective, a novel mask-conditional contrast-GAN architecture is proposed to enable disentangle image background with object semantic changes. Experiments on several semantic manipulation tasks on ImageNet and MSCOCO dataset show considerable performance gain by our contrast-GAN over other conditional GANs. Quantitative results further demonstrate the superiority of our model on generating manipulated results with high visual fidelity and reasonable object semantics.",
"title": ""
},
{
"docid": "d1c33990b7642ea51a8a568fa348d286",
"text": "Connectionist temporal classification CTC has recently shown improved performance and efficiency in automatic speech recognition. One popular decoding implementation is to use a CTC model to predict the phone posteriors at each frame and then perform Viterbi beam search on a modified WFST network. This is still within the traditional frame synchronous decoding framework. In this paper, the peaky posterior property of CTC is carefully investigated and it is found that ignoring blank frames will not introduce additional search errors. Based on this phenomenon, a novel phone synchronous decoding framework is proposed by removing tremendous search redundancy due to blank frames, which results in significant search speed up. The framework naturally leads to an extremely compact phone-level acoustic space representation: CTC lattice. With CTC lattice, efficient and effective modular speech recognition approaches, second pass rescoring for large vocabulary continuous speech recognition LVCSR, and phone-based keyword spotting KWS, are also proposed in this paper. Experiments showed that phone synchronous decoding can achieve 3-4 times search speed up without performance degradation compared to frame synchronous decoding. Modular LVCSR with CTC lattice can achieve further WER improvement. KWS with CTC lattice not only achieved significant equal error rate improvement, but also greatly reduced the KWS model size and increased the search speed.",
"title": ""
},
{
"docid": "ff8dec3914e16ae7da8801fe67421760",
"text": "A hypothesized need to form and maintain strong, stable interpersonal relationships is evaluated in light of the empirical literature. The need is for frequent, nonaversive interactions within an ongoing relational bond. Consistent with the belongingness hypothesis, people form social attachments readily under most conditions and resist the dissolution of existing bonds. Belongingness appears to have multiple and strong effects on emotional patterns and on cognitive processes. Lack of attachments is linked to a variety of ill effects on health, adjustment, and well-being. Other evidence, such as that concerning satiation, substitution, and behavioral consequences, is likewise consistent with the hypothesized motivation. Several seeming counterexamples turned out not to disconfirm the hypothesis. Existing evidence supports the hypothesis that the need to belong is a powerful, fundamental, and extremely pervasive motivation.",
"title": ""
},
{
"docid": "b0d959bdb58fbcc5e324a854e9e07b81",
"text": "It is well known that the road signs play’s a vital role in road safety its ignorance results in accidents .This Paper proposes an Idea for road safety by using a RFID based traffic sign recognition system. By using it we can prevent the road risk up to a great extend.",
"title": ""
},
{
"docid": "dc2974cf577934fa0b7cea2a91a057d3",
"text": "Can the sentiment contained in tweets serve as a meaningful proxy to predict match outcomes and if so, can the magnitude of outcomes be predicted based on a degree of sentiment? To answer these questions we constructed the CentralSport system to gather tweets related to the twenty clubs of the English Premier League and analyze their sentiment content, not only to predict match outcomes, but also to use as a wagering decision system. From our analysis, tweet sentiment outperformed wagering on odds-favorites, with higher payout returns (best $2,704.63 versus odds-only $1,887.88) but lower accuracy, a trade-off from non-favorite wagering. This result may suggest a performance degradation that arises from conservatism in the odds-setting process, especially when three match results are possible outcomes. We found that leveraging a positive tweet sentiment surge over club average could net a payout of $3,011.20. Lastly, we found that as the magnitude of positive sentiment between two clubs increased, so too did the point spread; 0.42 goal difference for clubs with a slight positive edge versus 0.90 goal difference for an overwhelming difference in positive sentiment. In both these",
"title": ""
},
{
"docid": "5a06eed96bd877138e1f484b2c771c38",
"text": "This chapter presents an initial “4+1” theory of value-based software engineering (VBSE). The engine in the center is the stakeholder win-win Theory W, which addresses the questions of “which values are important?” and “how is success assured?” for a given software engineering enterprise. The four additional theories that it draws upon are utility theory (how important are the values?), decision theory (how do stakeholders’ values determine decisions?), dependency theory (how do dependencies affect value realization?), and control theory (how to adapt to change and control value realization?). After discussing the motivation and context for developing a VBSE theory and the criteria for a good theory, the chapter discusses how the theories work together into a process for defining, developing, and evolving software-intensive systems. It also illustrates the application of the theory to a supply chain system example, discusses how well the theory meets the criteria for a good theory, and identifies an agenda for further research.",
"title": ""
},
{
"docid": "47c723b0c41fb26ed7caa077388e2e1b",
"text": "Automatic dependent surveillance-broadcast (ADS-B) is the communications protocol currently being rolled out as part of next-generation air transportation systems. As the heart of modern air traffic control, it will play an essential role in the protection of two billion passengers per year, in addition to being crucial to many other interest groups in aviation. The inherent lack of security measures in the ADS-B protocol has long been a topic in both the aviation circles and in the academic community. Due to recently published proof-of-concept attacks, the topic is becoming ever more pressing, particularly with the deadline for mandatory implementation in most airspaces fast approaching. This survey first summarizes the attacks and problems that have been reported in relation to ADS-B security. Thereafter, it surveys both the theoretical and practical efforts that have been previously conducted concerning these issues, including possible countermeasures. In addition, the survey seeks to go beyond the current state of the art and gives a detailed assessment of security measures that have been developed more generally for related wireless networks such as sensor networks and vehicular ad hoc networks, including a taxonomy of all considered approaches.",
"title": ""
},
{
"docid": "49a53a8cb649c93d685e832575acdb28",
"text": "We address the vehicle detection and classification problems using Deep Neural Networks (DNNs) approaches. Here we answer to questions that are specific to our application including how to utilize DNN for vehicle detection, what features are useful for vehicle classification, and how to extend a model trained on a limited size dataset, to the cases of extreme lighting condition. Answering these questions we propose our approach that outperforms state-of-the-art methods, and achieves promising results on image with extreme lighting conditions.",
"title": ""
},
{
"docid": "1c66d84dfc8656a23e2a4df60c88ab51",
"text": "Our method aims at reasoning over natural language questions and visual images. Given a natural language question about an image, our model updates the question representation iteratively by selecting image regions relevant to the query and learns to give the correct answer. Our model contains several reasoning layers, exploiting complex visual relations in the visual question answering (VQA) task. The proposed network is end-to-end trainable through back-propagation, where its weights are initialized using pre-trained convolutional neural network (CNN) and gated recurrent unit (GRU). Our method is evaluated on challenging datasets of COCO-QA [19] and VQA [2] and yields state-of-the-art performance.",
"title": ""
},
{
"docid": "e434b1af70f57b8d92e190834a1a9242",
"text": "Evidence-based reasoning is at the core of many problem solving and decision making tasks in a wide variety of domains. This paper introduces a computational theory of evidence-based reasoning, the architecture of a learning agent shell which incorporates general knowledge for evidence-based reasoning, a methodology that uses the shell to rapidly develop cognitive assistants in a specific domain, and a sample cognitive assistant for intelligence analysis.",
"title": ""
},
{
"docid": "dc9547eb3de2bb805b9473997377feb9",
"text": "A repeated-measures, waiting list control design was used to assess efficacy of a social skills intervention for autistic spectrum children focused on individual and group LEGO play. The intervention combined aspects of behavior therapy, peer modeling and naturalistic communication strategies. Close interaction and joint attention to task play an important role in both group and individual therapy activities. The goal of treatment was to improve social competence (SC) which was construed as reflecting three components: (1) motivation to initiate social contact with peers; (2) ability to sustain interaction with peers for a period of time: and (3) overcoming autistic symptoms of aloofness and rigidity. Measures for the first two variables were based on observation of subjects in unstructured situations with peers; and the third variable was assessed using a structured rating scale, the SI subscale of the GARS. Results revealed significant improvement on all three measures at both 12 and 24 weeks with no evidence of gains during the waiting list period. No gender differences were found on outcome, and age of clients was not correlated with outcome. LEGO play appears to be a particularly effective medium for social skills intervention, and other researchers and clinicians are encouraged to attempt replication of this work, as well as to explore use of LEGO in other methodologies, or with different clinical populations.",
"title": ""
},
{
"docid": "1e1bafd8f06a4f80415b338a949624db",
"text": "Commercial polypropylene pelvic mesh products were characterized in terms of their chemical compositions and molecular weight characteristics before and after implantation. These isotactic polypropylene mesh materials showed clear signs of oxidation by both Fourier-transform infrared spectroscopy and scanning electron microscopy with energy dispersive X-ray spectroscopy (SEM/EDS). The oxidation was accompanied by a decrease in both weight-average and z-average molecular weights and narrowing of the polydispersity index relative to that of the non-implanted material. SEM revealed the formation of transverse cracking of the fibers which generally, but with some exceptions, increased with implantation time. Collectively these results, as well as the loss of flexibility and embrittlement of polypropylene upon implantation as reported by other workers, may only be explained by in vivo oxidative degradation of polypropylene.",
"title": ""
},
{
"docid": "7a2e4588826541a1b6d3a493d7601e0c",
"text": "Sports analytics in general, and football (soccer in USA) analytics in particular, have evolved in recent years in an amazing way, thanks to automated or semi-automated sensing technologies that provide high-fidelity data streams extracted from every game. In this paper we propose a data-driven approach and show that there is a large potential to boost the understanding of football team performance. From observational data of football games we extract a set of pass-based performance indicators and summarize them in the H indicator. We observe a strong correlation among the proposed indicator and the success of a team, and therefore perform a simulation on the four major European championships (78 teams, almost 1500 games). The outcome of each game in the championship was replaced by a synthetic outcome (win, loss or draw) based on the performance indicators computed for each team. We found that the final rankings in the simulated championships are very close to the actual rankings in the real championships, and show that teams with high ranking error show extreme values of a defense/attack efficiency measure, the Pezzali score. Our results are surprising given the simplicity of the proposed indicators, suggesting that a complex systems' view on football data has the potential of revealing hidden patterns and behavior of superior quality.",
"title": ""
},
{
"docid": "39838881287fd15b29c20f18b7e1d1eb",
"text": "In the software industry, a challenge firms often face is how to effectively commercialize innovations. An emerging business model increasingly embraced by entrepreneurs, called freemium, combines “free” and “premium” consumption in association with a product or service. In a nutshell, this model involves giving away for free a certain level or type of consumption while making money on premium consumption. We develop a unifying multi-period microeconomic framework with network externalities embedded into consumer learning in order to capture the essence of conventional for-fee models, several key freemium business models such as feature-limited or time-limited, and uniform market seeding models. Under moderate informativeness of word-of-mouth signals, we fully characterize conditions under which firms prefer freemium models, depending on consumer priors on the value of individual software modules, perceptions of crossmodule synergies, and overall value distribution across modules. Within our framework, we show that uniform seeding is always dominated by either freemium models or conventional for-fee models. We further discuss managerial and policy implications based on our analysis. Interestingly, we show that freemium, in one form or another, is always preferred from the social welfare perspective, and we provide guidance on when the firms need to be incentivized to align their interests with the society’s. Finally, we discuss how relaxing some of the assumptions of our model regarding costs or informativeness and heterogeneity of word of mouth may reduce the profit gap between seeding and the other models, and potentially lead to seeding becoming the preferred approach for the firm.",
"title": ""
},
{
"docid": "8277f94cff0f5cd28ffbf5e0d6898c2a",
"text": "There is evidence that men experience more sexual arousal than women but also that women in mid-luteal phase experience more sexual arousal than women outside this phase. Recently, a few functional brain imaging studies have tackled the issue of gender differences as pertaining to reactions to erotica. The question of whether or not gender differences in reactions to erotica are maintained with women in different phases has not yet been answered from a functional brain imaging perspective. In order to examine this issue, functional MRI was performed in 22 male and 22 female volunteers. Subjects viewed erotic film excerpts alternating with emotionally neutral excerpts in a standard block-design paradigm. Arousal to erotic stimuli was evaluated using standard rating scales after scanning. Two-sample t-test with uncorrected P<0.001 values for a priori determined region of interests involved in processing of erotic stimuli and with corrected P<0.05 revealed gender differences: Comparing women in mid-luteal phase and during their menses, superior activation was revealed for women in mid-luteal phase in the anterior cingulate, left insula, and orbitofrontal cortex. A superior activation for men was found in the left thalamus, the bilateral amygdala, the anterior cingulate, the bilateral orbitofrontal, bilateral parahippocampal, and insular regions, which were maintained at a corrected P in the amygdala, the insula, and thalamus. There were no areas of significant superior activation for women neither in mid-luteal phase nor during their menses. Our results indicate that there are differences between women in the two cycle times in cerebral activity during viewing of erotic stimuli. Furthermore, gender differences with women in mid-luteal phases are similar to those in females outside the mid-luteal phase.",
"title": ""
},
{
"docid": "37fa5de113e6b3d2353d4e404f5547e8",
"text": "Recently a unified brain theory was proposed (Friston, 2010) attempting to explain action, perception, and learning (Friston, 2010). It is based on a predictive brain with Bayesian updating, and Andy Clark evaluates this approach in “Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science.” If such a theory exists it should incorporate multiple theories applicable to brain science such as evolutionary theory (Calvin, 1987), information theory (Borst and Theunissen, 1999; Friston, 2010), thermodynamics (Kirkaldy, 1965) and also provide us with an advanced model for a better understanding of more philosophical issues such as the so-called free will problem. The free will problem is a philosophical battle between compatibilists and incompatibilists. According to compatibilists like Hobbes, Hume, James, and Dennet, free will is not in danger if determinism is true. Free will is perfectly compatible with a deterministic working of our universe and brain. Incompatibilists disagree but differ about the conclusion to be drawn. Hard incompatibilists such as Spinoza and Laplace conclude that there is no free will because determinism is true, while soft incompatibilists like Reid, Eccles, and Penrose believe that our free will exists because determinism is false. In arguing for indeterminism incompatibilist libertarians often refer to fashionable theories such as quantum mechanics or thermodynamics which apply stochastic, non-linear models in order to describe physical processes. Nowadays these nonlinear models are also applied to brain processes (Ezhov and Khrennikov, 2005), though philosophers still disagree whether this really shows that determinism is wrong and indeterminism or chance is sufficient to decide freely. Leaving aside this philosophical issue whether a “free will” exists or not, the authors propose a theoretical framework to explain our “experience of a free will.” This framework is based on the predictive brain concept which is not entirely new. Historically, two different models of perception have been developed, one classical view which goes back to the philosophical writings of Plato, St. Augustine, Descartes and assumes that the brain passively absorbs sensory input, processes this information, and reacts with a motor and autonomic response to these passively obtained sensory stimuli (Freeman, 2003). In contrast, a second model of perception, which goes back to Aristotle and Thomas Aquinas, stresses that the brain actively looks for the information it predicts to be present in the environment, based on an intention or goal (Freeman, 2003). The sensed information is used to adjust the initial prediction (=prior belief) to the reality of the environment, resulting in a new adapted belief about the world (posterior belief), by a mechanism known as Bayesian updating. The brain hereby tries to reduce environmental uncertainty, based on the free-energy principle (Friston, 2010). The free-energy principle states that the brain must minimize its informational (=Shannonian) free-energy, i.e., must reduce by the process of perception its uncertainty (its prediction errors) about its environment (Friston, 2010). It does so by using thermodynamic (=Gibbs) free-energy, in other words glucose and oxygen, creating transient structure in neural networks, thereby producing an emergent percept or action plan (De Ridder et al., 2012) (Figure 1A). 
As completely predictable stimuli do not reduce uncertainty (there is no prediction error), they are not worth conscious processing. Unpredictable things, on the other hand, are not to be ignored, because it is crucial to experience them in order to update our understanding of the environment. From an evolutionary point of view, our experience of “free will” can best be approached through the development of flexible behavioral decision making (Brembs, 2011). Predators can very easily take advantage of deterministic flight reflexes by predicting future prey behavior (Catania, 2009). The opposite, i.e., random behavior, is unpredictable but highly inefficient. Thus learning mechanisms evolved to permit flexible behavior as a modification of reflexive behavioral strategies (Brembs, 2011). In order to do so, not one but multiple representations and action patterns should be generated by the brain, as was already proposed by von Helmholtz. He found the eye to be optically too poor for vision to be possible, and suggested that vision ultimately depended on computational inference, i.e., predictions, based on assumptions and conclusions from incomplete data and relying on previous experiences. The fact that multiple predictions are generated could, for example, explain the Rubin vase illusion, the Necker cube and the many other stimuli studied in perceptual rivalry, even in monocular rivalry. Which percept or action plan is selected is determined by which prediction is best adapted to the environment that is actively explored (Figure 1A). In this",
"title": ""
},
{
"docid": "fc9b4cb8c37ffefde9d4a7fa819b9417",
"text": "Automatic neural architecture design has shown its potential in discovering powerful neural network architectures. Existing methods, no matter based on reinforcement learning or evolutionary algorithms (EA), conduct architecture search in a discrete space, which is highly inefficient. In this paper, we propose a simple and efficient method to automatic neural architecture design based on continuous optimization. We call this new approach neural architecture optimization (NAO). There are three key components in our proposed approach: (1) An encoder embeds/maps neural network architectures into a continuous space. (2) A predictor takes the continuous representation of a network as input and predicts its accuracy. (3) A decoder maps a continuous representation of a network back to its architecture. The performance predictor and the encoder enable us to perform gradient based optimization in the continuous space to find the embedding of a new architecture with potentially better accuracy. Such a better embedding is then decoded to a network by the decoder. Experiments show that the architecture discovered by our method is very competitive for image classification task on CIFAR-10 and language modeling task on PTB, outperforming or on par with the best results of previous architecture search methods with a significantly reduction of computational resources. Specifically we obtain 2.11% test set error rate for CIFAR-10 image classification task and 56.0 test set perplexity of PTB language modeling task. The best discovered architectures on both tasks are successfully transferred to other tasks such as CIFAR-100 and WikiText-2. Furthermore, combined with the recent proposed weight sharing mechanism, we discover powerful architecture on CIFAR-10 (with error rate 3.53%) and on PTB (with test set perplexity 56.6), with very limited computational resources (less than 10 GPU hours) for both tasks.",
"title": ""
},
{
"docid": "6aed3ffa374139fa9c4e0b7c1afb7841",
"text": "Recent longitudinal and cross-sectional aging research has shown that personality traits continue to change in adulthood. In this article, we review the evidence for mean-level change in personality traits, as well as for individual differences in change across the life span. In terms of mean-level change, people show increased selfconfidence, warmth, self-control, and emotional stability with age. These changes predominate in young adulthood (age 20-40). Moreover, mean-level change in personality traits occurs in middle and old age, showing that personality traits can change at any age. In terms of individual differences in personality change, people demonstrate unique patterns of development at all stages of the life course, and these patterns appear to be the result of specific life experiences that pertain to a person's stage of life.",
"title": ""
},
{
"docid": "f5d6f3e0f408cbccfcc5d7da86453d53",
"text": "Financial fraud detection plays a crucial role in the stability of institutions and the economy at large. Data mining methods have been used to detect/flag cases of fraud due to a large amount of data and possible concept drift. In the financial statement fraud detection domain, instances containing missing values are usually discarded from experiments and this may lead to a loss of crucial information. Imputation has been previously ruled out as an option to keep instances with missing values. This paper will examine the impact of imputation in financial statement fraud in two ways. Firstly, seven similarity measures are used to benchmark ground truth data against imputed datasets where seven imputation methods are used. Thereafter, the predictive performance of imputed datasets is compared to the original data classification using three cost-sensitive classifiers: Support Vector Machines, Näıve Bayes and Random Forest.",
"title": ""
},
{
"docid": "3bbf4bd1daaf0f6f916268907410b88f",
"text": "UNLABELLED\nNoncarious cervical lesions are highly prevalent and may have different etiologies. Regardless of their origin, be it acid erosion, abrasion, or abfraction, restoring these lesions can pose clinical challenges, including access to the lesion, field control, material placement and handling, marginal finishing, patient discomfort, and chair time. This paper describes a novel technique for minimizing these challenges and optimizing the restoration of noncarious cervical lesions using a technique the author describes as the class V direct-indirect restoration. With this technique, clinicians can create precise extraoral margin finishing and polishing, while maintaining periodontal health and controlling polymerization shrinkage stress.\n\n\nCLINICAL SIGNIFICANCE\nThe clinical technique described in this article has the potential for being used routinely in treating noncarious cervical lesions, especially in cases without easy access and limited field control. Precise margin finishing and polishing is one of the greatest benefits of the class V direct-indirect approach, as the author has seen it work successfully in his practice over the past five years.",
"title": ""
}
] |
scidocsrr
|
5a8898a69b38590d857af0faca5e6947
|
SEPIC converter based Photovoltaic system with Particle swarm Optimization MPPT
|
[
{
"docid": "470093535d4128efa9839905ab2904a5",
"text": "Photovolatic systems normally use a maximum power point tracking (MPPT) technique to continuously deliver the highest possible power to the load when variations in the insolation and temperature occur. It overcomes the problem of mismatch between the solar arrays and the given load. A simple method of tracking the maximum power points (MPP’s) and forcing the system to operate close to these points is presented. The principle of energy conservation is used to derive the largeand small-signal model and transfer function. By using the proposed model, the drawbacks of the state-space-averaging method can be overcome. The TI320C25 digital signal processor (DSP) was used to implement the proposed MPPT controller, which controls the dc/dc converter in the photovoltaic system. Simulations and experimental results show excellent performance.",
"title": ""
},
{
"docid": "180dd2107c6a39e466b3d343fa70174f",
"text": "This paper presents simulation and hardware implementation of incremental conductance (IncCond) maximum power point tracking (MPPT) used in solar array power systems with direct control method. The main difference of the proposed system to existing MPPT systems includes elimination of the proportional-integral control loop and investigation of the effect of simplifying the control circuit. Contributions are made in several aspects of the whole system, including converter design, system simulation, controller programming, and experimental setup. The resultant system is capable of tracking MPPs accurately and rapidly without steady-state oscillation, and also, its dynamic performance is satisfactory. The IncCond algorithm is used to track MPPs because it performs precise control under rapidly changing atmospheric conditions. MATLAB and Simulink were employed for simulation studies, and Code Composer Studio v3.1 was used to program a TMS320F2812 digital signal processor. The proposed system was developed and tested successfully on a photovoltaic solar panel in the laboratory. Experimental results indicate the feasibility and improved functionality of the system.",
"title": ""
},
{
"docid": "238c3e34ad2fcb4a4ef9d98aea468bd8",
"text": "Performance of Photovoltaic (PV) system is greatly dependent on the solar irradiation and operating temperature. Due to partial shading condition, the characteristics of a PV system considerably change and often exhibit several local maxima with one global maxima. Conventional Maximum Power Point Tracking (MPPT) techniques can easily be trapped at local maxima under partial shading. This significantly reduced the energy yield of the PV systems. In order to solve this problem, this paper proposes a Maximum Power Point tracking algorithm based on particle swarm optimization (PSO) that is capable of tracking global MPP under partial shaded conditions. The performance of proposed algorithm is evaluated by means of simulation in MATLAB Simulink. The proposed algorithm is applied to a grid connected PV system, in which a Boost (step up) DC-DC converter satisfactorily tracks the global peak.",
"title": ""
}
] |
[
{
"docid": "72e5b92632824d3633539727125763bc",
"text": "NB-IoT system focues on indoor coverage, low cost, long battery life, and enabling a large number of connected devices. The NB-IoT system in the inband mode should share the antenna with the LTE system and support mult-PRB to cover many terminals. Also, the number of used antennas should be minimized for price competitiveness. In this paper, the structure and implementation of the NB-IoT base station system will be describe.",
"title": ""
},
{
"docid": "8b79816cc07237489dafde316514702a",
"text": "In this dataset paper we describe our work on the collection and analysis of public WhatsApp group data. Our primary goal is to explore the feasibility of collecting and using WhatsApp data for social science research. We therefore present a generalisable data collection methodology, and a publicly available dataset for use by other researchers. To provide context, we perform statistical exploration to allow researchers to understand what public WhatsApp group data can be collected and how this data can be used. Given the widespread use of WhatsApp, our techniques to obtain public data and potential applications are important for the community.",
"title": ""
},
{
"docid": "0fbd2e65c5d818736486ffb1ec5e2a6d",
"text": "We establish linear profile decompositions for the fourth order linear Schrödinger equation and for certain fourth order perturbations of the linear Schrödinger equation, in dimensions greater than or equal to two. We apply these results to prove dichotomy results on the existence of extremizers for the associated Stein–Tomas/Strichartz inequalities; along the way, we also obtain lower bounds for the norms of these operators.",
"title": ""
},
{
"docid": "8bbbaab2cf7825ca98937de14908e655",
"text": "Software Reliability Model is categorized into two, one is static model and the other one is dynamic model. Dynamic models observe the temporary behavior of debugging process during testing phase. In Static Models, modeling and analysis of program logic is done on the same code. A Model which describes about error detection in software Reliability is called Software Reliability Growth Model. This paper reviews various existing software reliability models and there failure intensity function and the mean value function. On the basis of this review a model is proposed for the software reliability having different mean value function and failure intensity function.",
"title": ""
},
{
"docid": "d4878e0d2aaf33bb5d9fc9c64605c4d2",
"text": "Labeled Faces in the Wild (LFW) database has been widely utilized as the benchmark of unconstrained face verification and due to big data driven machine learning methods, the performance on the database approaches nearly 100%. However, we argue that this accuracy may be too optimistic because of some limiting factors. Besides different poses, illuminations, occlusions and expressions, crossage face is another challenge in face recognition. Different ages of the same person result in large intra-class variations and aging process is unavoidable in real world face verification. However, LFW does not pay much attention on it. Thereby we construct a Cross-Age LFW (CALFW) which deliberately searches and selects 3,000 positive face pairs with age gaps to add aging process intra-class variance. Negative pairs with same gender and race are also selected to reduce the influence of attribute difference between positive/negative pairs and achieve face verification instead of attributes classification. We evaluate several metric learning and deep learning methods on the new database. Compared to the accuracy on LFW, the accuracy drops about 10%-17% on CALFW.",
"title": ""
},
{
"docid": "d6d3d2762bc45cc71be488b8e11712a8",
"text": "NAND flash memory is being widely adopted as a storage medium for embedded devices. FTL (Flash Translation Layer) is one of the most essential software components in NAND flash-based embedded devices as it allows to use legacy files systems by emulating the traditional block device interface on top of NAND flash memory.\n In this paper, we propose a novel FTL, called μ-FTL. The main design goal of μ-FTL is to reduce the memory foot-print as small as possible, while providing the best performance by supporting multiple mapping granularities based on variable-sized extents. The mapping information is managed by μ-Tree, which offers an efficient index structure for NAND flash memory. Our evaluation results show that μ-FTL significantly outperforms other block-mapped FTLs with the same memory size by up to 89.7%.",
"title": ""
},
{
"docid": "97b4de3dc73e0a6d7e17f94dff75d7ac",
"text": "Evolution in cloud services and infrastructure has been constantly reshaping the way we conduct business and provide services in our day to day lives. Tools and technologies created to improve such cloud services can also be used to impair them. By using generic tools like nmap, hping and wget, one can estimate the placement of virtual machines in a cloud infrastructure with a high likelihood. Moreover, such knowledge and tools can also be used by adversaries to further launch various kinds of attacks. In this paper we focus on one such specific kind of attack, namely a denial of service (DoS), where an attacker congests a bottleneck network channel shared among virtual machines (VMs) coresident on the same physical node in the cloud infrastructure. We evaluate the behavior of this shared network channel using Click modular router on DETER testbed. We illustrate that game theoretic concepts can be used to model this attack as a two-player game and recommend strategies for defending against such attacks.",
"title": ""
},
{
"docid": "bdb051eb50c3b23b809e06bed81710fc",
"text": "PURPOSE\nTo test the hypothesis that physicians' empathy is associated with positive clinical outcomes for diabetic patients.\n\n\nMETHOD\nA correlational study design was used in a university-affiliated outpatient setting. Participants were 891 diabetic patients, treated between July 2006 and June 2009, by 29 family physicians. Results of the most recent hemoglobin A1c and LDL-C tests were extracted from the patients' electronic records. The results of hemoglobin A1c tests were categorized into good control (<7.0%) and poor control (>9.0%). Similarly, the results of the LDL-C tests were grouped into good control (<100) and poor control (>130). The physicians, who completed the Jefferson Scale of Empathy in 2009, were grouped into high, moderate, and low empathy scorers. Associations between physicians' level of empathy scores and patient outcomes were examined.\n\n\nRESULTS\nPatients of physicians with high empathy scores were significantly more likely to have good control of hemoglobin A1c (56%) than were patients of physicians with low empathy scores (40%, P < .001). Similarly, the proportion of patients with good LDL-C control was significantly higher for physicians with high empathy scores (59%) than physicians with low scores (44%, P < .001). Logistic regression analyses indicated that physicians' empathy had a unique contribution to the prediction of optimal clinical outcomes after controlling for physicians' and patients' gender and age, and patients' health insurance.\n\n\nCONCLUSIONS\nThe hypothesis of a positive relationship between physicians' empathy and patients' clinical outcomes was confirmed, suggesting that physicians' empathy is an important factor associated with clinical competence and patient outcomes.",
"title": ""
},
{
"docid": "e4dd72a52d4961f8d4d8ee9b5b40d821",
"text": "Social media users spend several hours a day to read, post and search for news on microblogging platforms. Social media is becoming a key means for discovering news. However, verifying the trustworthiness of this information is becoming even more challenging. In this study, we attempt to address the problem of rumor detection and belief investigation on Twitter. Our definition of rumor is an unverifiable statement, which spreads misinformation or disinformation. We adopt a supervised rumors classification task using the standard dataset. By employing the Tweet Latent Vector (TLV) feature, which creates a 100-d vector representative of each tweet, we increased the rumor retrieval task precision up to 0.972. We also introduce the belief score and study the belief change among the rumor posters between 2010 and 2016.",
"title": ""
},
{
"docid": "b02f5af836c0d18933de091044ccb916",
"text": "This research presents a mobile augmented reality (MAR) travel guide, named CorfuAR, which supports personalized recommendations. We report the development process and devise a theoretical model that explores the adoption of MAR applications through their emotional impact. A field study on Corfu visitors (n=105) shows that the functional properties of CorfuAR evoke feelings of pleasure and arousal, which, in turn, influence the behavioral intention of using it. This is the first study that empirically validates the relation between functional system properties, user emotions, and adoption behavior. The paper discusses also the theoretical and managerial implications of our study.",
"title": ""
},
{
"docid": "f7f1deeda9730056876db39b4fe51649",
"text": "Fracture in bone occurs when an external force exercised upon the bone is more than what the bone can tolerate or bear. As, its consequence structure and muscular power of the bone is disturbed and bone becomes frail, which causes tormenting pain on the bone and ends up in the loss of functioning of bone. Accurate bone structure and fracture detection is achieved using various algorithms which removes noise, enhances image details and highlights the fracture region. Automatic detection of fractures from x-ray images is considered as an important process in medical image analysis by both orthopaedic and radiologic aspect. Manual examination of x-rays has multitude drawbacks. The process is time consuming and subjective. In this paper we discuss several digital image processing techniques applied in fracture detection of bone. This led us to study techniques that have been applied to images obtained from different modalities like x-ray, CT, MRI and ultrasound. Keywords— Fracture detection, Medical Imaging, Morphology, Tibia, X-ray image",
"title": ""
},
{
"docid": "5c76caebe05acd7d09e6cace0cac9fe1",
"text": "A program that detects people in images has a multitude of potential applications, including tracking for biomedical applications or surveillance, activity recognition for person-device interfaces (device control, video games), organizing personal picture collections, and much more. However, detecting people is difficult, as the appearance of a person can vary enormously because of changes in viewpoint or lighting, clothing style, body pose, individual traits, occlusion, and more. It then makes sense that the first people detectors were really detectors of pedestrians, that is, people walking at a measured pace on a sidewalk, and viewed from a fixed camera. Pedestrians are nearly always upright, their arms are mostly held along the body, and proper camera placement relative to pedestrian traffic can virtually ensure a view from the front or from behind (Figure 1). These factors reduce variation of appearance, although clothing, illumination, background, occlusions, and somewhat limited variations of pose still present very significant challenges.",
"title": ""
},
{
"docid": "e45204012e5a12504cbb4831c9b5d629",
"text": "The focus of this paper is the application of the theory of contingent tutoring to the design of a computer-based system designed to support learning in aspects of algebra. Analyses of interactions between a computer-based tutoring system and 42, 14and 15-year-old pupils are used to explore and explain the relations between individual dierences in learner±tutor interaction, learners' prior knowledge and learning outcomes. Parallels between the results of these analyses and empirical investigations of help seeking in adult±child tutoring are drawn out. The theoretical signi®cance of help seeking as a basis for studying the impact of individual learner dierences in the collaborative construction of `zones of proximal development' is assessed. In addition to demonstrating the signi®cance of detailed analyses of learner±system interaction as a basis for inferences about learning processes, the investigation also attempts to show the value of exploiting measures of on-line help seeking as a means of assessing learning transfer. Finally, the implications of the ®ndings for contingency theory are discussed, and the theoretical and practical bene®ts of integrating psychometric assessment, interaction process analyses, and knowledge-based learner modelling in the design and evaluation of computer-based tutoring are explored. # 2000 Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "11c4f0610d701c08516899ebf14f14c4",
"text": "Histone post-translational modifications impact many aspects of chromatin and nuclear function. Histone H4 Lys 20 methylation (H4K20me) has been implicated in regulating diverse processes ranging from the DNA damage response, mitotic condensation, and DNA replication to gene regulation. PR-Set7/Set8/KMT5a is the sole enzyme that catalyzes monomethylation of H4K20 (H4K20me1). It is required for maintenance of all levels of H4K20me, and, importantly, loss of PR-Set7 is catastrophic for the earliest stages of mouse embryonic development. These findings have placed PR-Set7, H4K20me, and proteins that recognize this modification as central nodes of many important pathways. In this review, we discuss the mechanisms required for regulation of PR-Set7 and H4K20me1 levels and attempt to unravel the many functions attributed to these proteins.",
"title": ""
},
{
"docid": "8a478da1c2091525762db35f1ac7af58",
"text": "In this paper, we present the design and performance of a portable, arbitrary waveform, multichannel constant current electrotactile stimulator that costs less than $30 in components. The stimulator consists of a stimulation controller and power supply that are less than half the size of a credit card and can produce ±15 mA at ±150 V. The design is easily extensible to multiple independent channels that can receive an arbitrary waveform input from a digital-to-analog converter, drawing only 0.9 W/channel (lasting 4–5 hours upon continuous stimulation using a 9 V battery). Finally, we compare the performance of our stimulator to similar stimulators both commercially available and developed in research.",
"title": ""
},
{
"docid": "3b1a7539000a8ddabdaa4888b8bb1adc",
"text": "This paper presents evaluations among the most usual maximum power point tracking (MPPT) techniques, doing meaningful comparisons with respect to the amount of energy extracted from the photovoltaic (PV) panel [tracking factor (TF)] in relation to the available power, PV voltage ripple, dynamic response, and use of sensors. Using MatLab/Simulink and dSPACE platforms, a digitally controlled boost dc-dc converter was implemented and connected to an Agilent Solar Array E4350B simulator in order to verify the analytical procedures. The main experimental results are presented for conventional MPPT algorithms and improved MPPT algorithms named IC based on proportional-integral (PI) and perturb and observe based on PI. Moreover, the dynamic response and the TF are also evaluated using a user-friendly interface, which is capable of online program power profiles and computes the TF. Finally, a typical daily insulation is used in order to verify the experimental results for the main PV MPPT methods.",
"title": ""
},
{
"docid": "32670b62c6f6e7fa698e00f7cf359996",
"text": "Four cases of self-poisoning with 'Roundup' herbicide are described, one of them fatal. One of the survivors had a protracted hospital stay and considerable clinical and laboratory detail is presented. Serious self-poisoning is associated with massive gastrointestinal fluid loss and renal failure. The management of such cases and the role of surfactant toxicity are discussed.",
"title": ""
},
{
"docid": "b3fdd9e446c427022eee637f62ffefa4",
"text": "Software maintenance constitutes a major phase of the software life cycle. Studies indicate that software maintenance is responsible for a significant percentage of a system’s overall cost and effort. The software engineering community has identified four major types of software maintenance, namely, corrective, perfective, adaptive, and preventive maintenance. Software maintenance can be seen from two major points of view. First, the classic view where software maintenance provides the necessary theories, techniques, methodologies, and tools for keeping software systems operational once they have been deployed to their operational environment. Most legacy systems subscribe to this view of software maintenance. The second view is a more modern emerging view, where maintenance is an integral part of the software development process and it should be applied from the early stages in the software life cycle. Regardless of the view by which we consider software maintenance, the fact is that it is the driving force behind software evolution, a very important aspect of a software system. This entry provides an in-depth discussion of software maintenance techniques, methodologies, tools, and emerging trends. Q1",
"title": ""
},
{
"docid": "8ddb7c62f032fb07116e7847e69b51d1",
"text": "Software requirements are the foundations from which quality is measured. Measurement enables to improve the software process; assist in planning, tracking and controlling the software project and assess the quality of the software thus produced. Quality issues such as accuracy, security and performance are often crucial to the success of a software system. Quality should be maintained from starting phase of software development. Requirements management, play an important role in maintaining quality of software. A project can deliver the right solution on time and within budget with proper requirements management. Software quality can be maintained by checking quality attributes in requirements document. Requirements metrics such as volatility, traceability, size and completeness are used to measure requirements engineering phase of software development lifecycle. Manual measurement is expensive, time consuming and prone to error therefore automated tools should be used. Automated requirements tools are helpful in measuring requirements metrics. The aim of this paper is to study, analyze requirements metrics and automated requirements tools, which will help in choosing right metrics to measure software development based on the evaluation of Automated Requirements Tools",
"title": ""
}
] |
scidocsrr
|
7fe3a246e760cdd7b610f041d15dcc16
|
A visual analytics process for maritime resource allocation and risk assessment
|
[
{
"docid": "1dbb3a49f6c0904be9760f877b7270b7",
"text": "We propose a geographical visualization to support operators of coastal surveillance systems and decision making analysts to get insights in vessel movements. For a possibly unknown area, they want to know where significant maritime areas, like highways and anchoring zones, are located. We show these features as an overlay on a map. As source data we use AIS data: Many vessels are currently equipped with advanced GPS devices that frequently sample the state of the vessels and broadcast them. Our visualization is based on density fields that are derived from convolution of the dynamic vessel positions with a kernel. The density fields are shown as illuminated height maps. Combination of two fields, with a large and small kernel provides overview and detail. A large kernel provides an overview of area usage revealing vessel highways. Details of speed variations of individual vessels are shown with a small kernel, highlighting anchoring zones where multiple vessels stop. Besides for maritime applications we expect that this approach is useful for the visualization of moving object data in general.",
"title": ""
}
] |
[
{
"docid": "960c8a216415307e81eec4e41950c7d2",
"text": "The explosive growth and widespread accessibility of digital health data have led to a surge of research activity in the healthcare and data sciences fields. The conventional approaches for health data management have achieved limited success as they are incapable of handling the huge amount of complex data with high volume, high velocity, and high variety. This article presents a comprehensive overview of the existing challenges, techniques, and future directions for computational health informatics in the big data age, with a structured analysis of the historical and state-of-the-art methods. We have summarized the challenges into four Vs (i.e., volume, velocity, variety, and veracity) and proposed a systematic data-processing pipeline for generic big data in health informatics, covering data capturing, storing, sharing, analyzing, searching, and decision support. Specifically, numerous techniques and algorithms in machine learning are categorized and compared. On the basis of this material, we identify and discuss the essential prospects lying ahead for computational health informatics in this big data age.",
"title": ""
},
{
"docid": "53adce741d07ad54c10eef30cca63db3",
"text": "A new method for deriving limb segment motion from markers placed on the skin is described. The method provides a basis for determining the artifact associated with nonrigid body movement of points placed on the skin. The method is based on a cluster of points uniformly distributed on the limb segment. Each point is assigned an arbitrary mass. The center of mass and the inertia tensor of this cluster of points are calculated. The eigenvalues and eigenvectors of the inertia tensor are used to define a coordinate system in the cluster as well as to provide a basis for evaluating non-rigid body movement. The eigenvalues of the inertia tensor remain invariant if the segment is behaving as a rigid body, thereby providing a basis for determining variations for nonrigid body movement. The method was tested in a simulation model where systematic and random errors were introduced into a fixed cluster of points. The simulation demonstrated that the error due to nonrigid body movement could be substantially reduced. The method was also evaluated in a group of ten normal subjects during walking. The results for knee rotation and translation obtained from the point cluster method compared favorably to results previously obtained from normal subjects with intra-cortical pins placed into the femur and tibia. The resulting methodology described in this paper provides a unique approach to the measurement of in vivo motion using skin-based marker systems.",
"title": ""
},
{
"docid": "c4b48cda893f15d9bd8ad5c213e3f3a2",
"text": "Modern-day computer power is a great servant for today’s information hungry society. The increasing pervasiveness of such powerful machinery greatly influences fundamental information processes such as, for instance, the acquisition of information, its storage, manipulation, retrieval, dissemination, or its usage. Information society depends on these fundamental information processes in various ways. This chapter investigates the diverse and dynamic relationship between information society and the fundamental information processes just mentioned from a modern technology perspective.",
"title": ""
},
{
"docid": "e3cd314541b852734ff133cbd9ca773a",
"text": "Time-triggered (TT) Ethernet is a novel communication system that integrates real-time and non-real-time traffic into a single communication architecture. A TT Ethernet system consists od a set of nodes interconnected by a specific switch called TT Ethernet switch. A node consist of a TT Ethernet communication controller that executes the TT Ethernet protocol and a host computer that executes the user application. The protocol distinguishes between event-triggered (ET) and time-triggered (TT) Ethernet traffic. Time-triggered traffic is scheduled and transmitted with a predictable transmission delay, whereas event-triggered traffic is transmitted on a best-effort basis. The event-triggered traffic in TT Ethernet is handled in conformance with the existing Ethernet standards of the IEEE. This paper presents the design of the TT Ethernet communication controller optimized to be implemented in hardware. The paper describes a prototypical implementation using a custom built hardware platform and presents the results of evaluation experiments.",
"title": ""
},
{
"docid": "3d8401d4801df090f8efebde1b4478ae",
"text": "Quick Response (QR) codes seem to appear everywhere these days. We can see them on posters, magazine ads, websites, product packaging and so on. Using the QR codes is one of the most intriguing ways of digitally connecting consumers to the internet via mobile phones since the mobile phones have become a basic necessity thing of everyone. In this paper, we present a methodology for creating QR codes by which the users enter text into a web browser and get the QR code generated. Drupal module was used in conjunction with the popular libqrencode C library to develop user interface on the web browser and encode data in a QR Code symbol. The experiment was conducted using single and multiple lines of text in both English and Thai languages. The result shows that all QR encoding outputs were successfully and correctly generated.",
"title": ""
},
{
"docid": "e08fc04b3ea61e9ffb88865caae6bb96",
"text": "The online medium has become a significant way that people express their opinions online. Sentiment analysis can be used to find out the polarity of an opinion, such as positive, negative, or neutral. Sentiment analysis has applications such as companies getting their customer's opinions on their products, political sentiment analysis, or opinions on movie reviews. Recent research has involved looking at text from online blogs, tweets, online movie reviews, etc. to try and classify the text as being positive, negative, or neutral. For this research, a feedforward neural network will be experimented with for sentiment analysis of tweets. The training set of tweets are collected using the Twitter API using positive and negative keywords. The testing set of tweets are collected using the same positive and negative keywords.",
"title": ""
},
{
"docid": "3460dbea27f1de0f13636c04bbfb2569",
"text": "The secret keys of critical network authorities -- such as time, name, certificate, and software update services -- represent high-value targets for hackers, criminals, and spy agencies wishing to use these keys secretly to compromise other hosts. To protect authorities and their clients proactively from undetected exploits and misuse, we introduce CoSi, a scalable witness cosigning protocol ensuring that every authoritative statement is validated and publicly logged by a diverse group of witnesses before any client will accept it. A statement S collectively signed by W witnesses assures clients that S has been seen, and not immediately found erroneous, by those W observers. Even if S is compromised in a fashion not readily detectable by the witnesses, CoSi still guarantees S's exposure to public scrutiny, forcing secrecy-minded attackers to risk that the compromise will soon be detected by one of the W witnesses. Because clients can verify collective signatures efficiently without communication, CoSi protects clients' privacy, and offers the first transparency mechanism effective against persistent man-in-the-middle attackers who control a victim's Internet access, the authority's secret key, and several witnesses' secret keys. CoSi builds on existing cryptographic multisignature methods, scaling them to support thousands of witnesses via signature aggregation over efficient communication trees. A working prototype demonstrates CoSi in the context of timestamping and logging authorities, enabling groups of over 8,000 distributed witnesses to cosign authoritative statements in under two seconds.",
"title": ""
},
{
"docid": "b8429d68d520906656bb612087b1bce6",
"text": "Saliva has been advocated as an alternative to serum or plasma for steroid monitoring. Little normative information is available concerning expected concentrations of the major reproductive steroids in saliva during pregnancy and the extended postpartum. Matched serum and saliva specimens controlled for time of day and collected less than 30 minutes apart were obtained in 28 women with normal singleton pregnancies between 32 and 38 weeks of gestation and in 43 women during the first six months postpartum. Concentrations of six steroids (estriol, estradiol, progesterone, testosterone, cortisol, dehydroepiandrosterone) were quantified in saliva by enzyme immunoassay. For most of the steroids examined, concentrations in antepartum saliva showed linear increases near end of gestation, suggesting an increase in the bioavailable hormone component. Observed concentrations were in agreement with the limited data available from previous reports. Modal concentrations of the ovarian steroids were undetectable in postpartum saliva and, when detectable in individual women, approximated early follicular phase values. Only low to moderate correlations between the serum and salivary concentrations were found, suggesting that during the peripartum period saliva provides information that is not redundant to serum. Low correlations in the late antepartum may be due to differential rates of change in the total and bioavailable fractions of the circulating steroid in the final weeks of the third trimester as a consequence of dynamic changes in carrier proteins such as corticosteroid binding globulin.",
"title": ""
},
{
"docid": "878bdefc419be3da8d9e18111d26a74f",
"text": "PURPOSE\nTo estimate prevalence and chronicity of insomnia and the impact of chronic insomnia on health and functioning of adolescents.\n\n\nMETHODS\nData were collected from 4175 youths 11-17 at baseline and 3134 a year later sampled from managed care groups in a large metropolitan area. Insomnia was assessed by youth-reported DSM-IV symptom criteria. Outcomes are three measures of somatic health, three measures of mental health, two measures of substance use, three measures of interpersonal problems, and three of daily activities.\n\n\nRESULTS\nOver one-fourth reported one or more symptoms of insomnia at baseline and about 5% met diagnostic criteria for insomnia. Almost 46% of those who reported one or more symptoms of insomnia in Wave 1 continued to be cases at Wave 2 and 24% met DSM-IV symptom criteria for chronic insomnia (cases in Wave 1 were also cases in Wave 2). Multivariate analyses found chronic insomnia increased subsequent risk for somatic health problems, interpersonal problems, psychological problems, and daily activities. Significant odds (p < .05) ranged from 1.6 to 5.6 for poor outcomes. These results are the first reported on chronic insomnia among youths, and corroborate, using prospective data, previous findings on correlates of disturbed sleep based on cross-sectional studies.\n\n\nCONCLUSIONS\nInsomnia is both common and chronic among adolescents. The data indicate that the burden of insomnia is comparable to that of other psychiatric disorders such as mood, anxiety, disruptive, and substance use disorders. Chronic insomnia severely impacts future health and functioning of youths. Those with chronic insomnia are more likely to seek medical care. These data suggest primary care settings might provide a venue for screening and early intervention for adolescent insomnia.",
"title": ""
},
{
"docid": "0b231777fedf27659b4558aaabb872be",
"text": "Recognizing multiple mixed group activities from one still image is not a hard problem for humans but remains highly challenging for computer recognition systems. When modelling interactions among multiple units (i.e., more than two groups or persons), the existing approaches tend to divide them into interactions between pairwise units. However, no mathematical evidence supports this transformation. Therefore, these approaches’ performance is limited on images containing multiple activities. In this paper, we propose a generative model to provide a more reasonable interpretation for the mixed group activities contained in one image. We design a four level structure and convert the original intra-level interactions into inter-level interactions, in order to implement both interactions among multiple groups and interactions among multiple persons within a group. The proposed four-level structure makes our model more robust against the occlusion and overlap of the visible poses in images. Experimental results demonstrate that our model makes good interpretations for mixed group activities and outperforms the state-of-the-art methods on the Collective Activity Classification dataset.",
"title": ""
},
{
"docid": "93ae39ed7b4d6b411a2deb9967e2dc7d",
"text": "This paper presents fundamental results about how zero-curvature (paper) surfaces behave near creases and apices of cones. These entities are natural generalizations of the edges and vertices of piecewise-planar surfaces. Consequently, paper surfaces may furnish a richer and yet still tractable class of surfaces for computer-aided design and computer graphics applications than do polyhedral surfaces.",
"title": ""
},
{
"docid": "a7e2c35ea12a06dbd31f839297efc535",
"text": "Lane classification is a fundamental problem for autonomous driving and map-aided localization. Many existing algorithms rely on special designed 1D or 2D filters to extract features of lane markings from either color images or LiDAR data. However, these handcrafted features could not be robust under various driving and lighting conditions.\n In this paper, we propose a novel algorithm to fuse color images and LiDAR data together. Our algorithm consists of two stages. In the first stage, we segment road surfaces and register LiDAR data with the corresponding color images. In the second stage, we train convolutional neural networks (CNNs) to classify image patches into lane markings and non-markings. Comparing with the algorithms based on handcrafted features, our algorithm learns a set of kernels to extract and integrate features from two different modalities. The pixel-level classification rate in our experiments shows that our algorithm is robust to different conditions such as shadows and occlusions.",
"title": ""
},
{
"docid": "1d73a95d11552bd661d1993dce7eabdc",
"text": "With the increasing amount of interconnections between vehicles, the attack surface of internal vehicle networks is rising steeply. Although these networks are shielded against external attacks, they often do not have any internal security to protect against malicious components or adversaries who can breach the network perimeter. To secure the in-vehicle network, all communicating components must be authenticated, and only authorized components should be allowed to send and receive messages. This is achieved through the use of an authentication framework. Cryptography is widely used to authenticate communicating parties and provide secure communication channels (e.g., Internet communication). However, the real-time performance requirements of in-vehicle networks restrict the types of cryptographic algorithms and protocols that may be used. In particular, asymmetric cryptography is computationally infeasible during vehicle operation.\n In this work, we address the challenges of designing authentication protocols for automotive systems. We present Lightweight Authentication for Secure Automotive Networks (LASAN), a full lifecycle authentication approach. We describe the core LASAN protocols and show how they protect the internal vehicle network while complying with the real-time constraints and low computational resources of this domain. By leveraging the fixed structure of automotive networks, we minimize bandwidth and computation requirements. Unlike previous work, we also explain how this framework can be integrated into all aspects of the automotive product lifecycle, including manufacturing, vehicle maintenance, and software updates. We evaluate LASAN in two different ways: First, we analyze the security properties of the protocols using established protocol verification techniques based on formal methods. Second, we evaluate the timing requirements of LASAN and compare these to other frameworks using a new highly modular discrete event simulator for in-vehicle networks, which we have developed for this evaluation.",
"title": ""
},
{
"docid": "c9e87ff548ae938c1dbab1528cb550ac",
"text": "Due to their many advantages over their hardwarebased counterparts, Software Defined Radios are becoming the new paradigm for radio and radar applications. In particular, Automatic Dependent Surveillance-Broadcast (ADS-B) is an emerging software defined radar technology, which has been already deployed in Europe and Australia. Deployment in the US is underway as part of the Next Generation Transportation Systems (NextGen). In spite of its several benefits, this technology has been widely criticized for being designed without security in mind, making it vulnerable to numerous attacks. Most approaches addressing this issue fail to adopt a holistic viewpoint, focusing only on part of the problem. In this paper, we propose a methodology that uses semantic technologies to address the security requirements definition from a systemic perspective. More specifically, knowledge engineering focused on misuse scenarios is applied for building customized resilient software defined radar applications, as well as classifying cyber attack severity according to measurable security metrics. We showcase our ideas using an ADS-B-related scenario developed to evaluate",
"title": ""
},
{
"docid": "eb3a07c2295ba09c819c7a998b2fb337",
"text": "Recent advances have demonstrated the potential of network MIMO (netMIMO), which combines a practical number of distributed antennas as a virtual netMIMO AP (nAP) to improve spatial multiplexing of an WLAN. Existing solutions, however, either simply cluster nearby antennas as static nAPs, or dynamically cluster antennas on a per-packet basis so as to maximize the sum rate of the scheduled clients. To strike the balance between the above two extremes, in this paper, we present the design, implementation and evaluation of FlexNEMO, a practical two-phase netMIMO clustering system. Unlike previous per-packet clustering approaches, FlexNEMO only clusters antennas when client distribution and traffic pattern change, as a result being more practical to be implemented. A medium access control protocol is then designed to allow the clients at the center of nAPs to have a higher probability to gain access opportunities, but still ensure long-term fairness among clients. By combining on-demand clustering and priority-based access control, FlexNEMO not only improves antenna utilization, but also optimizes the channel condition for every individual client. We evaluated our design via both testbed experiments on USRPs and trace-driven emulations. The results demonstrate that FlexNEMO can deliver 94.7% and 93.7% throughput gains over static antenna clustering in a 4-antenna testbed and 16-antenna emulation, respectively.",
"title": ""
},
{
"docid": "644d2fcc7f2514252c2b9da01bb1ef42",
"text": "We now described an interesting application of SVD to text do cuments. Suppose we represent documents as a bag of words, soXij is the number of times word j occurs in document i, for j = 1 : W andi = 1 : D, where W is the number of words and D is the number of documents. To find a document that contains a g iven word, we can use standard search procedures, but this can get confuse d by ynonomy (different words with the same meaning) andpolysemy (same word with different meanings). An alternative approa ch is to assume that X was generated by some low dimensional latent representation X̂ ∈ IR, whereK is the number of latent dimensions. If we compare documents in the latent space, we should get improved retrie val performance, because words of similar meaning get mapped to similar low dimensional locations. We can compute a low dimensional representation of X by computing the SVD, and then taking the top k singular values/ vectors: 1",
"title": ""
},
{
"docid": "56fb6fe1f6999b5d7a9dab19e8b877ef",
"text": "Low-cost consumer depth cameras and deep learning have enabled reasonable 3D hand pose estimation from single depth images. In this paper, we present an approach that estimates 3D hand pose from regular RGB images. This task has far more ambiguities due to the missing depth information. To this end, we propose a deep network that learns a network-implicit 3D articulation prior. Together with detected keypoints in the images, this network yields good estimates of the 3D pose. We introduce a large scale 3D hand pose dataset based on synthetic hand models for training the involved networks. Experiments on a variety of test sets, including one on sign language recognition, demonstrate the feasibility of 3D hand pose estimation on single color images.",
"title": ""
},
{
"docid": "8e896b9006ecc82fcfa4f6905a3dc5ae",
"text": "In this paper, we present a generalized Wishart classifier derived from a non-Gaussian model for polarimetric synthetic aperture radar (PolSAR) data. Our starting point is to demonstrate that the scale mixture of Gaussian (SMoG) distribution model is suitable for modeling PolSAR data. We show that the distribution of the sample covariance matrix for the SMoG model is given as a generalization of the Wishart distribution and present this expression in integral form. We then derive the closed-form solution for one particular SMoG distribution, which is known as the multivariate K-distribution. Based on this new distribution for the sample covariance matrix, termed as the K -Wishart distribution, we propose a Bayesian classification scheme, which can be used in both supervised and unsupervised modes. To demonstrate the effect of including non-Gaussianity, we present a detailed comparison with the standard Wishart classifier using airborne EMISAR data.",
"title": ""
},
{
"docid": "be5b0dd659434e77ce47034a51fd2767",
"text": "Current obstacles in the study of social media marketing include dealing with massive data and real-time updates have motivated to contribute solutions that can be adopted for viral marketing. Since information diffusion and social networks are the core of viral marketing, this article aims to investigate the constellation of diffusion methods for viral marketing. Studies on diffusion methods for viral marketing have applied different computational methods, but a systematic investigation of these methods has limited. Most of the literature have focused on achieving objectives such as influence maximization or community detection. Therefore, this article aims to conduct an in-depth review of works related to diffusion for viral marketing. Viral marketing has applied to business-to-consumer transactions but has seen limited adoption in business-to-business transactions. The literature review reveals a lack of new diffusion methods, especially in dynamic and large-scale networks. It also offers insights into applying various mining methods for viral marketing. It discusses some of the challenges, limitations, and future research directions of information diffusion for viral marketing. The article also introduces a viral marketing information diffusion model. The proposed model attempts to solve the dynamicity and large-scale data of social networks by adopting incremental clustering and a stochastic differential equation for business-to-business transactions. Keywords—information diffusion; viral marketing; social media marketing; social networks",
"title": ""
},
{
"docid": "b2abd93f4e580ee2e0304432b69f4ae7",
"text": "In this work, we present a Reinforcement Learning (RL) based approach for autonomous driving in highway scenarios, including interaction with other vehicles. The method used is Fitted Q-iteration [1] with Extremely Randomized Trees [2] as a function approximator. We demonstrate that Reinforcement Learning based concepts can be successfully applied and can be used to teach a RL agent to drive autonomously in an intelligent way, by following traffic rules and ensuring safety. By combining RL with the already established control concepts, we managed to build an agent that achieved promising results in the realistic simulated environment.",
"title": ""
}
] |
scidocsrr
|
edb18dc00ca80ac5b300d6e85bf8de94
|
An SMDP-Based Resource Allocation in Vehicular Cloud Computing Systems
|
[
{
"docid": "2e17c3a27d381728be7868aaf2a86281",
"text": "With the proliferation of automobile industry, vehicles are augmented with various forms of increasingly powerful computation, communication, storage and sensing resources. A vehicle therefore can be regarded as “computer-on-wheels”. With such rich resources, it is of great significance to efficiently utilize these resources. This puts forward the vision of vehicular cloud computing. In this paper, we provide an extensive survey of current vehicular cloud computing research and highlight several key issues of vehicular cloud such as architecture, inherent features, service taxonomy and potential applications.",
"title": ""
}
] |
[
{
"docid": "f23bde650be816fdca4594c180c47309",
"text": "Indian economy highly depends on agricultural productivity. An important role is played by the detection of disease to obtain a perfect results in agriculture, and it is natural to have disease in plants. Proper care should be taken in this area for product quality and quantity. To reduce the large amount of monitoring in field automatic detection techniques can be used. This paper discuss different processes for segmentation technique which can be applied for different lesion disease detection. Thresholding and K-means cluster algorithms are done to detect different diseases in plant leaf.",
"title": ""
},
{
"docid": "f7234ac6791f66ed9f9bae55f129bff8",
"text": "How can one summarize a massive data set \"on the fly\", i.e., without even having seen it in its entirety? In this paper, we address the problem of extracting representative elements from a large stream of data. I.e., we would like to select a subset of say k data points from the stream that are most representative according to some objective function. Many natural notions of \"representativeness\" satisfy submodularity, an intuitive notion of diminishing returns. Thus, such problems can be reduced to maximizing a submodular set function subject to a cardinality constraint. Classical approaches to submodular maximization require full access to the data set. We develop the first efficient streaming algorithm with constant factor 1/2-ε approximation guarantee to the optimum solution, requiring only a single pass through the data, and memory independent of data size. In our experiments, we extensively evaluate the effectiveness of our approach on several applications, including training large-scale kernel methods and exemplar-based clustering, on millions of data points. We observe that our streaming method, while achieving practically the same utility value, runs about 100 times faster than previous work.",
"title": ""
},
{
"docid": "a21513f9cf4d5a0e6445772941e9fba2",
"text": "Superficial dorsal penile vein thrombosis was diagnosed 8 times in 7 patients between 19 and 40 years old (mean age 27 years). All patients related the onset of the thrombosis to vigorous sexual intercourse. No other etiological medications, drugs or constricting devices were implicated. Three patients were treated acutely with anti-inflammatory medications, while 4 were managed expectantly. The mean interval to resolution of symptoms was 7 weeks. Followup ranged from 3 to 30 months (mean 11) at which time all patients noticed normal erectile function. Only 1 patient had recurrent thrombosis 3 months after the initial episode, again related to intercourse. We conclude that this is a benign self-limited condition. Anti-inflammatory agents are useful for acute discomfort but they do not affect the rate of resolution.",
"title": ""
},
{
"docid": "c42591e5bb7b4f940a8f505f9f09fe7a",
"text": "Unterstützungssysteme für die Programmierausbildung sind weit verbreitet, doch gängige Standards für den Austausch von allgemeinen (Lern-) Inhalten und Tests erfüllen nicht die speziellen Anforderungen von Programmieraufgaben wie z. B. den Umgang mit komplexen Einreichungen aus mehreren Dateien oder die Kombination verschiedener (automatischer) Bewertungsverfahren. Dadurch können Aufgaben nicht zwischen Systemen ausgetauscht werden, was aufgrund des hohen Aufwands für die Entwicklung guter Aufgaben jedoch wünschenswert wäre. In diesem Beitrag wird ein erweiterbares XMLbasiertes Format zum Austausch von Programmieraufgaben vorgestellt, das bereits von mehreren Systemen prototypisch genutzt wird und das mittelfristig den Austausch über ein gemeinsam genutztes Aufgabenrepository ermöglichen soll. 1 Einleitung und Problemstellung Der Bedarf für Lehrende, Aufgaben für den Einsatz in verschiedenen Lernszenarien aus einem Pool von bestehenden Testinhalten zu nutzen, wächst mit steigendem EAssessment Einsatz an Hochschulen. Dies gilt aufgrund des vergleichsweise hohen Entwicklungsaufwands im besonderen Maße für Programmieraufgaben. Dabei handelt es sich um offene Aufgabentypen, da es wichtig ist, dass die Lernenden eigene Lösungen entwickeln und diese geprüft werden können [Ro07]. Neben der textuellen Beschreibung der jeweiligen Programmieraufgabe sind zur Vorbereitung der (automatisierten) Bewertung oder Generierung von Feedback in der Regel abhängig vom Programmbewertungssystem Tests, wie im Falle der Programmiersprache Java z. B. JUnit-Tests, zu implementieren oder auch im Falle von z. B. SQL oder Prolog Musterlösungen vorzubereiten. Der erforderliche, partiell sehr hohe Aufwand bei der Erstellung derartiger Aufgaben kann deutlich reduziert werden, wenn Testinhalte systemunabhängig genutzt und zwischen den Lehrenden ausgetauscht werden können. Obwohl seit deutlich mehr als zehn Jahren an Unterstützungssystemen für die Programmierausbildung bzw. (automatischen) Bewertungssystemen für unterschiedliche Programmiersprachen gearbeitet wird [KSZ02, Ed04, Hü05, Sp06, Mo07, Tr07, ASS08, HQW08, SBG09, Ih10, SOP11, PJR12, RRH12], gibt es bislang kein gemeinsames Austauschformat. Trotz des großen Aufwands für die Konzeption von (Übungs-)Aufgaben besitzen nur wenige Systeme, die in der Programmierausbildung eingesetzt werden (z. B. DUESIE, eAIXESSOR, JACK), einen auf Wiederverwendung ausgerichteten Aufgabenpool oder streben diesen an. Insbesondere bei Systemen, von denen mehrere Instanzen existieren, könnte zumindest eine Import/Export-Funktion innerhalb dieser Systeme den Arbeitsaufwand durch Wiederverwendung von Aufgaben reduzieren. Dennoch ist ein Imbzw. Export von Aufgaben bislang nur selten möglich (JACK, Praktomat). In anderen Bereichen (z. B. Mathematik) gibt es hingegen Learning Management Systeme (LMS, z. B. LONCAPA), bei denen der (weltweite) Austausch einen zentralen Systembestandteil darstellt. Im Folgenden wird ein Ansatz für ein gemeinsames Austauschformat vorgestellt, das genau diese Lücke für Programmieraufgaben systemübergreifend schließen soll. 2 Anforderungen an ein Austauschformat und existierende Formate In diesem Abschnitt werden anhand von im Einsatz befindlichen Systemen Anforderungen an ein systemübergreifendes Austauschformat hergeleitet, existierende Austauschformate vorgestellt und mit den hergeleiteten Anforderungen verglichen. 
2.1 Anforderungen an ein Austauschformat für Programmieraufgaben Programmieraufgaben können in sehr verschiedener Form und sehr unterschiedlichem Umfang gestellt werden: Im einfachsten Fall müssen möglicherweise nur wenige Zeilen Programmcode in einem vorgegebenen Codegerüst ergänzt werden, während in komplexen Fällen zahlreiche Klassen zu erstellen sind, die vorgegebene Schnittstellen realisieren oder selber vorgegebene Schnittstellen ansprechen. Zudem kann es je nach Programmiersprache und Entwicklungsumgebung Verzeichnisund Dateistrukturen geben, die von den Lernenden beachtet werden müssen. Anders als bei geschlossenen Fragetypen, bei denen die korrekten Antworten vorab bekannt sind und die Korrektheit einer Einreichung über einen Soll-Ist-Vergleich bestimmt werden kann, gehören Programmieraufgaben, sofern es sich nicht um sehr einfache Fill-in-the-Gap-Aufgaben handelt, zu den offenen Fragetypen, bei denen eine Einreichung nach verschiedenen Kriterien beurteilt werden kann. Hier kann beispielsweise eine statische Analyse des Quellcodes erfolgen (mit verschiedenen Zielen, z. B. Kompilierfähigkeit, Verwendung bestimmter Konstrukte, Einhaltung eines Programmierstils). Zudem sind dynamische Analysen (z. B. Unitoder Blackbox-Tests), mit denen die Korrektheit eines eingereichten Programms überprüft werden kann, denkbar [Al05]. Für die Durchführung derartiger Analysen werden möglicherweise verschiedene Ausführungsumgebungen oder zusätzliche Dateien benötigt, die für die Lernenden nicht",
"title": ""
},
{
"docid": "439f938d155d9ac44c1aa0981a7c7fe6",
"text": "We present a novel method for constructing Variational Autoencoder (VAE). Instead of using pixel-by-pixel loss, we enforce deep feature consistency between the input and the output of a VAE, which ensures the VAE's output to preserve the spatial correlation characteristics of the input, thus leading the output to have a more natural visual appearance and better perceptual quality. Based on recent deep learning works such as style transfer, we employ a pre-trained deep convolutional neural network (CNN) and use its hidden features to define a feature perceptual loss for VAE training. Evaluated on the CelebA face dataset, we show that our model produces better results than other methods in the literature. We also show that our method can produce latent vectors that can capture the semantic information of face expressions and can be used to achieve state-of-the-art performance in facial attribute prediction.",
"title": ""
},
{
"docid": "00575265d0a6338e3eeb23d234107206",
"text": "We introduce the concept of mode-k generalized eigenvalues and eigenvectors of a tensor and prove some properties of such eigenpairs. In particular, we derive an upper bound for the number of equivalence classes of generalized tensor eigenpairs using mixed volume. Based on this bound and the structures of tensor eigenvalue problems, we propose two homotopy continuation type algorithms to solve tensor eigenproblems. With proper implementation, these methods can find all equivalence classes of isolated generalized eigenpairs and some generalized eigenpairs contained in the positive dimensional components (if there are any). We also introduce an algorithm that combines a heuristic approach and a Newton homotopy method to extract real generalized eigenpairs from the found complex generalized eigenpairs. A MATLAB software package TenEig has been developed to implement these methods. Numerical results are presented to illustrate the effectiveness and efficiency of TenEig for computing complex or real generalized eigenpairs.",
"title": ""
},
{
"docid": "e0d42be891c0278360aad3c07a3f3a8f",
"text": "In this article we compare and integrate two well-established approaches to motivating therapeutic change, namely self-determination theory (SDT; Deci & Ryan, 1985, ) and motivational interviewing (MI; Miller & Rollnick, 1991, ). We show that SDT's theoretical focus on the internalization of therapeutic change and on the issue of need-satisfaction is fully compatible with key principles and clinical strategies within MI. We further suggest that basic need-satisfaction might be an important mechanism accounting for the positive effects of MI. Conversely, MI principles may provide SDT researchers with new insight into the application of SDT's theoretical concept of autonomy-support, and suggest new ways of testing and developing SDT. In short, the applied approach of MI and the theoretical approach of SDT might be fruitfully married, to the benefit of both.",
"title": ""
},
{
"docid": "ff09a72b95fbf3522d4df0f275fb5c3a",
"text": "This paper provides a general overview of solid waste data and management practices employed in Turkey during the last decade. Municipal solid waste statistics and management practices including waste recovery and recycling initiatives have been evaluated. Detailed data on solid waste management practices including collection, recovery and disposal, together with the results of cost analyses, have been presented. Based on these evaluations basic cost estimations on collection and sorting of recyclable solid waste in Turkey have been provided. The results indicate that the household solid waste generation in Turkey, per capita, is around 0.6 kg/year, whereas municipal solid waste generation is close to 1 kg/year. The major constituents of municipal solid waste are organic in nature and approximately 1/4 of municipal solid waste is recyclable. Separate collection programmes for recyclable household waste by more than 60 municipalities, continuing in excess of 3 years, demonstrate solid evidence for public acceptance and continuing support from the citizens. Opinion polls indicate that more than 80% of the population in the project regions is ready and willing to participate in separate collection programmes. The analysis of output data of the Material Recovery Facilities shows that, although paper, including cardboard, is the main constituent, the composition of recyclable waste varies strongly by the source or the type of collection point.",
"title": ""
},
{
"docid": "77872917746b9d177273f178f6e6b0e4",
"text": "Ultra wideband (UWB) technology based primarily on the impulse radio paradigm has a huge potential for revolutionizing the world of digital communications especially wireless communications. UWB provides the integrated capabilities of data communications, advanced radar and precision tracking, location, imperceptibility and low power operation. It is therefore ideally suited for the development of robust and rapid wireless networks in complex and hostile environments. The distinct physical layer properties of the UWB technology warrants efficient design of medium access control (MAC) protocols. This paper introduces the unique UWB physical characteristics compared to the existing wireless technologies and discusses current research on MAC protocols for UWB. This report surveys most of the MAC protocols proposed so far for UWB, and may instigate further activities on this important and evolving technology.",
"title": ""
},
{
"docid": "6de2b5fa5c8d3db9f9d599b6ebb56782",
"text": "Extreme sensitivity of soil organic carbon (SOC) to climate and land use change warrants further research in different terrestrial ecosystems. The aim of this study was to investigate the link between aggregate and SOC dynamics in a chronosequence of three different land uses of a south Chilean Andisol: a second growth Nothofagus obliqua forest (SGFOR), a grassland (GRASS) and a Pinus radiataplantation (PINUS). Total carbon content of the 0–10 cm soil layer was higher for GRASS (6.7 kg C m −2) than for PINUS (4.3 kg C m−2), while TC content of SGFOR (5.8 kg C m−2) was not significantly different from either one. High extractable oxalate and pyrophosphate Al concentrations (varying from 20.3–24.4 g kg −1, and 3.9– 11.1 g kg−1, respectively) were found in all sites. In this study, SOC and aggregate dynamics were studied using size and density fractionation experiments of the SOC, δ13C and total carbon analysis of the different SOC fractions, and C mineralization experiments. The results showed that electrostatic sorption between and among amorphous Al components and clay minerals is mainly responsible for the formation of metal-humus-clay complexes and the stabilization of soil aggregates. The process of ligand exchange between SOC and Al would be of minor importance resulting in the absence of aggregate hierarchy in this soil type. Whole soil C mineralization rate constants were highest for SGFOR and PINUS, followed by GRASS (respectively 0.495, 0.266 and 0.196 g CO 2-C m−2 d−1 for the top soil layer). In contrast, incubation experiments of isolated macro organic matter fractions gave opposite results, showing that the recalcitrance of the SOC decreased in another order: PINUS>SGFOR>GRASS. We deduced that electrostatic sorption processes and physical protection of SOC in soil aggregates were the main processes determining SOC stabilization. As a result, high aggregate carbon concentraCorrespondence to: D. Huygens (dries.huygens@ugent.be) tions, varying from 148 till 48 g kg −1, were encountered for all land use sites. Al availability and electrostatic charges are dependent on pH, resulting in an important influence of soil pH on aggregate stability. Recalcitrance of the SOC did not appear to largely affect SOC stabilization. Statistical correlations between extractable amorphous Al contents, aggregate stability and C mineralization rate constants were encountered, supporting this hypothesis. Land use changes affected SOC dynamics and aggregate stability by modifying soil pH (and thus electrostatic charges and available Al content), root SOC input and management practices (such as ploughing and accompanying drying of the soil).",
"title": ""
},
{
"docid": "0660b717561bedaa8d6da4f59266fabe",
"text": "Printed quasi-Yagi antennas [1] have been used in a number of applications requiring broad-band planar end-fire antennas. So far they have been mostly realized on high dielectric constant substrates with moderate thickness in order to excite the TE0 surface wave along the dielectric substrate. An alternative design of a printed Yagi-Uda antenna, developed on a low dielectric constant material, was presented in [2]. In this design, an additional director and a reflector were used to increase the gain of the antenna. However the achieved bandwidth of the antenna is quite narrow (about 3–4%) compared to the bandwidth of a quasi-Yagi antenna fabricated on a high dielectric constant substrate [1]. Another disadvantage of a conventional quasi-Yagi antenna fabricated on a low dielectric permittivity substrate is that the length of the driver is increased and it is difficult to achieve 0.5 λ0 spacing between the elements required for scanning arrays, where λ0 corresponds to a free-space wavelength at the center frequency of the antenna.",
"title": ""
},
{
"docid": "150ad4c49d10be14bf2f1a653a245498",
"text": "Code quality metrics are widely used to identify design flaws (e.g., code smells) as well as to act as fitness functions for refactoring recommenders. Both these applications imply a strong assumption: quality metrics are able to assess code quality as perceived by developers. Indeed, code smell detectors and refactoring recommenders should be able to identify design flaws/recommend refactorings that are meaningful from the developer's point-of-view. While such an assumption might look reasonable, there is limited empirical evidence supporting it. We aim at bridging this gap by empirically investigating whether quality metrics are able to capture code quality improvement as perceived by developers. While previous studies surveyed developers to investigate whether metrics align with their perception of code quality, we mine commits in which developers clearly state in the commit message their aim of improving one of four quality attributes: cohesion, coupling, code readability, and code complexity. Then, we use state-of-the-art metrics to assess the change brought by each of those commits to the specific quality attribute it targets. We found that, more often than not the considered quality metrics were not able to capture the quality improvement as perceived by developers (e.g., the developer states \"improved the cohesion of class C\", but no quality metric captures such an improvement).",
"title": ""
},
{
"docid": "6058813ab7c5a2504faea224b9f32bba",
"text": "LinkedIn, with over 1.5 million Groups, has become a popular place for business employees to create private groups to exchange information and communicate. Recent research on social networking sites (SNSs) has widely explored the phenomenon and its positive effects on firms. However, social networking’s negative effects on information security were not adequately addressed. Supported by the credibility, persuasion and motivation theories, we conducted 1) a field experiment, demonstrating how sensitive organizational data can be exploited, followed by 2) a qualitative study of employees engaged in SNSs activities; and 3) interviews with Chief Information Security Officers (CISOs). Our research has resulted in four main findings: 1) employees are easily deceived and susceptible to victimization on SNSs where contextual elements provide psychological triggers to attackers; 2) organizations lack mechanisms to control SNS online security threats, 3) companies need to strengthen their information security policies related to SNSs, where stronger employee identification and authentication is needed, and 4) SNSs have become important security holes where, with the use of social engineering techniques, malicious attacks are easily facilitated.",
"title": ""
},
{
"docid": "2b34e56c74396b968591dcc7cb839b10",
"text": "Robust Principal Component Analysis (RPCA) via rank minimization is a powerful tool for recovering underlying low-rank structure of clean data corrupted with sparse noise/outliers. In many low-level vision problems, not only it is known that the underlying structure of clean data is low-rank, but the exact rank of clean data is also known. Yet, when applying conventional rank minimization for those problems, the objective function is formulated in a way that does not fully utilize a priori target rank information about the problems. This observation motivates us to investigate whether there is a better alternative solution when using rank minimization. In this paper, instead of minimizing the nuclear norm, we propose to minimize the partial sum of singular values, which implicitly encourages the target rank constraint. Our experimental analyses show that, when the number of samples is deficient, our approach leads to a higher success rate than conventional rank minimization, while the solutions obtained by the two approaches are almost identical when the number of samples is more than sufficient. We apply our approach to various low-level vision problems, e.g., high dynamic range imaging, motion edge detection, photometric stereo, image alignment and recovery, and show that our results outperform those obtained by the conventional nuclear norm rank minimization method.",
"title": ""
},
{
"docid": "f465475eb7bb52d455e3ed77b4808d26",
"text": "Background Long-term dieting has been reported to reduce resting energy expenditure (REE) leading to weight regain once the diet has been curtailed. Diets are also difficult to follow for a significant length of time. The purpose of this preliminary proof of concept study was to examine the effects of short-term intermittent dieting during exercise training on REE and weight loss in overweight women.",
"title": ""
},
{
"docid": "114f23172377fadf945b7a7632908ae0",
"text": "Scene understanding is an important prerequisite for vehicles and robots that operate autonomously in dynamic urban street scenes. For navigation and high-level behavior planning, the robots not only require a persistent 3D model of the static surroundings-equally important, they need to perceive and keep track of dynamic objects. In this paper, we propose a method that incrementally fuses stereo frame observations into temporally consistent semantic 3D maps. In contrast to previous work, our approach uses scene flow to propagate dynamic objects within the map. Our method provides a persistent 3D occupancy as well as semantic belief on static as well as moving objects. This allows for advanced reasoning on objects despite noisy single-frame observations and occlusions. We develop a novel approach to discover object instances based on the temporally consistent shape, appearance, motion, and semantic cues in our maps. We evaluate our approaches to dynamic semantic mapping and object discovery on the popular KITTI benchmark and demonstrate improved results compared to single-frame methods.",
"title": ""
},
{
"docid": "6f1669cf7fe464c42b5cb0d68efb042e",
"text": "BACKGROUND\nLevine and Drennan described the tibial metaphyseal-diaphyseal angle (MDA) in an attempt to identify patients with infantile Blount's disease. Pediatric orthopaedic surgeons have debated not only the use, but also the reliability of this measure. Two techniques have been described to measure the MDA. These techniques involved using both the lateral border of the tibial cortex and the center of the tibial shaft as the longitudinal axis for radiographic measurements. The use of digital images poses another variable in the reliability of the MDA as digital images are used more commonly.\n\n\nMETHODS\nThe radiographs of 21 children (42 limbs) were retrospectively reviewed by 27 staff pediatric orthopaedic surgeons. Interobserver reliability was determined using the intraclass correlation coefficients (ICCs). Nine duplicate radiographs (18 duplicate limbs) that appeared in the data set were used to calculate ICCs representing the intraobserver reliability. A scatter plot was created comparing the mean MDA determined by the 2 methods. The strength of a linear relationship between the 2 methods was measured with the Pearson correlation coefficient. Finally, we tested for a difference in variability between the 2 measures at angles of 11 degrees or less and greater than 11 degrees by comparing the variance ratios using the F test.\n\n\nRESULTS\nThe interobserver reliability was calculated using the ICC as 0.821 for the single-measure method and 0.992 for the average-measure method. The intraobserver reliability was similarly calculated using the ICC as 0.886 for the single-measure method and 0.940 for the average-measure method. Pearson correlation coefficient (0.9848) revealed a highly linear relationship between the 2 methods (P = 0.00001). We also found that there was no statistically significant variability between the 2 methods of calculating the MDA at angles of 11 degrees or less compared with angles greater than 11 degrees (P = 0.596688).\n\n\nCONCLUSIONS\nThere was excellent interobserver reliability and intraobserver reliability among reviewers. Using either the lateral diaphyseal line or center diaphyseal line produces reasonable reliability with no significant variability at angles of 11 degrees or less or greater than 11 degrees.\n\n\nLEVEL OF EVIDENCE\nLevel IV.",
"title": ""
},
{
"docid": "a698752bf7cf82e826848582816b1325",
"text": "The incidence and context of stotting were studied in Thomson's gazelles. Results suggested that gazelles were far more likely to stot in response to coursing predators, such as wild dogs, than they were to stalking predators, such as cheetahs. During hunts, gazelles that wild dogs selected stotted at lower rates than those they did not select. In addition, those which were chased, but which outran the predators, were more likely to stot, and stotted for longer durations, than those which were chased and killed. In response to wild dogs, gazelles in the dry season, which were probably in poor condition, were less likely to stot, and stotted at lower rates, than those in the wet season. We suggest that stotting could be an honest signal of a gazelle's ability to outrun predators, which coursers take into account when selecting prey.",
"title": ""
},
{
"docid": "3ecd1c083d256c7fd88991f1e442cb8b",
"text": "It has long been observed that database management systems focus on traditional business applications, and that few people use a database management system outside their workplace. Many have wondered what it will take to enable the use of data management technology by a broader class of users and for a much wider range of applications.\n Google Fusion Tables represents an initial answer to the question of how data management functionality that focused on enabling new users and applications would look in today's computing environment. This paper characterizes such users and applications and highlights the resulting principles, such as seamless Web integration, emphasis on ease of use, and incentives for data sharing, that underlie the design of Fusion Tables. We describe key novel features, such as the support for data acquisition, collaboration, visualization, and web-publishing.",
"title": ""
},
{
"docid": "ddd40af5cb0ed773a9db8f5584cb561e",
"text": "The aims of the present study were to examine: 1) the validity and reliability of a new timing system to assess running kinematics during change of direction (COD), and 2) the determinants of COD-speed. Twelve young soccer players performed three 20-m sprints, either in straight line or with one 45oor 90oCOD. Sprints were monitored using timing gates and two synchronized 100-Hz laser guns, to track players’ velocities before, during and after the COD. The validity analysis revealed trivial-to-small biases and smallto-moderate typical errors of the estimate with the lasers compared with the timing gates. The reliability was variable-dependent, with trivial(distance at peak speed) to-large (distance at peak deceleration) typical errors. Kinematic variables were angle-dependent, with likely lower peak speed, almost-certainly slower minimum speed during the COD and almost-certainly greater deceleration reached for 90o-COD vs. 45oCOD sprints. The minimum speed during the COD was largely correlated with sprint performance for both sprint angles. Correlations with most of the other independent variables were unclear. The new timing system showed acceptable levels of validity and reliability to assess some of the selected running kinematics during COD sprints. The ability to maintain a high speed during the COD may be the determinant of COD-speed.",
"title": ""
}
] |
scidocsrr
|
cc2f7f19bfa1b6cc7a99cfcc8e50bbeb
|
Varying Linguistic Purposes of Emoji in (Twitter) Context
|
[
{
"docid": "911ea52fa57524e002154e2fe276ac44",
"text": "Many current natural language processing applications for social media rely on representation learning and utilize pre-trained word embeddings. There currently exist several publicly-available, pre-trained sets of word embeddings, but they contain few or no emoji representations even as emoji usage in social media has increased. In this paper we release emoji2vec, pre-trained embeddings for all Unicode emoji which are learned from their description in the Unicode emoji standard.1 The resulting emoji embeddings can be readily used in downstream social natural language processing applications alongside word2vec. We demonstrate, for the downstream task of sentiment analysis, that emoji embeddings learned from short descriptions outperforms a skip-gram model trained on a large collection of tweets, while avoiding the need for contexts in which emoji need to appear frequently in order to estimate a representation.",
"title": ""
},
{
"docid": "16a0750449d0c01080740588e73c2a5e",
"text": "Emojis are a quickly spreading and rather unknown communication phenomenon which occasionally receives attention in the mainstream press, but lacks the scientific exploration it deserves. This paper is a first attempt at investigating the global distribution of emojis. We perform our analysis of the spatial distribution of emojis on a dataset of ∼17 million (and growing) geo-encoded tweets containing emojis by running a cluster analysis over countries represented as emoji distributions and performing correlation analysis of emoji distributions and World Development Indicators. We show that emoji usage tends to draw quite a realistic picture of the living conditions in various parts of our world.",
"title": ""
}
] |
[
{
"docid": "6adb3d2e49fa54679c4fb133a992b4f7",
"text": "Kathleen McKeown1, Hal Daume III2, Snigdha Chaturvedi2, John Paparrizos1, Kapil Thadani1, Pablo Barrio1, Or Biran1, Suvarna Bothe1, Michael Collins1, Kenneth R. Fleischmann3, Luis Gravano1, Rahul Jha4, Ben King4, Kevin McInerney5, Taesun Moon6, Arvind Neelakantan8, Diarmuid O’Seaghdha7, Dragomir Radev4, Clay Templeton3, Simone Teufel7 1Columbia University, 2University of Maryland, 3University of Texas at Austin, 4University of Michigan, 5Rutgers University, 6IBM, 7Cambridge University, 8University of Massachusetts at Amherst",
"title": ""
},
{
"docid": "607247339e5bb0299f06db3104deef77",
"text": "This paper discusses the advantages of using the ACT-R cognitive architecture over the Prolog programming language for the research and development of a large-scale, functional, cognitively motivated model of natural language analysis. Although Prolog was developed for Natural Language Processing (NLP), it lacks any probabilistic mechanisms for dealing with ambiguity and relies on failure detection and algorithmic backtracking to explore alternative analyses. These mechanisms are problematic for handling ill-formed or unexpected inputs, often resulting in an exploration of the entire search space, which becomes intractable as the complexity and variability of the allowed inputs and corresponding grammar grow. By comparison, ACT-R provides context dependent and probabilistic mechanisms which allow the model to incrementally pursue the best analysis. When combined with a nonmonotonic context accommodation mechanism that supports modest adjustment of the evolving analysis to handle cases where the locally best analysis is not globally preferred, the result is an efficient pseudo-deterministic mechanism that obviates the need for failure detection and backtracking, aligns with our basic understanding of Human Language Processing (HLP) and is scalable to broad coverage. The successful transition of the natural language analysis model from Prolog to ACT-R suggests that a cognitively motivated approach to natural language analysis may also be suitable for achieving a functional capability.",
"title": ""
},
{
"docid": "dce63433a9900b9b4e6d9d420713b38d",
"text": "Pathogenic microorganisms must cope with extremely low free-iron concentrations in the host's tissues. Some fungal pathogens rely on secreted haemophores that belong to the Common in Fungal Extracellular Membrane (CFEM) protein family, to extract haem from haemoglobin and to transfer it to the cell's interior, where it can serve as a source of iron. Here we report the first three-dimensional structure of a CFEM protein, the haemophore Csa2 secreted by Candida albicans. The CFEM domain adopts a novel helical-basket fold that consists of six α-helices, and is uniquely stabilized by four disulfide bonds formed by its eight signature cysteines. The planar haem molecule is bound between a flat hydrophobic platform located on top of the helical basket and a peripheral N-terminal ‘handle’ extension. Exceptionally, an aspartic residue serves as the CFEM axial ligand, and so confers coordination of Fe3+ haem, but not of Fe2+ haem. Histidine substitution mutants of this conserved Asp acquired Fe2+ haem binding and retained the capacity to extract haem from haemoglobin. However, His-substituted CFEM proteins were not functional in vivo and showed disturbed haem exchange in vitro, which suggests a role for the oxidation-state-specific Asp coordination in haem acquisition by CFEM proteins.",
"title": ""
},
{
"docid": "2cea3c0621b1ac332a6eb305661c077b",
"text": "Testing of network protocols and distributed applications has become increasingly complex, as the diversity of networks and underlying technologies increase, and the adaptive behavior of applications becomes more sophisticated. In this paper, we present NIST Net, a tool to facilitate testing and experimentation with network code through emulation. NIST Net enables experimenters to model and effect arbitrary performance dynamics (packet delay, jitter, bandwidth limitations, congestion, packet loss and duplication) on live IP packets passing through a commodity Linux-based PC router. We describe the emulation capabilities of NIST Net; examine its architecture; and discuss some of the implementation challenges encountered in building such a tool to operate at very high network data rates while imposing minimal processing overhead. Calibration results are provided to quantify the fidelity and performance of NIST Net over a wide range of offered loads (up to 1 Gbps), and a diverse set of emulated performance dynamics.",
"title": ""
},
{
"docid": "2897897e683e94b921799e72ebf99b4a",
"text": "Recurrent neural networks have achieved excellent performance in many applications. However, on portable devices with limited resources, the models are often too large to deploy. For applications on the server with large scale concurrent requests, the latency during inference can also be very critical for costly computing resources. In this work, we address these problems by quantizing the network, both weights and activations, into multiple binary codes {−1,+1}. We formulate the quantization as an optimization problem. Under the key observation that once the quantization coefficients are fixed the binary codes can be derived efficiently by binary search tree, alternating minimization is then applied. We test the quantization for two well-known RNNs, i.e., long short term memory (LSTM) and gated recurrent unit (GRU), on the language models. Compared with the full-precision counter part, by 2-bit quantization we can achieve ∼16× memory saving and ∼6× real inference acceleration on CPUs, with only a reasonable loss in the accuracy. By 3-bit quantization, we can achieve almost no loss in the accuracy or even surpass the original model, with ∼10.5× memory saving and ∼3× real inference acceleration. Both results beat the exiting quantization works with large margins. We extend our alternating quantization to image classification tasks. In both RNNs and feedforward neural networks, the method also achieves excellent performance.",
"title": ""
},
{
"docid": "34e544af5158850b7119ac4f7c0b7b5e",
"text": "Over the last decade, the surprising fact has emerged that machines can possess therapeutic power. Due to the many healing qualities of touch, one route to such power is through haptic emotional interaction, which requires sophisticated touch sensing and interpretation. We explore the development of touch recognition technologies in the context of a furry artificial lap-pet, with the ultimate goal of creating therapeutic interactions by sensing human emotion through touch. In this work, we build upon a previous design for a new type of fur-based touch sensor. Here, we integrate our fur sensor with a piezoresistive fabric location/pressure sensor, and adapt the combined design to cover a curved creature-like object. We then use this interface to collect synchronized time-series data from the two sensors, and perform machine learning analysis to recognize 9 key affective touch gestures. In a study of 16 participants, our model averages 94% recognition accuracy when trained on individuals, and 86% when applied to the combined set of all participants. The model can also recognize which participant is touching the prototype with 79% accuracy. These results promise a new generation of emotionally intelligent machines, enabled by affective touch gesture recognition.",
"title": ""
},
{
"docid": "760edd83045a80dbb2231c0ffbef2ea7",
"text": "This paper proposes a method to modify a traditional convolutional neural network (CNN) into an interpretable CNN, in order to clarify knowledge representations in high conv-layers of the CNN. In an interpretable CNN, each filter in a high conv-layer represents a specific object part. Our interpretable CNNs use the same training data as ordinary CNNs without a need for any annotations of object parts or textures for supervision. The interpretable CNN automatically assigns each filter in a high conv-layer with an object part during the learning process. We can apply our method to different types of CNNs with various structures. The explicit knowledge representation in an interpretable CNN can help people understand the logic inside a CNN, i.e. what patterns are memorized by the CNN for prediction. Experiments have shown that filters in an interpretable CNN are more semantically meaningful than those in a traditional CNN. The code is available at https://github.com/zqs1022/interpretableCNN.",
"title": ""
},
{
"docid": "84037cd25cb12f6f823da8170a843f75",
"text": "This paper presents a topology-based representation dedicated to complex indoor scenes. It accounts for memory management and performances during modelling, visualization and lighting simulation. We propose to enlarge a topological model (called generalized maps) with multipartition and hierarchy. Multipartition allows the user to group objects together according to semantics. Hierarchy provides a coarse-to-fine description of the environment. The topological model we propose has been used for devising a modeller prototype and generating efficient data structure in the context of visualization, global illumination and 1 GHz wave propagation simulation. We presently handle buildings composed of up to one billion triangles.",
"title": ""
},
{
"docid": "211b2a146aba4161aac649551ad613f6",
"text": "Rapid technological advances have led to the production of different types of biological data and enabled construction of complex networks with various types of interactions between diverse biological entities. Standard network data analysis methods were shown to be limited in dealing with such heterogeneous networked data and consequently, new methods for integrative data analyses have been proposed. The integrative methods can collectively mine multiple types of biological data and produce more holistic, systems-level biological insights. We survey recent methods for collective mining (integration) of various types of networked biological data. We compare different state-of-the-art methods for data integration and highlight their advantages and disadvantages in addressing important biological problems. We identify the important computational challenges of these methods and provide a general guideline for which methods are suited for specific biological problems, or specific data types. Moreover, we propose that recent non-negative matrix factorization-based approaches may become the integration methodology of choice, as they are well suited and accurate in dealing with heterogeneous data and have many opportunities for further development.",
"title": ""
},
{
"docid": "a51b57427c5204cb38483baa9389091f",
"text": "Cross-laminated timber (CLT), a new generation of engineered wood product developed initially in Europe, has been gaining popularity in residential and non-residential applications in several countries. Numerous impressive lowand mid-rise buildings built around the world using CLT showcase the many advantages that this product can offer to the construction sector. This article provides basic information on the various attributes of CLT as a product and as structural system in general, and examples of buildings made of CLT panels. A road map for codes and standards implementation of CLT in North America is included, along with an indication of some of the obstacles that can be expected.",
"title": ""
},
{
"docid": "c69ce70eebe0a3dd89a66b0a9d599019",
"text": "In this paper by utilizing the capabilities of modern ubiquitous operating systems we introduce a comprehensive framework for a ubiquitous translation and language learning environment for English to Sanskrit Machine Translation. We present an application for learning Sanskrit characters, sentences and English Sanskrit translation. For the implementation, we have used the open-source Android platform on the Samsung Mini2440, a state-of-the-art development board. We present our current state of implementation, the architecture of our framework,and the findings we have gathered so far. In addition to this, here we describes the Phrase-Based Statistical Machine Translation Decoder for English to Sanskrit translation in ubiquitous environment. Our goal is to improve the translation quality by enhancing the translation table and by preprocessing the Sanskrit language text .",
"title": ""
},
{
"docid": "29f820ea99905ad1ee58eb9d534c89ab",
"text": "Basic results in the rigorous theory of weighted dynamical zeta functions or dynamically defined generalized Fredholm determinants are presented. Analytic properties of the zeta functions or determinants are related to statistical properties of the dynamics via spectral properties of dynamical transfer operators, acting on Banach spaces of observables.",
"title": ""
},
{
"docid": "3aa58539c69d6706bc0a9ca0256cdf80",
"text": "BACKGROUND\nAcne vulgaris is a prevalent skin disorder impairing both physical and psychosocial health. This study was designed to investigate the effectiveness of photodynamic therapy (PDT) combined with minocycline in moderate to severe facial acne and influence on quality of life (QOL).\n\n\nMETHODS\nNinety-five patients with moderate to severe facial acne (Investigator Global Assessment [IGA] score 3-4) were randomly treated with PDT and minocycline (n = 48) or minocycline alone (n = 47). All patients took minocycline hydrochloride 100 mg/d for 4 weeks, whereas patients in the minocycline plus PDT group also received 4 times PDT treatment 1 week apart. IGA score, lesion counts, Dermatology Life Quality Index (DLQI), and safety evaluation were performed before treatment and at 2, 4, 6, and 8 weeks after enrolment.\n\n\nRESULTS\nThere were no statistically significant differences in characteristics between 2 treatment groups at baseline. Minocycline plus PDT treatment led to a greater mean percentage reduction from baseline in lesion counts versus minocycline alone at 8 weeks for both inflammatory (-74.4% vs -53.3%; P < .001) and noninflammatory lesions (-61.7% vs -42.4%; P < .001). More patients treated with minocycline plus PDT achieved IGA score <2 at study end (week 8: 30/48 vs 20/47; P < .05). Patients treated with minocycline plus PDT got significant lower DLQI at 8 weeks (4.4 vs 6.3; P < .001). Adverse events were mild and manageable.\n\n\nCONCLUSIONS\nCompared with minocycline alone, the combination of PDT with minocycline significantly improved clinical efficacy and QOL in moderate to severe facial acne patients.",
"title": ""
},
{
"docid": "0beec77d16aae48a2679be775f8116b1",
"text": "The aim of the study was to compare fertility potential in patients who had been operated upon in childhood because of unilateral or bilateral cryptorchidism. The study covered 68 men (age 25–30 years) with a history of unilateral (49) or bilateral orchidopexy (Mandat et al. in Eur J Pediatr Surg 4:94–97, 1994). Fertility potential was estimated with semen analysis (sperm concentration, motility and morphology), testicular volume measurement and hormonal status evaluation [follicle-stimulating hormone (FSH) and inhibin B levels]. Differences were analysed with the nonparametric Mann–Whitney test. The group of subjects with bilateral orchidopexy had significantly decreased sperm concentration (P = 0.047), sperm motility (P = 0.003), inhibin B level (P = 0.036) and testicular volume (P = 0.040), compared to subjects with unilateral orchidopexy. In the group with bilateral orchidopexy, there was a strong negative correlation between inhibin B and FSH levels (P < 0.001, r s = −0.772). Sperm concentration in this group correlated positively with inhibin B level (P = 0.004, r s = 0.627) and negatively with FSH level (P = 0.04, r s = −0.435). The group of subjects with unilateral orchidopexy who had been operated before the age of 8 years had significantly increased inhibin B level (P = 0.006) and testicular volume (P = 0.007) and decreased FSH level (P = 0.01), compared to subjects who had been operated at the age of 8 or later. Men who underwent bilateral orchidopexy in their childhood have appreciably poorer prognosis for fertility compared to men who underwent a unilateral procedure. Our study also confirmed that men who underwent unilateral orchidopexy in their childhood before the age of 8 years have better prognosis for fertility compared to those who were operated later.",
"title": ""
},
{
"docid": "71ac019a7305529bd353ddca8b4573ef",
"text": "In this paper we will discuss progress in the area of thread scheduling for multiprocessors, including systems which a re Chip-MultiProcessors (CMP), can perform Simultaneous Mul tiThreading (SMT), and/or support multiple threads to execute in parallel. The reviewed papers approach thread sched uling from the aspects of resource utilization, thread priori ty, Operating System (OS) effects, and interrupts. The metrics used by the discussed papers will be summarized.",
"title": ""
},
{
"docid": "2cebd2fd12160d2a3a541989293f10be",
"text": "A compact Vivaldi antenna array printed on thick substrate and fed by a Substrate Integrated Waveguides (SIW) structure has been developed. The antenna array utilizes a compact SIW binary divider to significantly minimize the feed structure insertion losses. The low-loss SIW binary divider has a common novel Grounded Coplanar Waveguide (GCPW) feed to provide a wideband transition to the SIW and to sustain a good input match while preventing higher order modes excitation. The antenna array was designed, fabricated, and thoroughly investigated. Detailed simulations of the antenna and its feed, in addition to its relevant measurements, will be presented in this paper.",
"title": ""
},
{
"docid": "85693811a951a191d573adfe434e9b18",
"text": "Diagnosing problems in data centers has always been a challenging problem due to their complexity and heterogeneity. Among recent proposals for addressing this challenge, one promising approach leverages provenance, which provides the fundamental functionality that is needed for performing fault diagnosis and debugging—a way to track direct and indirect causal relationships between system states and their changes. This information is valuable, since it permits system operators to tie observed symptoms of a faults to their potential root causes. However, capturing provenance in a data center is challenging because, at high data rates, it would impose a substantial cost. In this paper, we introduce techniques that can help with this: We show how to reduce the cost of maintaining provenance by leveraging structural similarities for compression, and by offloading expensive but highly parallel operations to hardware. We also discuss our progress towards transforming provenance into compact actionable diagnostic decisions to repair problems caused by misconfigurations and program bugs.",
"title": ""
},
{
"docid": "b2a2fdf56a79c1cb82b8b3a55b9d841d",
"text": "This paper describes the architecture and implementation of a shortest path processor, both in reconfigurable hardware and VLSI. This processor is based on the principles of recurrent spatiotemporal neural network. The processor’s operation is similar to Dijkstra’s algorithm and it can be used for network routing calculations. The objective of the processor is to find the least cost path in a weighted graph between a given node and one or more destinations. The digital implementation exhibits a regular interconnect structure and uses simple processing elements, which is well suited for VLSI implementation and reconfigurable hardware.",
"title": ""
},
{
"docid": "a94f4add9893057509a8bafeb8ec698b",
"text": "Advances in software defined radio (SDR) technology allow unprecedented control on the entire processing chain, allowing modification of each functional block as well as sampling the changes in the input waveform. This article describes a method for uniquely identifying a specific radio among nominally similar devices using a combination of SDR sensing capability and machine learning (ML) techniques. The key benefit of this approach is that ML operates on raw I/Q samples and distinguishes devices using only the transmitter hardware-induced signal modifications that serve as a unique signature for a particular device. No higher-level decoding, feature engineering, or protocol knowledge is needed, further mitigating challenges of ID spoofing and coexistence of multiple protocols in a shared spectrum. The contributions of the article are as follows: (i) The operational blocks in a typical wireless communications processing chain are modified in a simulation study to demonstrate RF impairments, which we exploit. (ii) Using an overthe- air dataset compiled from an experimental testbed of SDRs, an optimized deep convolutional neural network architecture is proposed, and results are quantitatively compared with alternate techniques such as support vector machines and logistic regression. (iii) Research challenges for increasing the robustness of the approach, as well as the parallel processing needs for efficient training, are described. Our work demonstrates up to 90-99 percent experimental accuracy at transmitter- receiver distances varying between 2-50 ft over a noisy, multi-path wireless channel.",
"title": ""
},
{
"docid": "c5d9b3cf2332e06c883dc2f41e0f2ae8",
"text": "We assess the reliability of isobaric-tags for relative and absolute quantitation (iTRAQ), based on different types of replicate analyses taking into account technical, experimental, and biological variations. In total, 10 iTRAQ experiments were analyzed across three domains of life involving Saccharomyces cerevisiae KAY446, Sulfolobus solfataricus P2, and Synechocystis sp. PCC 6803. The coverage of protein expression of iTRAQ analysis increases as the variation tolerance increases. In brief, a cutoff point at +/-50% variation (+/-0.50) would yield 88% coverage in quantification based on an analysis of biological replicates. Technical replicate analysis produces a higher coverage level of 95% at a lower cutoff point of +/-30% variation. Experimental or iTRAQ variations exhibit similar behavior as biological variations, which suggest that most of the measurable deviations come from biological variations. These findings underline the importance of replicate analysis as a validation tool and benchmarking technique in protein expression analysis.",
"title": ""
}
] |
scidocsrr
|
dc3319cf70205698dba85808a1e7a37e
|
Videolization: knowledge graph based automated video generation from web content
|
[
{
"docid": "9d918a69a2be2b66da6ecf1e2d991258",
"text": "We designed and implemented TAGME, a system that is able to efficiently and judiciously augment a plain-text with pertinent hyperlinks to Wikipedia pages. The specialty of TAGME with respect to known systems [5,8] is that it may annotate texts which are short and poorly composed, such as snippets of search-engine results, tweets, news, etc.. This annotation is extremely informative, so any task that is currently addressed using the bag-of-words paradigm could benefit from using this annotation to draw upon (the millions of) Wikipedia pages and their inter-relations.",
"title": ""
},
{
"docid": "40ec8caea52ba75a6ad1e100fb08e89a",
"text": "Disambiguating concepts and entities in a context sensitive way is a fundamental problem in natural language processing. The comprehensiveness of Wikipedia has made the online encyclopedia an increasingly popular target for disambiguation. Disambiguation to Wikipedia is similar to a traditional Word Sense Disambiguation task, but distinct in that the Wikipedia link structure provides additional information about which disambiguations are compatible. In this work we analyze approaches that utilize this information to arrive at coherent sets of disambiguations for a given document (which we call “global” approaches), and compare them to more traditional (local) approaches. We show that previous approaches for global disambiguation can be improved, but even then the local disambiguation provides a baseline which is very hard to beat.",
"title": ""
},
{
"docid": "1aa51d3ef39773eb3250564ae87c6205",
"text": "relatedness between terms using the links found within their corresponding Wikipedia articles. Unlike other techniques based on Wikipedia, WLM is able to provide accurate measures efficiently, using only the links between articles rather than their textual content. Before describing the details, we first outline the other systems to which it can be compared. This is followed by a description of the algorithm, and its evaluation against manually-defined ground truth. The paper concludes with a discussion of the strengths and weaknesses of the new approach. Abstract",
"title": ""
}
] |
[
{
"docid": "9d0c8bbf1156c84d0100dc3b6c0b57dd",
"text": "Purpose – The paper aims to look at some of the problems commonly associated with qualitative methodologies, suggesting that there is a need for a more rigorous application in order to develop theory and aid effective decision making. Design/methodology/approach – The paper examines three qualitative methodologies: grounded theory, ethnography, and phenomenology. It compares and contrasts their approaches to data collection and interpretation and highlights some of the strengths and weaknesses associated with each one. Findings – The paper suggests that, while qualitative methodologies, as opposed to qualitative methods, are now an accepted feature of consumer research, their application in the truest sense is still in its infancy within the broader field of marketing. It proposes a number of possible contexts that may benefit from in-depth qualitative enquiry. Originality/value – The paper should be of interest to marketers considering adopting a qualitative perspective, possibly for the first time, as it offers a snap-shot of three widely-used methodologies, their associated procedures and potential pitfalls.",
"title": ""
},
{
"docid": "ebc77c29a8f761edb5e4ca588b2e6fb5",
"text": "Gigantomastia by definition means bilateral benign progressive breast enlargement to a degree that requires breast reduction surgery to remove more than 1800 g of tissue on each side. It is seen at puberty or during pregnancy. The etiology for this condition is still not clear, but surgery remains the mainstay of treatment. We present a unique case of Gigantomastia, which was neither related to puberty nor pregnancy and has undergone three operations so far for recurrence.",
"title": ""
},
{
"docid": "83c87294c33601023fdd0624d2dacecc",
"text": "In modern road surveys, hanging power cables are among the most commonly-found geometric features. These cables are catenary curves that are conventionally modelled with three parameters in 2D Cartesian space. With the advent and popularity of the mobile mapping system (MMS), the 3D point clouds of hanging power cables can be captured within a short period of time. These point clouds, similarly to those of planar features, can be used for feature-based self-calibration of the system assembly errors of an MMS. However, to achieve this, a well-defined 3D equation for the catenary curve is needed. This paper proposes three 3D catenary curve models, each having different parameters. The models are examined by least squares fitting of simulated data and real data captured with an MMS. The outcome of the fitting is investigated in terms of the residuals and correlation matrices. Among the proposed models, one of them could estimate the parameters accurately and without any extreme correlation between the variables. This model can also be applied to those transmission lines captured by airborne laser scanning or any other hanging cable-like objects.",
"title": ""
},
{
"docid": "3da6c20ba154de6fbea24c3cbb9c8ebb",
"text": "The tourism industry is characterized by ever-increasing competition, causing destinations to seek new methods to attract tourists. Traditionally, a decision to visit a destination is interpreted, in part, as a rational calculation of the costs/benefits of a set of alternative destinations, which were derived from external information sources, including e-WOM (word-of-mouth) or travelers' blogs. There are numerous travel blogs available for people to share and learn about travel experiences. Evidence shows, however, that not every blog exerts the same degree of influence on tourists. Therefore, which characteristics of these travel blogs attract tourists' attention and influence their decisions, becomes an interesting research question. Based on the concept of information relevance, a model is proposed for interrelating various attributes specific to blog's content and perceived enjoyment, an intrinsic motivation of information systems usage, to mitigate the above-mentioned gap. Results show that novelty, understandability, and interest of blogs' content affect behavioral intention through blog usage enjoyment. Finally, theoretical and practical implications are proposed. Tourism is a popular activity in modern life and has contributed significantly to economic development for decades. However, competition in almost every sector of this industry has intensified during recent years & Pan, 2008); tourism service providers are now finding it difficult to acquire and keep customers (Echtner & Ritchie, 1991; Ho, 2007). Therefore, methods of attracting tourists to a destination are receiving greater attention from researchers, policy makers, and marketers. Before choosing a destination, tourists may search for information to support their decision-making By understanding the relationships between various information sources' characteristics and destination choice, tourism managers can improve their marketing efforts. Recently, personal blogs have become an important source for acquiring travel information With personal blogs, many tourists can share their travel experiences with others and potential tourists can search for and respond to others' experiences. Therefore, a blog can be seen as an asynchronous and many-to-many channel for conveying travel-related electronic word-of-mouth (e-WOM). By using these forms of inter-personal influence media, companies in this industry can create a competitive advantage (Litvin et al., 2008; Singh et al., 2008). Weblogs are now widely available; therefore, it is not surprising that the quantity of available e-WOM has increased (Xiang & Gret-zel, 2010) to an extent where information overload has become a Empirical evidence , however, indicates that people may not consult numerous blogs for advice; the degree of inter-personal influence varies from blog to blog (Zafiropoulos, 2012). Determining …",
"title": ""
},
{
"docid": "8bb30efa3f14fa0860d1e5bc1265c988",
"text": "The introduction of microgrids in distribution networks based on power electronics facilitates the use of renewable energy resources, distributed generation (DG) and storage systems while improving the quality of electric power and reducing losses thus increasing the performance and reliability of the electrical system, opens new horizons for microgrid applications integrated into electrical power systems. The hierarchical control structure consists of primary, secondary, and tertiary levels for microgrids that mimic the behavior of the mains grid is reviewed. The main objective of this paper is to give a description of state of the art for the distributed power generation systems (DPGS) based on renewable energy and explores the power converter connected in parallel to the grid which are distinguished by their contribution to the formation of the grid voltage and frequency and are accordingly classified in three classes. This analysis is extended focusing mainly on the three classes of configurations grid-forming, grid-feeding, and gridsupporting. The paper ends up with an overview and a discussion of the control structures and strategies to control distribution power generation system (DPGS) units connected to the network. Keywords— Distributed power generation system (DPGS); hierarchical control; grid-forming; grid-feeding; grid-supporting. Nomenclature Symbols id − iq Vd − Vq P Q ω E f U",
"title": ""
},
{
"docid": "6db42ad3f24c25d17e0de0bdc5a62104",
"text": "Robust low-level image features have proven to be effective representations for a variety of high-level visual recognition tasks, such as object recognition and scene classification. But as the visual recognition tasks become more challenging, the semantic gap between low-level feature representation and the meaning of the scenes increases. In this paper, we propose to use objects as attributes of scenes for scene classification. We represent images by collecting their responses to a large number of object detectors, or “object filters”. Such representation carries high-level semantic information rather than low-level image feature information, making it more suitable for high-level visual recognition tasks. Using very simple, off-the-shelf classifiers such as SVM, we show that this object-level image representation can be used effectively for high-level visual tasks such as scene classification. Our results are superior to reported state-of-the-art performance on a number of standard datasets.",
"title": ""
},
{
"docid": "7941642359c725a96847c012aa11a84e",
"text": "We study nonconvex finite-sum problems and analyze stochastic variance reduced gradient (SVRG) methods for them. SVRG and related methods have recently surged into prominence for convex optimization given their edge over stochastic gradient descent (SGD); but their theoretical analysis almost exclusively assumes convexity. In contrast, we obtain non-asymptotic rates of convergence of SVRG for nonconvex optimization, showing that it is provably faster than SGD and gradient descent. We also analyze a subclass of nonconvex problems on which SVRG attains linear convergence to the global optimum. We extend our analysis to mini-batch variants, showing (theoretical) linear speedup due to minibatching in parallel settings.",
"title": ""
},
{
"docid": "bca27f6e44d64824a0be41d5f2beea4d",
"text": "In Infrastructure-as-a-Service (IaaS) clouds, intrusion detection systems (IDSes) increase their importance. To securely detect attacks against virtual machines (VMs), IDS offloading with VM introspection (VMI) has been proposed. In semi-trusted clouds, however, it is difficult to securely offload IDSes because there may exist insiders such as malicious system administrators. First, secure VM execution cannot coexist with IDS offloading although it has to be enabled to prevent information leakage to insiders. Second, offloaded IDSes can be easily disabled by insiders. To solve these problems, this paper proposes IDS remote offloading with remote VMI. Since IDSes can run at trusted remote hosts outside semi-trusted clouds, they cannot be disabled by insiders in clouds. Remote VMI enables IDSes at remote hosts to introspect VMs via the trusted hypervisor inside semi-trusted clouds. Secure VM execution can be bypassed by performing VMI in the hypervisor. Remote VMI preserves the integrity and confidentiality of introspected data between the hypervisor and remote hosts. The integrity of the hypervisor can be guaranteed by various existing techniques. We have developed RemoteTrans for remotely offloading legacy IDSes and confirmed that RemoteTrans could achieve surprisingly efficient execution of legacy IDSes at remote hosts.",
"title": ""
},
{
"docid": "00547f45936c7cea4b7de95ec1e0fbcd",
"text": "With the emergence of the Internet of Things (IoT) and Big Data era, many applications are expected to assimilate a large amount of data collected from environment to extract useful information. However, how heterogeneous computing devices of IoT ecosystems can execute the data processing procedures has not been clearly explored. In this paper, we propose a framework which characterizes energy and performance requirements of the data processing applications across heterogeneous devices, from a server in the cloud and a resource-constrained gateway at edge. We focus on diverse machine learning algorithms which are key procedures for handling the large amount of IoT data. We build analytic models which automatically identify the relationship between requirements and data in a statistical way. The proposed framework also considers network communication cost and increasing processing demand. We evaluate the proposed framework on two heterogenous devices, a Raspberry Pi and a commercial Intel server. We show that the identified models can accurately estimate performance and energy requirements with less than error of 4.8% for both platforms. Based on the models, we also evaluate whether the resource-constrained gateway can process the data more efficiently than the server in the cloud. The results present that the less-powerful device can achieve better energy and performance efficiency for more than 50% of machine learning algorithms.",
"title": ""
},
{
"docid": "e60d699411055bf31316d468226b7914",
"text": "Tabular data is difficult to analyze and to search through, yielding for new tools and interfaces that would allow even non tech-savvy users to gain insights from open datasets without resorting to specialized data analysis tools and without having to fully understand the dataset structure. The goal of our demonstration is to showcase answering natural language questions from tabular data, and to discuss related system configuration and model training aspects. Our prototype is publicly available and open-sourced (see demo )",
"title": ""
},
{
"docid": "fa2c86d4c0716580415fce8db324fd04",
"text": "One of the key elements in describing a software development method is the roles that are assigned to the members of the software team. This article describes our experience in assigning roles to students who are involved in the development of software projects, working in Extreme Programming teams. This experience, which is based on 25 such projects, teaches us that a personal role for each teammate increases personal responsibility while maintaining the essence of the software development method. In this paper we discuss ways in which different software development methods address the place of roles in a software development team. We also share our experience in refining role specifications and suggest a way to achieve and measure progress by using the perspective of the different roles.",
"title": ""
},
{
"docid": "1670dda371458257c8f86390b398b3f8",
"text": "Latent topic model such as Latent Dirichlet Allocation (LDA) has been designed for text processing and has also demonstrated success in the task of audio related processing. The main idea behind LDA assumes that the words of each document arise from a mixture of topics, each of which is a multinomial distribution over the vocabulary. When applying the original LDA to process continuous data, the wordlike unit need be first generated by vector quantization (VQ). This data discretization usually results in information loss. To overcome this shortage, this paper introduces a new topic model named GaussianLDA for audio retrieval. In the proposed model, we consider continuous emission probability, Gaussian instead of multinomial distribution. This new topic model skips the vector quantization and directly models each topic as a Gaussian distribution over audio features. It avoids discretization by this way and integrates the procedure of clustering. The experiments of audio retrieval demonstrate that GaussianLDA achieves better performance than other compared methods. & 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e0b3ef309047e59849d5f4381603b378",
"text": "Thermistor characteristic equation directly determines the temperature measurement accuracy. Three fitting equation of NTC thermistors and their corresponding mathematic solutions are introduced. An adaptive algorithm based on cross-validation is proposed to determine the degree of chebyshev polynomials equation. The experiment indicates that the method of least squares for Steinhart-Hart equation and chebyshev polynomials equation has higher accuracy, and the equation determined by adaptive algorithm for the chebyshev polynomials method has better performance.",
"title": ""
},
{
"docid": "46e0cfd4cb292331cb1f6a746a3ed3b7",
"text": "Indoor human tracking is fundamental to many real-world applications such as security surveillance, behavioral analysis, and elderly care. Previous solutions usually require dedicated device being carried by the human target, which is inconvenient or even infeasible in scenarios such as elderly care and break-ins. However, compared with device-based tracking, device-free tracking is particularly challenging because the much weaker reflection signals are employed for tracking. The problem becomes even more difficult with commodity Wi-Fi devices, which have limited number of antennas, small bandwidth size, and severe hardware noise.\n In this work, we propose IndoTrack, a device-free indoor human tracking system that utilizes only commodity Wi-Fi devices. IndoTrack is composed of two innovative methods: (1) Doppler-MUSIC is able to extract accurate Doppler velocity information from noisy Wi-Fi Channel State Information (CSI) samples; and (2) Doppler-AoA is able to determine the absolute trajectory of the target by jointly estimating target velocity and location via probabilistic co-modeling of spatial-temporal Doppler and AoA information. Extensive experiments demonstrate that IndoTrack can achieve a 35cm median error in human trajectory estimation, outperforming the state-of-the-art systems and provide accurate location and velocity information for indoor human mobility and behavioral analysis.",
"title": ""
},
{
"docid": "5bccbca8ab1f586defa9f6c0922dcc32",
"text": "A number of companies and standards development organizations have, since 2000, been producing products and standards for \"time-sensitive networks\" to support real-time applications that require a) zero packet loss due to buffer congestion, b) extremely low packet loss due to equipment failure, and c) guaranteed upper bounds on end-to-end latency. Often, a robust capability for time synchronization to less than 1 μs is also required. These networks consist of specially-featured bridges that are interconnected using standard Ethernet links with standard MAC/PHY layers. Since 2012, this technology has advanced to the use of routers, as well as bridges, and features of interest to time-sensitive networks have been added to both Ethernet and wireless standards.",
"title": ""
},
{
"docid": "e9017607252973b36f9d4c3c659fe858",
"text": "In this paper, we address the problem of retrospectively pruning decision trees induced from data, according to a topdown approach. This problem has received considerable attention in the areas of pattern recognition and machine learning, and many distinct methods have been proposed in literature. We make a comparative study of six well-known pruning methods with the aim of understanding their theoretical foundations, their computational complexity, and the strengths and weaknesses of their formulation. Comments on the characteristics of each method are empirically supported. In particular, a wide experimentation performed on several data sets leads us to opposite conclusions on the predictive accuracy of simplified trees from some drawn in the literature. We attribute this divergence to differences in experimental designs. Finally, we prove and make use of a property of the reduced error pruning method to obtain an objective evaluation of the tendency to overprune/underprune observed in each method. Index Terms —Decision trees, top-down induction of decision trees, simplification of decision trees, pruning and grafting operators, optimal pruning, comparative studies. —————————— ✦ ——————————",
"title": ""
},
{
"docid": "b277765cf0ced8162b6f05cc8f91fb71",
"text": "Questions and their corresponding answers within a community based question answering (CQA) site are frequently presented as top search results forWeb search queries and viewed by millions of searchers daily. The number of answers for CQA questions ranges from a handful to dozens, and a searcher would be typically interested in the different suggestions presented in various answers for a question. Yet, especially when many answers are provided, the viewer may not want to sift through all answers but to read only the top ones. Prior work on answer ranking in CQA considered the qualitative notion of each answer separately, mainly whether it should be marked as best answer. We propose to promote CQA answers not only by their relevance to the question but also by the diversification and novelty qualities they hold compared to other answers. Specifically, we aim at ranking answers by the amount of new aspects they introduce with respect to higher ranked answers (novelty), on top of their relevance estimation. This approach is common in Web search and information retrieval, yet it was not addressed within the CQA settings before, which is quite different from classic document retrieval. We propose a novel answer ranking algorithm that borrows ideas from aspect ranking and multi-document summarization, but adapts them to our scenario. Answers are ranked in a greedy manner, taking into account their relevance to the question as well as their novelty compared to higher ranked answers and their coverage of important aspects. An experiment over a collection of Health questions, using a manually annotated gold-standard dataset, shows that considering novelty for answer ranking improves the quality of the ranked answer list.",
"title": ""
},
{
"docid": "15886d83be78940609c697b30eb73b13",
"text": "Why is corruption—the misuse of public office for private gain— perceived to be more widespread in some countries than others? Different theories associate this with particular historical and cultural traditions, levels of economic development, political institutions, and government policies. This article analyzes several indexes of “perceived corruption” compiled from business risk surveys for the 1980s and 1990s. Six arguments find support. Countries with Protestant traditions, histories of British rule, more developed economies, and (probably) higher imports were less \"corrupt\". Federal states were more \"corrupt\". While the current degree of democracy was not significant, long exposure to democracy predicted lower corruption.",
"title": ""
},
{
"docid": "2b6087cab37980b1363b343eb0f81822",
"text": "We explore grip and motion sensing to afford new techniques that leverage how users naturally manipulate tablet and stylus devices during pen + touch interaction. We can detect whether the user holds the pen in a writing grip or tucked between his fingers. We can distinguish bare-handed inputs, such as drag and pinch gestures produced by the nonpreferred hand, from touch gestures produced by the hand holding the pen, which necessarily impart a detectable motion signal to the stylus. We can sense which hand grips the tablet, and determine the screen's relative orientation to the pen. By selectively combining these signals and using them to complement one another, we can tailor interaction to the context, such as by ignoring unintentional touch inputs while writing, or supporting contextually-appropriate tools such as a magnifier for detailed stroke work that appears when the user pinches with the pen tucked between his fingers. These and other techniques can be used to impart new, previously unanticipated subtleties to pen + touch interaction on tablets.",
"title": ""
},
{
"docid": "1eb415cae9b39655849537cdc007f51f",
"text": "Aesthetics has been the subject of long-standing debates by philosophers and psychologists alike. In psychology, it is generally agreed that aesthetic experience results from an interaction between perception, cognition, and emotion. By experimental means, this triad has been studied in the field of experimental aesthetics, which aims to gain a better understanding of how aesthetic experience relates to fundamental principles of human visual perception and brain processes. Recently, researchers in computer vision have also gained interest in the topic, giving rise to the field of computational aesthetics. With computing hardware and methodology developing at a high pace, the modeling of perceptually relevant aspect of aesthetic stimuli has a huge potential. In this review, we present an overview of recent developments in computational aesthetics and how they relate to experimental studies. In the first part, we cover topics such as the prediction of ratings, style and artist identification as well as computational methods in art history, such as the detection of influences among artists or forgeries. We also describe currently used computational algorithms, such as classifiers and deep neural networks. In the second part, we summarize results from the field of experimental aesthetics and cover several isolated image properties that are believed to have a effect on the aesthetic appeal of visual stimuli. Their relation to each other and to findings from computational aesthetics are discussed. Moreover, we compare the strategies in the two fields of research and suggest that both fields would greatly profit from a joined research effort. We hope to encourage researchers from both disciplines to work more closely together in order to understand visual aesthetics from an integrated point of view.",
"title": ""
}
] |
scidocsrr
|
c151d7b69fe246e8d94135fed336fb1a
|
How Parenting Style Influences Children : A Review of Controlling , Guiding , and Permitting Parenting Styles on Children ’ s Behavior , Risk-Taking , Mental Health , and Academic Achievement
|
[
{
"docid": "5cdb981566dfd741c9211902c0c59d50",
"text": "Since parental personality traits are assumed to play a role in parenting behaviors, the current study examined the relation between parental personality and parenting style among 688 Dutch parents of adolescents in the SMILE study. The study assessed Big Five personality traits and derived parenting styles (authoritative, authoritarian, indulgent, and uninvolved) from scores on the underlying dimensions of support and strict control. Regression analyses were used to determine which personality traits were associated with parenting dimensions and styles. As regards dimensions, the two aspects of personality reflecting interpersonal interactions (extraversion and agreeableness) were related to supportiveness. Emotional stability was associated with lower strict control. As regards parenting styles, extraverted, agreeable, and less emotionally stable individuals were most likely to be authoritative parents. Conscientiousness and openness did not relate to general parenting, but might be associated with more content-specific acts of parenting.",
"title": ""
}
] |
[
{
"docid": "d4ea09e7c942174c0301441a5c53b4ef",
"text": "As the cloud computing is a new style of computing over internet. It has many advantages along with some crucial issues to be resolved in order to improve reliability of cloud environment. These issues are related with the load management, fault tolerance and different security issues in cloud environment. In this paper the main concern is load balancing in cloud computing. The load can be CPU load, memory capacity, delay or network load. Load balancing is the process of distributing the load among various nodes of a distributed system to improve both resource utilization and job response time while also avoiding a situation where some of the nodes are heavily loaded while other nodes are idle or doing very little work. Load balancing ensures that all the processor in the system or every node in the network does approximately the equal amount of work at any instant of time. Many methods to resolve this problem has been came into existence like Particle Swarm Optimization, hash method, genetic algorithms and several scheduling based algorithms are there. In this paper we are proposing a method based on Ant Colony optimization to resolve the problem of load balancing in cloud environment.",
"title": ""
},
{
"docid": "f0957d315153e4101d3e3838d9060e30",
"text": "The study of deep recurrent neural networks (RNNs) and, in particular, of deep Reservoir Computing (RC) is gaining an increasing research attention in the neural networks community. The recently introduced Deep Echo State Network (DeepESN) model opened the way to an extremely efficient approach for designing deep neural networks for temporal data. At the same time, the study of DeepESNs allowed to shed light on the intrinsic properties of state dynamics developed by hierarchical compositions of recurrent layers, i.e. on the bias of depth in RNNs architectural design. In this paper, we summarize the advancements in the development, analysis and applications of DeepESNs.",
"title": ""
},
{
"docid": "b81c03c329e1a1bb319e99e9882dbf96",
"text": "While standardized and widely used benchmarks address either operational or real-time Business Intelligence (BI) workloads, the lack of a hybrid benchmark led us to the definition of a new, complex, mixed workload benchmark, called mixed workload CH-benCHmark. This benchmark bridges the gap between the established single-workload suites of TPC-C for OLTP and TPC-H for OLAP, and executes a complex mixed workload: a transactional workload based on the order entry processing of TPC-C and a corresponding TPC-H-equivalent OLAP query suite run in parallel on the same tables in a single database system. As it is derived from these two most widely used TPC benchmarks, the CH-benCHmark produces results highly relevant to both hybrid and classic single-workload systems.",
"title": ""
},
{
"docid": "3b0f5d827a58fc6077e7c304cd2d35b8",
"text": "BACKGROUND\nPatients suffering from depression experience significant mood, anxiety, and cognitive symptoms. Currently, most antidepressants work by altering neurotransmitter activity in the brain to improve these symptoms. However, in the last decade, research has revealed an extensive bidirectional communication network between the gastrointestinal tract and the central nervous system, referred to as the \"gut-brain axis.\" Advances in this field have linked psychiatric disorders to changes in the microbiome, making it a potential target for novel antidepressant treatments. The aim of this review is to analyze the current body of research assessing the effects of probiotics, on symptoms of depression in humans.\n\n\nMETHODS\nA systematic search of five databases was performed and study selection was completed using the preferred reporting items for systematic reviews and meta-analyses process.\n\n\nRESULTS\nTen studies met criteria and were analyzed for effects on mood, anxiety, and cognition. Five studies assessed mood symptoms, seven studies assessed anxiety symptoms, and three studies assessed cognition. The majority of the studies found positive results on all measures of depressive symptoms; however, the strain of probiotic, the dosing, and duration of treatment varied widely and no studies assessed sleep.\n\n\nCONCLUSION\nThe evidence for probiotics alleviating depressive symptoms is compelling but additional double-blind randomized control trials in clinical populations are warranted to further assess efficacy.",
"title": ""
},
{
"docid": "719c1b6ad0d945b68b34abceb1ed8e3b",
"text": "This editorial provides a behavioral science view on gamification and health behavior change, describes its principles and mechanisms, and reviews some of the evidence for its efficacy. Furthermore, this editorial explores the relation between gamification and behavior change frameworks used in the health sciences and shows how gamification principles are closely related to principles that have been proven to work in health behavior change technology. Finally, this editorial provides criteria that can be used to assess when gamification provides a potentially promising framework for digital health interventions.",
"title": ""
},
{
"docid": "37b5a10646e741f8b7430a2037f6a472",
"text": "Web pages often contain clutter (such as pop-up ads, unnecessary images and extraneous links) around the body of an article that distracts a user from actual content. Extraction of \"useful and relevant\" content from web pages has many applications, including cell phone and PDA browsing, speech rendering for the visually impaired, and text summarization. Most approaches to removing clutter or making content more readable involve changing font size or removing HTML and data components such as images, which takes away from a webpage's inherent look and feel. Unlike \"Content Reformatting\", which aims to reproduce the entire webpage in a more convenient form, our solution directly addresses \"Content Extraction\". We have developed a framework that employs easily extensible set of techniques that incorporate advantages of previous work on content extraction. Our key insight is to work with the DOM trees, rather than with raw HTML markup. We have implemented our approach in a publicly available Web proxy to extract content from HTML web pages.",
"title": ""
},
{
"docid": "c6ff28e06120ae3114b61d74fdcc0603",
"text": "This paper deals with an integrated starter-alternator (ISA) drive which exhibits a high torque for the engine start, a wide constant-power speed range for the engine speedup, and a high-speed generator mode operation for electric energy generation. Peculiarities of this ISA drive are thus its flux-weakening capability and the possibility to large torque overload at low speed. The focus on the design, analysis, and test of an interior permanent-magnet motor and drive for a prototype of ISA is given in this paper. In details, this paper reports on the design of stator and rotor geometries, the results of finite-element computations, the description of control system, and the experimental results of prototype tests.",
"title": ""
},
{
"docid": "f25c0b1fef38b7322197d61dd5dcac41",
"text": "Hepatocellular carcinoma (HCC) is one of the most common malignancies worldwide and one of the few malignancies with an increasing incidence in the USA. While the relationship between HCC and its inciting risk factors (e.g., hepatitis B, hepatitis C and alcohol liver disease) is well defined, driving genetic alterations are still yet to be identified. Clinically, HCC tends to be hypervascular and, for that reason, transarterial chemoembolization has proven to be effective in managing many patients with localized disease. More recently, angiogenesis has been targeted effectively with pharmacologic strategies, including monoclonal antibodies against VEGF and the VEGF receptor, as well as small-molecule kinase inhibitors of the VEGF receptor. Targeting angiogenesis with these approaches has been validated in several different solid tumors since the initial approval of bevacizumab for advanced colon cancer in 2004. In HCC, only sorafenib has been shown to extend survival in patients with advanced HCC and has opened the door for other anti-angiogenic strategies. Here, we will review the data supporting the targeting of the VEGF axis in HCC and the preclinical and early clinical development of bevacizumab.",
"title": ""
},
{
"docid": "15d7279c9bb80181d0075425b5f4516d",
"text": "Although the radio access network (RAN) part of mobile networks offers a significant opportunity for benefiting from the use of SDN ideas, this opportunity is largely untapped due to the lack of a software-defined RAN (SD-RAN) platform. We fill this void with FlexRAN, a flexible and programmable SD-RAN platform that separates the RAN control and data planes through a new, custom-tailored southbound API. Aided by virtualized control functions and control delegation features, FlexRAN provides a flexible control plane designed with support for real-time RAN control applications, flexibility to realize various degrees of coordination among RAN infrastructure entities, and programmability to adapt control over time and easier evolution to the future following SDN/NFV principles. We implement FlexRAN as an extension to a modified version of the OpenAirInterface LTE platform, with evaluation results indicating the feasibility of using FlexRAN under the stringent time constraints posed by the RAN. To demonstrate the effectiveness of FlexRAN as an SD-RAN platform and highlight its applicability for a diverse set of use cases, we present three network services deployed over FlexRAN focusing on interference management, mobile edge computing and RAN sharing.",
"title": ""
},
{
"docid": "44ea81d223e3c60c7b4fd1192ca3c4ba",
"text": "Existing classification and rule learning algorithms in machine learning mainly use heuristic/greedy search to find a subset of regularities (e.g., a decision tree or a set of rules) in data for classification. In the past few years, extensive research was done in the database community on learning rules using exhaustive search under the name of association rule mining. The objective there is to find all rules in data that satisfy the user-specified minimum support and minimum confidence. Although the whole set of rules may not be used directly for accurate classification, effective and efficient classifiers have been built using the rules. This paper aims to improve such an exhaustive search based classification system CBA (Classification Based on Associations). The main strength of this system is that it is able to use the most accurate rules for classification. However, it also has weaknesses. This paper proposes two new techniques to deal with these weaknesses. This results in remarkably accurate classifiers. Experiments on a set of 34 benchmark datasets show that on average the new techniques reduce the error of CBA by 17% and is superior to CBA on 26 of the 34 datasets. They reduce the error of the decision tree classifier C4.5 by 19%, and improve performance on 29 datasets. Similar good results are also achieved against the existing classification systems, RIPPER, LB and a Naïve-Bayes",
"title": ""
},
{
"docid": "0c7f2f7554927d61fbad7f2cb1045b03",
"text": "This paper reports a mobile application pre-launch scheme that is based on user's emotion. Smartphone application's usage and smartwatch's internal sensors are exploited to predict user's intension. User's emotion can be extracted from the PPG sensor in the smartwatch. In this paper, we extend previous App pre-launch service with user's emotion data. Applying machine learning algorithm to the training data, we can predict the application to be executed in near future. With our emotion context, we expect we can predict user's intension more accurately.",
"title": ""
},
{
"docid": "8453758f7ff533a1f4fa74db5630d8de",
"text": "Introduction This book presents methods for spatial data modeling, algorithms, access methods, and query processing. The main focus is on extending DBMS technology to accommodate spatial data. The book also includes spatial methods used in Geographic Information Systems (GISs). Historically, GISs developed separately from Database systems. GISs are specialized in all aspects of spatial data handling including spatial editing, re-projecting coordinate systems, and map display. However, their query abilities generally are limited or require low-level programming. Contrary to GIS software is the approach to include spatial data in DBMSs using ADTs and extensibility.",
"title": ""
},
{
"docid": "ed78bc1fde0cdd17a2989a58db3f173a",
"text": "One of the major steps for opinion mining is to extract product features. The vast majority of existing approaches focus on explicit feature identification, few attempts have been made to identify implicit features in reviews, however; people tend to express their opinions with simple structures and brachylogies, which lead to more implicit features in reviews. By analyzing the characteristics of product reviews in Chinese on the Internet, this paper proposes a novel context-based implicit feature extracting method. We extract the implicit features according to the opinion words and the similarity between the product features in the implicit features' context. We also build a matrix to show the relationship between opinion words and product features, then use a new algorithm to filter the noises in the matrix. Experiments show that our method provides higher accuracy in extracting the implicit features.",
"title": ""
},
{
"docid": "6f53b3c2aca2756a69d94f14eb98c715",
"text": "Rapid field evaluation of RO feed filtration requirements, selection of effective antiscalant type and dose, and estimation of suitable scale-free RO recovery level were demonstrated using a novel approach based on direct observation of mineral scaling and flux decline measurements, utilizing an automated Membrane Monitor (MeMo). The MeMo, operated in a stand-alone single-pass desalting mode, enabled rapid assessment of the adequacy of feed filtration by enabling direct observation of particulate deposition on the membrane surface. The diagnostic field study with RO feed water of high mineral scaling propensity revealed (via direct MeMo observation) that suspended particulates (even for feed water of turbidity <1 NTU) could serve as seeds for promoting surface crystal nucleation. With feed filtration optimized, a suitable maximum RO water recovery, with complete mineral scale suppression facilitated by an effective antiscalant dose, can be systematically and directly identified (via MeMo) in the field for a given feed water quality. Scale-free operating conditions, determined via standalone MeMo rapid diagnostic tests, were shown to be applicable to spiral-would RO system as validated via both flux decline measurements and ex-situ RO plant membrane scale monitoring. It was shown that the present approach is suitable for rapid field assessment of RO operability and it is particularly advantageous when evaluating water sources of composition that may vary both temporally and across the regions of interest.",
"title": ""
},
{
"docid": "3d59f488d91af8b9d204032a8d4f65c8",
"text": "Robotic grasp detection for novel objects is a challenging task, but for the last few years, deep learning based approaches have achieved remarkable performance improvements, up to 96.1% accuracy, with RGB-D data. In this paper, we propose fully convolutional neural network (FCNN) based methods for robotic grasp detection. Our methods also achieved state-of-the-art detection accuracy (up to 96.6%) with state-ofthe-art real-time computation time for high-resolution images (6-20ms per 360×360 image) on Cornell dataset. Due to FCNN, our proposed method can be applied to images with any size for detecting multigrasps on multiobjects. Proposed methods were evaluated using 4-axis robot arm with small parallel gripper and RGB-D camera for grasping challenging small, novel objects. With accurate vision-robot coordinate calibration through our proposed learning-based, fully automatic approach, our proposed method yielded 90% success rate.",
"title": ""
},
{
"docid": "e678d969670a3630a9535fb2814d96b6",
"text": "Selecting the most appropriate data examples to present a deep neural network (DNN) at different stages of training is an unsolved challenge. Though practitioners typically ignore this problem, a non-trivial data scheduling method may result in a significant improvement in both convergence and generalization performance. In this paper, we introduce Self-Paced Learning with Adaptive Deep Visual Embeddings (SPL-ADVisE), a novel end-to-end training protocol that unites self-paced learning (SPL) and deep metric learning (DML). We leverage the Magnet Loss to train an embedding convolutional neural network (CNN) to learn a salient representation space. The student CNN classifier dynamically selects similar instance-level training examples to form a mini-batch, where the easiness from the cross-entropy loss and the true diverseness of examples from the learned metric space serve as sample importance priors. To demonstrate the effectiveness of SPL-ADVisE, we use deep CNN architectures for the task of supervised image classification on several coarseand fine-grained visual recognition datasets. Results show that, across all datasets, the proposed method converges faster and reaches a higher final accuracy than other SPL variants, particularly on fine-grained classes.",
"title": ""
},
{
"docid": "3916e752fffbd121f5224a49883729d9",
"text": "Photovoltaic power plants (PVPPs) typically operate by tracking the maximum power point (MPP) in order to maximize the conversion efficiency. However, with the continuous increase of installed grid-connected PVPPs, power system operators have been experiencing new challenges, such as overloading, overvoltages, and operation during grid-voltage disturbances. Consequently, constant power generation (CPG) is imposed by grid codes. An algorithm for the calculation of the photovoltaic panel voltage reference, which generates a constant power from the PVPP, is introduced in this paper. The key novelty of the proposed algorithm is its applicability for both single- and two-stage PVPPs and flexibility to move the operation point to the right or left side of the MPP. Furthermore, the execution frequency of the algorithm and voltage increments between consecutive operating points are modified based on a hysteresis band controller in order to obtain fast dynamic response under transients and low-power oscillation during steady-state operation. The performance of the proposed algorithm for both single- and two-stage PVPPs is examined on a 50-kVA simulation setup of these topologies. Moreover, experimental results on a 1-kVA PV system validate the effectiveness of the proposed algorithm under various operating conditions, demonstrating functionalities of the proposed CPG algorithm.",
"title": ""
},
{
"docid": "d15ce9f62f88a07db6fa427fae61f26c",
"text": "This paper introduced a detail ElGamal digital signature scheme, and mainly analyzed the existing problems of the ElGamal digital signature scheme. Then improved the scheme according to the existing problems of ElGamal digital signature scheme, and proposed an implicit ElGamal type digital signature scheme with the function of message recovery. As for the problem that message recovery not being allowed by ElGamal signature scheme, this article approached a method to recover message. This method will make ElGamal signature scheme have the function of message recovery. On this basis, against that part of signature was used on most attacks for ElGamal signature scheme, a new implicit signature scheme with the function of message recovery was formed, after having tried to hid part of signature message and refining forthcoming implicit type signature scheme. The safety of the refined scheme was anlyzed, and its results indicated that the new scheme was better than the old one.",
"title": ""
},
{
"docid": "fba48672e859a7606707406267dd0957",
"text": "We suggest a spectral histogram, defined as the marginal distribution of filter responses, as a quantitative definition for a texton pattern. By matching spectral histograms, an arbitrary image can be transformed to an image with similar textons to the observed. We use the chi(2)-statistic to measure the difference between two spectral histograms, which leads to a texture discrimination model. The performance of the model well matches psychophysical results on a systematic set of texture discrimination data and it exhibits the nonlinearity and asymmetry phenomena in human texture discrimination. A quantitative comparison with the Malik-Perona model is given, and a number of issues regarding the model are discussed.",
"title": ""
},
{
"docid": "5b703f3d16b733795181a9e7ad5235ea",
"text": "This paper addresses the lack of a commonly used, standard dataset and established benchmarking problems for physical activity monitoring. A new dataset - recorded from 18 activities performed by 9 subjects, wearing 3 IMUs and a HR-monitor - is created and made publicly available. Moreover, 4 classification problems are benchmarked on the dataset, using a standard data processing chain and 5 different classifiers. The benchmark shows the difficulty of the classification tasks and exposes new challenges for physical activity monitoring.",
"title": ""
}
] |
scidocsrr
|
c4e44387ea101cc4a15940d7ec6f1dad
|
Face recognition based on curvelets and local binary pattern features via using local property preservation
|
[
{
"docid": "7655df3f32e6cf7a5545ae2231f71e7c",
"text": "Many problems in information processing involve some form of dimensionality reduction. In this thesis, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. Theoretical analysis shows that PCA, LPP, and Linear Discriminant Analysis (LDA) can be obtained from different graph models. Central to this is a graph structure that is inferred on the data points. LPP finds a projection that respects this graph structure. We have applied our algorithms to several real world applications, e.g. face analysis and document representation.",
"title": ""
}
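A compact sketch of Locality Preserving Projections as described above: a heat-kernel k-NN graph followed by the generalized eigenproblem. numpy/scipy are assumed, and the neighborhood size, kernel width and regularizer are illustrative choices:

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, n_components=2, k=5, t=1.0, reg=1e-6):
    """X: (n_samples, n_features). Returns a projection matrix (n_features, n_components)."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]                  # k nearest neighbours (skip self)
        W[i, nbrs] = np.exp(-d2[i, nbrs] / t)              # heat-kernel weights
    W = np.maximum(W, W.T)                                 # symmetrize the graph
    D = np.diag(W.sum(1))
    L = D - W                                              # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + reg * np.eye(X.shape[1])
    vals, vecs = eigh(A, B)                                # generalized eigenproblem, ascending
    return vecs[:, :n_components]                          # smallest eigenvalues preserve locality

X = np.random.rand(60, 10)
P = lpp(X)
print((X @ P).shape)   # (60, 2)
```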
] |
[
{
"docid": "0b6c8d79180a4a17d4da661d6ab0b983",
"text": "The online social media such as Facebook, Twitter and YouTube has been used extensively during disaster and emergency situation. Despite the advantages offered by these services on supplying information in vague situation by citizen, we raised the issue of spreading misinformation on Twitter by using retweets. Accordingly, in this study, we conduct a user survey (n = 133) to investigate what is the user’s action towards spread message in Twitter, and why user decide to perform retweet on the spread message. As the result of the factor analyses, we extracted 3 factors on user’s action towards spread message which are: 1) Desire to spread the retweet messages as it is considered important, 2) Mark the retweet messages as favorite using Twitter “Favorite” function, and 3) Search for further information about the content of the retweet messages. Then, we further analyze why user decides to perform retweet. The results reveal that user has desire to spread the message which they think is important and the reason why they retweet it is because of the need to retweet, interesting tweet content and the tweet user. The results presented in this paper provide an understanding on user behavior of information diffusion, with the aim to reduce the spread of misinformation using Twitter during emergency situation.",
"title": ""
},
{
"docid": "d612ca22b9895c0e85f2b64327a1b22c",
"text": "Physical inactivity has been associated with increasing prevalence and mortality of cardiovascular and other diseases. The purpose of this study is to identify if there is an association between, self–efficacy, mental health, and physical inactivity among university students. The study comprises of 202 males and 692 females age group 18-25 years drawn from seven faculties selected using a table of random numbers. Questionnaires were used for the data collection. The findings revealed that the prevalence of physical inactivity among the respondents was 41.4%. Using a univariate analysis, the study showed that there was an association between gender (female), low family income, low self-efficacy, respondents with mental health probable cases and physical inactivity (p<0.05).Using a multivariate analysis, physical inactivity was higher among females(OR = 3.72, 95% CI = 2.399-5.788), low family income (OR = 4.51, 95% CI = 3.266 – 6.241), respondents with mental health probable cases (OR = 1.58, 95% CI = 1.1362.206) and low self-efficacy for pysical activity(OR = 1.86, 95% CI = 1.350 2.578).Conclusively there is no significant decrease in physical inactivity among university students when compared with previous studies in this population, it is therefore recommended that counselling on mental health, physical activity awareness among new university students should be encouraged. Keyword:Exercise,Mental Health, Self-Efficacy,Physical Inactivity, University students",
"title": ""
},
{
"docid": "4d3baff85c302b35038f35297a8cdf90",
"text": "Most speech recognition applications in use today rely heavily on confidence measure for making optimal decisions. In this paper, we aim to answer the question: what can be done to improve the quality of confidence measure if we cannot modify the speech recognition engine? The answer provided in this paper is a post-processing step called confidence calibration, which can be viewed as a special adaptation technique applied to confidence measure. Three confidence calibration methods have been developed in this work: the maximum entropy model with distribution constraints, the artificial neural network, and the deep belief network. We compare these approaches and demonstrate the importance of key features exploited: the generic confidence-score, the application-dependent word distribution, and the rule coverage ratio. We demonstrate the effectiveness of confidence calibration on a variety of tasks with significant normalized cross entropy increase and equal error rate reduction.",
"title": ""
},
{
"docid": "2a00a75924b608f053476e8b28b4ce0f",
"text": "This study aims to investigate the influence of different patterns of collaboration on the citation impact of Harvard University’s publications. Those documents published by researchers affiliated with Harvard University in WoS from 2000–2009, constituted the population of the research which was counted for 124,937 records. Based on the results, only 12% of Harvard publications were single author publications. Different patterns of collaboration were investigated in different subject fields. In all 22 examined fields, the number of co-authored publications is much higher than single author publications. In fact, more than 60% of all publications in each field are multi-author publications. Also, the normalized citation per paper for co-authored publications is higher than that of single author publications in all fields. In addition, the largest number of publications in all 22 fields were also published through inter-institutional collaboration and were as a result of collaboration among domestic researchers and not international ones. In general, the results of the study showed that there was a significant positive correlation between the number of authors and the number of citations in Harvard publications. In addition, publications with more number of institutions have received more number of citations, whereas publications with more number of foreign collaborators were not much highly cited.",
"title": ""
},
{
"docid": "4560e1b7318013be0688b8e73692fda4",
"text": "This paper introduces a new real-time object detection approach named Yes-Net. It realizes the prediction of bounding boxes and class via single neural network like YOLOv2 and SSD, but owns more efficient and outstanding features. It combines local information with global information by adding the RNN architecture as a packed unit in CNN model to form the basic feature extractor. Independent anchor boxes coming from full-dimension kmeans is also applied in Yes-Net, it brings better average IOU than grid anchor box. In addition, instead of NMS, YesNet uses RNN as a filter to get the final boxes, which is more efficient. For 416 × 416 input, Yes-Net achieves 74.3% mAP on VOC2007 test at 39 FPS on an Nvidia Titan X Pascal.",
"title": ""
},
{
"docid": "dcf24a58fe16912556de7d9f5395dba9",
"text": "This review provides detailed insight on the effects of magnetic fields on germination, growth, development, and yield of plants focusing on ex vitro growth and development and discussing the possible physiological and biochemical responses. The MFs considered in this review range from the nanoTesla (nT) to geomagnetic levels, up to very strong MFs greater than 15 Tesla (T) and also super-weak MFs (near 0 T). The theoretical bases of the action of MFs on plant growth, which are complex, are not discussed here and thus far, there is limited mathematical background about the action of MFs on plant growth. MFs can positively influence the morphogenesis of several plants which allows them to be used in practical situations. MFs have thus far been shown to modify seed germination and affect seedling growth and development in a wide range of plants, including field, fodder, and industrial crops; cereals and pseudo-cereals; grasses; herbs and medicinal plants; horticultural crops (vegetables, fruits, ornamentals); trees; and model crops. This is important since MFs may constitute a non-residual and non-toxic stimulus. In addition to presenting and summarizing the effects of MFs on plant growth and development, we also provide possible physiological and biochemical explanations for these responses including stress-related responses of plants, explanations based on dia-, para-, and ferromagnetism, oriented movements of substances, and cellular and molecular changes.",
"title": ""
},
{
"docid": "14cb0e8fc4e8f82dc4e45d8562ca4bb2",
"text": "Information security is one of the most important factors to be considered when secret information has to be communicated between two parties. Cryptography and steganography are the two techniques used for this purpose. Cryptography scrambles the information, but it reveals the existence of the information. Steganography hides the actual existence of the information so that anyone else other than the sender and the recipient cannot recognize the transmission. In steganography the secret information to be communicated is hidden in some other carrier in such a way that the secret information is invisible. In this paper an image steganography technique is proposed to hide audio signal in image in the transform domain using wavelet transform. The audio signal in any format (MP3 or WAV or any other type) is encrypted and carried by the image without revealing the existence to anybody. When the secret information is hidden in the carrier the result is the stego signal. In this work, the results show good quality stego signal and the stego signal is analyzed for different attacks. It is found that the technique is robust and it can withstand the attacks. The quality of the stego image is measured by Peak Signal to Noise Ratio (PSNR), Structural Similarity Index Metric (SSIM), Universal Image Quality Index (UIQI). The quality of extracted secret audio signal is measured by Signal to Noise Ratio (SNR), Squared Pearson Correlation Coefficient (SPCC). The results show good values for these metrics. © 2015 The Authors. Published by Elsevier B.V. Peer-review under responsibility of organizing committee of the Graph Algorithms, High Performance Implementations and Applications (ICGHIA2014).",
"title": ""
},
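A simplified sketch of the wavelet-domain hiding idea in the passage above, embedding a bit stream into Haar detail coefficients by quantization. PyWavelets is assumed to be available; the paper's audio encryption, multi-band embedding and robustness machinery are not reproduced:

```python
# Hide a short bit stream in an image's diagonal detail coefficients (QIM-style).
import numpy as np
import pywt   # PyWavelets, assumed available

def embed(image, bits, step=8.0):
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'haar')
    flat = cD.ravel().copy()
    for i, b in enumerate(bits):
        q = np.round(flat[i] / step)
        if int(q) % 2 != b:                  # force quantizer parity to match the bit
            q += 1
        flat[i] = q * step
    return pywt.idwt2((cA, (cH, cV, flat.reshape(cD.shape))), 'haar')

def extract(stego, n_bits, step=8.0):
    _, (_, _, cD) = pywt.dwt2(stego.astype(float), 'haar')
    return [int(np.round(c / step)) % 2 for c in cD.ravel()[:n_bits]]

img = np.random.randint(0, 256, (64, 64))
bits = [1, 0, 1, 1, 0, 0, 1, 0]
print(extract(embed(img, bits), len(bits)) == bits)   # True
```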
{
"docid": "6be8c4872607b888028bf9ce3da65d49",
"text": "This paper represents the design and analysis of Star Fractal Antenna meta-material antennas. They are designed on a defected finite ground. It forms a inductive and capacitive circuit. It has left hand characteristics of meta-material. The fractal concept helps in providing multiband from 5.65GHz to 17.50 GHz with return loss better than -10dB. It resulted in desired gain and many resonant frequency. HFSS v 13 is used for the simulation. This paper presents antennas with high gain and multiband applications.",
"title": ""
},
{
"docid": "00bfe328340d225e637de6c3f35f78f8",
"text": "Automatic multi-document summarization is a process of generating a summary that contains the most important information from multiple documents. In this thesis, we design an automatic multi-document summarization system using different abstraction-based methods and submodularity. Our proposed model considers summarization as a budgeted submodular function maximization problem. The model integrates three important measures of a summary namely importance, coverage, and non-redundancy, and we design a submodular function for each of them. In addition, we integrate sentence compression and sentence merging. When evaluated on the DUC 2004 data set, our generic summarizer has outperformed the state-of-the-art summarization systems in terms of ROUGE-1 recall and f1-measure. For query-focused summarization, we used the DUC 2007 data set where our system achieves statistically similar results to several well-established methods in terms of the ROUGE-2 measure.",
"title": ""
},
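A generic cost-benefit greedy sketch for the budgeted submodular maximization formulation described above; the importance/coverage/non-redundancy objective is abstracted into a single callable, and the toy usage maximizes word coverage under a length budget:

```python
# Greedy selection for budgeted monotone submodular maximization.
def budgeted_greedy(sentences, f, cost, budget):
    """sentences: list of ids; f: list -> float (monotone submodular); cost: id -> float."""
    summary, spent = [], 0.0
    remaining = set(sentences)
    while remaining:
        best, best_gain = None, 0.0
        for s in sorted(remaining):
            if spent + cost(s) > budget:
                continue
            gain = (f(summary + [s]) - f(summary)) / cost(s)   # marginal gain per unit cost
            if gain > best_gain:
                best, best_gain = s, gain
        if best is None:
            break
        summary.append(best)
        spent += cost(best)
        remaining.discard(best)
    return summary

# Toy usage: maximize word coverage under a length budget of 4 words.
docs = {0: {"apple", "pie"}, 1: {"apple"}, 2: {"banana", "split"}}
f = lambda S: len(set().union(*[docs[s] for s in S])) if S else 0
print(budgeted_greedy(list(docs), f, lambda s: len(docs[s]), budget=4))   # [0, 2]
```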
{
"docid": "2fdd3e223a0b3baa48345bec12d85530",
"text": "Lung cancer is a highly heterogeneous disease in terms of both underlying genetic lesions and response to therapeutic treatments. We performed deep whole-genome sequencing and transcriptome sequencing on 19 lung cancer cell lines and three lung tumor/normal pairs. Overall, our data show that cell line models exhibit similar mutation spectra to human tumor samples. Smoker and never-smoker cancer samples exhibit distinguishable patterns of mutations. A number of epigenetic regulators, including KDM6A, ASH1L, SMARCA4, and ATAD2, are frequently altered by mutations or copy number changes. A systematic survey of splice-site mutations identified 106 splice site mutations associated with cancer specific aberrant splicing, including mutations in several known cancer-related genes. RAC1b, an isoform of the RAC1 GTPase that includes one additional exon, was found to be preferentially up-regulated in lung cancer. We further show that its expression is significantly associated with sensitivity to a MAP2K (MEK) inhibitor PD-0325901. Taken together, these data present a comprehensive genomic landscape of a large number of lung cancer samples and further demonstrate that cancer-specific alternative splicing is a widespread phenomenon that has potential utility as therapeutic biomarkers. The detailed characterizations of the lung cancer cell lines also provide genomic context to the vast amount of experimental data gathered for these lines over the decades, and represent highly valuable resources for cancer biology.",
"title": ""
},
{
"docid": "eb5043aa57e6140bca2722a590eec656",
"text": "The estimation of correspondences between two images resp. point sets is a core problem in computer vision. One way to formulate the problem is graph matching leading to the quadratic assignment problem which is NP-hard. Several so called second order methods have been proposed to solve this problem. In recent years hypergraph matching leading to a third order problem became popular as it allows for better integration of geometric information. For most of these third order algorithms no theoretical guarantees are known. In this paper we propose a general framework for tensor block coordinate ascent methods for hypergraph matching. We propose two algorithms which both come along with the guarantee of monotonic ascent in the matching score on the set of discrete assignment matrices. In the experiments we show that our new algorithms outperform previous work both in terms of achieving better matching scores and matching accuracy. This holds in particular for very challenging settings where one has a high number of outliers and other forms of noise.",
"title": ""
},
{
"docid": "e14b936ecee52765078d77088e76e643",
"text": "In this paper, a novel code division multiplexing (CDM) algorithm-based reversible data hiding (RDH) scheme is presented. The covert data are denoted by different orthogonal spreading sequences and embedded into the cover image. The original image can be completely recovered after the data have been extracted exactly. The Walsh Hadamard matrix is employed to generate orthogonal spreading sequences, by which the data can be overlappingly embedded without interfering each other, and multilevel data embedding can be utilized to enlarge the embedding capacity. Furthermore, most elements of different spreading sequences are mutually cancelled when they are overlappingly embedded, which maintains the image in good quality even with a high embedding payload. A location-map free method is presented in this paper to save more space for data embedding, and the overflow/underflow problem is solved by shrinking the distribution of the image histogram on both the ends. This would further improve the embedding performance. Experimental results have demonstrated that the CDM-based RDH scheme can achieve the best performance at the moderate-to-high embedding capacity compared with other state-of-the-art schemes.",
"title": ""
},
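A short sketch of the Sylvester construction for the Walsh-Hadamard matrix the passage uses to generate orthogonal spreading sequences, plus a toy demonstration of why overlapped embedding can be separated again by correlation:

```python
import numpy as np

def hadamard(n):
    """n must be a power of two; rows are mutually orthogonal +/-1 sequences."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(8)
print(H @ H.T)             # 8 * identity: rows are mutually orthogonal

# Overlapped embedding idea: a sum of (bit_i * row_i) can be separated by
# correlating with each row, because the cross-correlations cancel.
bits = np.array([1, -1, 1, -1, 1, 1, -1, -1])
mixed = bits @ H           # overlap all spreading sequences
print((mixed @ H.T) // 8)  # recovers the bits
```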
{
"docid": "b3ddcc6dbe3e118dfd0630feb42713c9",
"text": "This thesis details the use of a programmable logic device to increase the playing strength of a chess program. The time–consuming task of generating chess moves is relegated to hardware in order to increase the processing speed of the search algorithm. A simpler inter–square connection protocol reduces the number of wires between chess squares, when compared to the Deep Blue design. With this interconnection scheme, special chess moves are easily resolved. Furthermore, dynamically programmable arbiters are introduced for optimal move ordering. Arbiter centrality is also shown to improve move ordering, thereby creating smaller search trees. The move generator is designed to allow the integration of crucial move ordering heuristics. With its new hardware move generator, the chess program’s playing ability is noticeably improved.",
"title": ""
},
{
"docid": "615b7c70053b36f30be66df2374194d1",
"text": "OBJECTIVE\nto evaluate sleep quality and daytime sleepiness of residents and medical students.\n\n\nMETHODS\nwe applied a socio-demographic questionnaire, the Pittsburgh Sleep Quality Index (PSQI) and the Epworth Sleepiness Scale (ESS) to a population of residents and medical students.\n\n\nRESULTS\nhundred five residents and 101 undergraduate medical students participated. Residents presented higher mean PSQI (6.76±2.81) with poorer sleep quality when compared with undergraduates (5.90±2.39); Both had similar measures of sleepiness by ESS (p=0.280), but residents showed lower duration and lower subjective sleep quality.\n\n\nCONCLUSION\nmedical students and residents presented sleep deprivation, indicating the need for preventive actions in the medical area.",
"title": ""
},
{
"docid": "29ccb1a24069d94c21b4e26b6ece0046",
"text": "The business market has been undergoing a paradigmatic change. The rise of the Internet, market fragmentation, and increasing global competition is changing the “value” that business marketers provide. This paradigmatic transformation requires changes in the way companies are organized to create and deliver value to their customers. Business marketers have to continuously increase their contribution to the value chain. If not, value migrates from a given business paradigm (e.g., minicomputers and DEC) to alternate business paradigms (e.g., flexibly manufactured PCs and Dell). This article focuses on ways in which business marketers are creating value in the Internet and digital age. Examples from business marketers are discussed and managerial implications are highlighted. © 2001 Elsevier Science Inc. All rights reserved.",
"title": ""
},
{
"docid": "0dc5a8b5b0c3d8424b510f5910f26976",
"text": "In 1992, Tani et al. proposed remotely operating machines in a factory by manipulating a live video image on a computer screen. In this paper we revisit this metaphor and investigate its suitability for mobile use. We present Touch Projector, a system that enables users to interact with remote screens through a live video image on their mobile device. The handheld device tracks itself with respect to the surrounding displays. Touch on the video image is \"projected\" onto the target display in view, as if it had occurred there. This literal adaptation of Tani's idea, however, fails because handheld video does not offer enough stability and control to enable precise manipulation. We address this with a series of improvements, including zooming and freezing the video image. In a user study, participants selected targets and dragged targets between displays using the literal and three improved versions. We found that participants achieved highest performance with automatic zooming and temporary image freezing.",
"title": ""
},
{
"docid": "a57caf61fdae1ab9c1fc4d944ebe03cd",
"text": "The handiness and ease of use of tele-technology like mobile phones has surged the growth of ICT in developing countries like India than ever. Mobile phones are showing overwhelming responses and have helped farmers to do the work on timely basis and stay connected with the outer farming world. But mobile phones are of no use when it comes to the real-time farm monitoring or accessing the accurate information because of the little research and application of mobile phone in agricultural field for such uses. The current demand of use of WSN in agricultural fields has revolutionized the farming experiences. In Precision Agriculture, the contribution of WSN are numerous staring from monitoring soil health, plant health to the storage of crop yield. Due to pressure of population and economic inflation, a lot of pressure is on farmers to produce more out of their fields with fewer resources. This paper gives brief insight into the relation of plant disease prediction with the help of wireless sensor networks. Keywords— Plant Disease Monitoring, Precision Agriculture, Environmental Parameters, Wireless Sensor Network (WSN)",
"title": ""
},
{
"docid": "6f6733c35f78b00b771cf7099c953954",
"text": "This paper proposes an asymmetrical pulse width modulation (APWM) with frequency tracking control of full bridge series resonant inverter for induction heating application. In this method, APWM is used as power regulation, and phased locked loop (PLL) is used to attain zero-voltage-switching (ZVS) over a wide load range. The complete closed loop control model is obtained using small signal analysis. The validity of the proposed control is verified by simulation results.",
"title": ""
},
{
"docid": "dab15cc440d17efc5b3d5b2454cac591",
"text": "The performance of a circular patch antenna with slotted ground plane for body centric communication mainly in the health care monitoring systems for Onbody application is researched. The CP antenna is intended for utilization in UWB, body centric communication applications i.e. in between 3.1 to 10.6 GHz. The proposed antenna is CP antenna of (30 x 30 x 1.6) mm. It is simulated via CST microwave studio suite. This CP antenna covers the entire ultra wide frequency range (3.9174-13.519) GHz (9.6016) GHz with the VSWR of (3.818 GHz13.268 GHz). Antenna’s group delay is to be observed as 3.5 ns. The simulated results of antenna are given in terms of , VSWR, group delay and radiation pattern. Keywords— UWB, Body Worn Antenna, BodyCentric Communication.",
"title": ""
}
] |
scidocsrr
|
6fbcb0ba4ca8f6f13292a47456230607
|
A Practical Roadside Camera Calibration Method Based on Least Squares Optimization
|
[
{
"docid": "81ef390009fb64bf235147bc0e186bab",
"text": "In this paper, we show how to calibrate a camera and to recover the geometry and the photometry (textures) of objects from a single image. The aim of this work is to make it possible walkthrough and augment reality in a 3D model reconstructed from a single image. The calibration step does not need any calibration target and makes only four assumptions: (1) the single image contains at least two vanishing points, (2) the length (in 3D space) of one line segment (for determining the translation vector) in the image is known, (3) the principle point is the center of the image, and (4) the aspect ratio is fixed by the user. Each vanishing point is determined from a set of parallel lines. These vanishing points help determine a 3D world coordinate system R o. After having computed the focal length, the rotation matrix and the translation vector are evaluated in turn for describing the rigid motion between R o and the camera coordinate system R c. Next, the reconstruction step consists in placing, rotating, scaling, and translating a rectangular 3D box that must fit at best with the potential objects within the scene as seen through the single image. With each face of a rectangular box, a texture that may contain holes due to invisible parts of certain objects is assigned. We show how the textures are extracted and how these holes are located and filled. Our method has been applied to various real images (pictures scanned from books, photographs) and synthetic images.",
"title": ""
},
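A sketch of the focal-length step the passage describes: with the principal point assumed at the image centre, two vanishing points v1, v2 of orthogonal scene directions satisfy (v1 - c) . (v2 - c) + f^2 = 0. The synthetic camera below is an illustrative assumption:

```python
import numpy as np

def focal_from_vanishing_points(v1, v2, principal_point):
    v1, v2, c = map(np.asarray, (v1, v2, principal_point))
    dot = np.dot(v1 - c, v2 - c)
    if dot >= 0:
        raise ValueError("vanishing points not consistent with orthogonal directions")
    return np.sqrt(-dot)

# Synthetic check: a camera with f=800 viewing the orthogonal directions (1,0,1)
# and (-1,0,1); each vanishing point is f*(x/z, y/z) + principal point.
c = np.array([320.0, 240.0])
v1 = 800 * np.array([1.0, 0.0]) + c
v2 = 800 * np.array([-1.0, 0.0]) + c
print(focal_from_vanishing_points(v1, v2, c))   # 800.0
```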
{
"docid": "5e182532bfd10dee3f8d57f14d1f4455",
"text": "Camera calibrating is a crucial problem for further metric scene measurement. Many techniques and some studies concerning calibration have been presented in the last few years. However, it is still di1cult to go into details of a determined calibrating technique and compare its accuracy with respect to other methods. Principally, this problem emerges from the lack of a standardized notation and the existence of various methods of accuracy evaluation to choose from. This article presents a detailed review of some of the most used calibrating techniques in which the principal idea has been to present them all with the same notation. Furthermore, the techniques surveyed have been tested and their accuracy evaluated. Comparative results are shown and discussed in the article. Moreover, code and results are available in internet. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "8cf336a0d57681f55b9fadbb769996a4",
"text": "Games-based learning has captured the interest of educationalists and industrialists who seek to exploit the characteristics of computer games as they are perceived by some to be a potentially effective approach for teaching and learning. Despite this interest in using games-based learning there is a dearth of empirical evidence supporting the validity of the approach covering the wider context of gaming and education. This study presents a large scale gaming survey, involving 887 students from 13 different Higher Education (HE) institutes in Scotland and the Netherlands, which examines students’ characteristics related to their gaming preferences, game playing habits, and their perceptions and thoughts on the use of games in education. It presents a comparison of three separate groups of students: a group in regular education in a Scottish university, a group in regular education in universities in the Netherlands and a distance learning group from a university in the Netherlands. This study addresses an overall research question of: Can computer games be used for educational purposes at HE level in regular and distance education in different countries? The study then addresses four sub-research questions related to the overall research question: What are the different game playing habits of the three groups? What are the different motivations for playing games across the three groups? What are the different reasons for using games in HE across the three groups? What are the different attitudes towards games across the three groups? To our knowledge this is the first in-depth cross-national survey on gaming and education. We found that a large number of participants believed that computer games could be used at HE level for educational purposes and that further research in the area of game playing habits, motivations for playing computer games and motivations for playing computer games in education are worthy of extensive further investigation. We also found a clear distinction between the views of students in regular education and those in distance education. Regular education students in both countries rated all motivations for playing computer games as significantly more important than distance education students. Also the results suggest that Scottish students aim to enhance their social experience with regards to competition and cooperation, while Dutch students aim to enhance their leisurely experience with regards to leisure, feeling good, preventing boredom and excitement. 2013 Elsevier Ltd. All rights reserved. Hainey), wim.westera@ou.nl (W. Westera), thomas.connolly@uws.ac.uk (T.M. Connolly), gavin.baxter@uws.ac.uk",
"title": ""
},
{
"docid": "35272c838889e6b9645059168c6c5858",
"text": "Several decades of research in computer and primate vision have resulted in many models (some specialized for one problem, others more general) and invaluable experimental data. Here, to help focus research efforts onto the hardest unsolved problems, and bridge computer and human vision, we define a battery of 5 tests that measure the gap between human and machine performances in several dimensions (generalization across scene categories, generalization from images to edge maps and line drawings, invariance to rotation and scaling, local/global information with jumbled images, and object recognition performance). We measure model accuracy and the correlation between model and human error patterns. Experimenting over 7 datasets, where human data is available, and gauging 14 well-established models, we find that none fully resembles humans in all aspects, and we learn from each test which models and features are more promising in approaching humans in the tested dimension. Across all tests, we find that models based on local edge histograms consistently resemble humans more, while several scene statistics or \"gist\" models do perform well with both scenes and objects. While computer vision has long been inspired by human vision, we believe systematic efforts, such as this, will help better identify shortcomings of models and find new paths forward.",
"title": ""
},
{
"docid": "a60d79008bfb7cccee262667b481d897",
"text": "It is well known that utterances convey a great deal of information about the speaker in addition to their semantic content. One such type of information consists of cues to the speaker’s personality traits, the most fundamental dimension of variation between humans. Recent work explores the automatic detection of other types of pragmatic variation in text and conversation, such as emotion, deception, speaker charisma, dominance, point of view, subjectivity, opinion and sentiment. Personality affects these other aspects of linguistic production, and thus personality recognition may be useful for these tasks, in addition to many other potential applications. However, to date, there is little work on the automatic recognition of personality traits. This article reports experimental results for recognition of all Big Five personality traits, in both conversation and text, utilising both self and observer ratings of personality. While other work reports classification results, we experiment with classification, regression and ranking models. For each model, we analyse the effect of different feature sets on accuracy. Results show that for some traits, any type of statistical model performs significantly better than the baseline, but ranking models perform best overall. We also present an experiment suggesting that ranking models are more accurate than multi-class classifiers for modelling personality. In addition, recognition models trained on observed personality perform better than models trained using selfreports, and the optimal feature set depends on the personality trait. A qualitative analysis of the learned models confirms previous findings linking language and personality, while revealing many new linguistic markers.",
"title": ""
},
{
"docid": "f4db5b7cc70661ff780c96cd58f6624e",
"text": "Error Thresholds and Their Relation to Optimal Mutation Rates p. 54 Are Artificial Mutation Biases Unnatural? p. 64 Evolving Mutation Rates for the Self-Optimisation of Genetic Algorithms p. 74 Statistical Reasoning Strategies in the Pursuit and Evasion Domain p. 79 An Evolutionary Method Using Crossover in a Food Chain Simulation p. 89 On Self-Reproduction and Evolvability p. 94 Some Techniques for the Measurement of Complexity in Tierra p. 104 A Genetic Neutral Model for Quantitative Comparison of Genotypic Evolutionary Activity p. 109",
"title": ""
},
{
"docid": "4c607b142149504c2edad475d5613b86",
"text": "This study uses a metatriangulation approach to explore the relationships between power and information technology impacts, development or deployment, and management or use in a sample Jasperson et al./Power & IT Research 398 MIS Quarterly Vol. 26 No. 4/December 2002 of 82 articles from 12 management and MIS journals published between 1980 and 1999. We explore the multiple paradigms underlying this research by applying two sets of lenses to examine the major findings from our sample. The technological imperative, organizational imperative , and emergent perspectives (Markus and Robey 1988) are used as one set of lenses to better understand researchers' views regarding the causal structure between IT and organizational power. A second set of lenses, which includes the rational, pluralist, interpretive, and radical perspectives (Bradshaw-Camball and Murray 1991), is used to focus on researchers' views of the role of power and different IT outcomes. We apply each lens separately to describe patterns emerging from the previous power and IT studies. In addition, we discuss the similarities and differences that occur when the two sets of lenses are simultaneously applied. We draw from this discussion to develop metaconjectures, (i.e., propositions that can be interpreted from multiple perspectives), and to suggest guidelines for studying power in future research.",
"title": ""
},
{
"docid": "be017adea5e5c5f183fd35ac2ff6b614",
"text": "In nationally representative yearly surveys of United States 8th, 10th, and 12th graders 1991-2016 (N = 1.1 million), psychological well-being (measured by self-esteem, life satisfaction, and happiness) suddenly decreased after 2012. Adolescents who spent more time on electronic communication and screens (e.g., social media, the Internet, texting, gaming) and less time on nonscreen activities (e.g., in-person social interaction, sports/exercise, homework, attending religious services) had lower psychological well-being. Adolescents spending a small amount of time on electronic communication were the happiest. Psychological well-being was lower in years when adolescents spent more time on screens and higher in years when they spent more time on nonscreen activities, with changes in activities generally preceding declines in well-being. Cyclical economic indicators such as unemployment were not significantly correlated with well-being, suggesting that the Great Recession was not the cause of the decrease in psychological well-being, which may instead be at least partially due to the rapid adoption of smartphones and the subsequent shift in adolescents' time use. (PsycINFO Database Record",
"title": ""
},
{
"docid": "eefb6ec5984b6641baedecc0bf3b44c4",
"text": "Gradient descent is prevalent for large-scale optimization problems in machine learning; especially it nowadays plays a major role in computing and correcting the connection strength of neural networks in deep learning. However, many gradient-based optimization methods contain more sensitive hyper-parameters which require endless ways of configuring. In this paper, we present a novel adaptive mechanism called adaptive exponential decay rate (AEDR). AEDR uses an adaptive exponential decay rate rather than a fixed and preconfigured one, and it can allow us to eliminate one otherwise tuning sensitive hyper-parameters. AEDR also can be used to calculate exponential decay rate adaptively by employing the moving average of both gradients and squared gradients over time. The mechanism is then applied to Adadelta and Adam; it reduces the number of hyper-parameters of Adadelta and Adam to only a single one to be turned. We use neural network of long short-term memory and LeNet to demonstrate how learning rate adapts dynamically. We show promising results compared with other state-of-the-art methods on four data sets, the IMDB (movie reviews), SemEval-2016 (sentiment analysis in twitter) (IMDB), CIFAR-10 and Pascal VOC-2012.",
"title": ""
},
{
"docid": "c349eccb9a6d5b13289e2b24b1003cce",
"text": "A new hybrid model which combines wavelets and Artificial Neural Network (ANN) called wavelet neural network (WNN) model was proposed in the current study and applied for time series modeling of river flow. The time series of daily river flow of the Malaprabha River basin (Karnataka state, India) were analyzed by the WNN model. The observed time series are decomposed into sub-series using discrete wavelet transform and then appropriate sub-series is used as inputs to the neural network for forecasting hydrological variables. The hybrid model (WNN) was compared with the standard ANN and AR models. The WNN model was able to provide a good fit with the observed data, especially the peak values during the testing period. The benchmark results from WNN model applications showed that the hybrid model produced better results in estimating the hydrograph properties than the latter models (ANN and AR).",
"title": ""
},
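A minimal sketch of the decomposition stage of the hybrid WNN model above: the flow series is split into wavelet sub-series that would feed the neural-network stage. PyWavelets is assumed, and the wavelet ('db4') and decomposition level are illustrative choices:

```python
import numpy as np
import pywt   # PyWavelets, assumed available

flow = np.sin(np.linspace(0, 20, 256)) + 0.3 * np.random.randn(256)   # stand-in daily flow
coeffs = pywt.wavedec(flow, 'db4', level=3)        # [cA3, cD3, cD2, cD1]

def band(coeffs, keep):
    """Reconstruct the signal using only one coefficient band (others zeroed)."""
    kept = [c if i == keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
    return pywt.waverec(kept, 'db4')[:len(flow)]

# Stack the per-band sub-series as input features for the regression stage.
X = np.stack([band(coeffs, i) for i in range(len(coeffs))], axis=1)
print(X.shape)   # (256, 4): these sub-series would feed the neural network
```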
{
"docid": "1ac8e3098f8ae082d2c0de658fc208e1",
"text": "The ability to learn about and efficiently use tools constitutes a desirable property for general purpose humanoid robots, as it allows them to extend their capabilities beyond the limitations of their own body. Yet, it is a topic that has only recently been tackled from the robotics community. Most of the studies published so far make use of tool representations that allow their models to generalize the knowledge among similar tools in a very limited way. Moreover, most studies assume that the tool is always grasped in its common or canonical grasp position, thus not considering the influence of the grasp configuration in the outcome of the actions performed with them. In the current paper we present a method that tackles both issues simultaneously by using an extended set of functional features and a novel representation of the effect of the tool use. Together, they implicitly account for the grasping configuration and allow the iCub to generalize among tools based on their geometry. Moreover, learning happens in a self-supervised manner: First, the robot autonomously discovers the affordance categories of the tools by clustering the effect of their usage. These categories are subsequently used as a teaching signal to associate visually obtained functional features to the expected tool's affordance. In the experiments, we show how this technique can be effectively used to select, given a tool, the best action to achieve a desired effect.",
"title": ""
},
{
"docid": "395362cb22b0416e8eca67ec58907403",
"text": "This paper presents an approach for labeling objects in 3D scenes. We introduce HMP3D, a hierarchical sparse coding technique for learning features from 3D point cloud data. HMP3D classifiers are trained using a synthetic dataset of virtual scenes generated using CAD models from an online database. Our scene labeling system combines features learned from raw RGB-D images and 3D point clouds directly, without any hand-designed features, to assign an object label to every 3D point in the scene. Experiments on the RGB-D Scenes Dataset v.2 demonstrate that the proposed approach can be used to label indoor scenes containing both small tabletop objects and large furniture pieces.",
"title": ""
},
{
"docid": "ba0051fdc72efa78a7104587042cea64",
"text": "Open innovation breaks the original innovation border of organization and emphasizes the use of suppliers, customers, partners, and other internal and external innovative thinking and resources. How to effectively implement and manage open innovation has become a new business problem. Business ecosystem is the network system of value creation and co-evolution achieved by suppliers, users, partner, and other groups with self-organization mode. This study began with the risk analysis of open innovation implementation; then innovation process was embedded into business ecosystem structure; open innovation mode based on business ecosystem was proposed; business ecosystem based on open innovation was built according to influence degree of each innovative object. Study finds that both sides have a mutual promotion relationship, which provides a new analysis perspective for open innovation and business ecosystem; at the same time, it is also conducive to guiding the concrete practice of implementing open innovation.",
"title": ""
},
{
"docid": "8df0970ccf314018874ed3f877ec607e",
"text": "In graph-based simultaneous localization and mapping, the pose graph grows over time as the robot gathers information about the environment. An ever growing pose graph, however, prevents long-term mapping with mobile robots. In this paper, we address the problem of efficient information-theoretic compression of pose graphs. Our approach estimates the mutual information between the laser measurements and the map to discard the measurements that are expected to provide only a small amount of information. Our method subsequently marginalizes out the nodes from the pose graph that correspond to the discarded laser measurements. To maintain a sparse pose graph that allows for efficient map optimization, our approach applies an approximate marginalization technique that is based on Chow-Liu trees. Our contributions allow the robot to effectively restrict the size of the pose graph.Alternatively, the robot is able to maintain a pose graph that does not grow unless the robot explores previously unobserved parts of the environment. Real-world experiments demonstrate that our approach to pose graph compression is well suited for long-term mobile robot mapping.",
"title": ""
},
{
"docid": "d4f953596e49393a4ca65e202eab725c",
"text": "This work integrates deep learning and symbolic programming paradigms into a unified method for deploying applications to a neuromorphic system. The approach removes the need for coordination among disjoint co-processors by embedding both types entirely on a neuromorphic processor. This integration provides a flexible approach for using each technique where it performs best. A single neuromorphic solution can seamlessly deploy neural networks for classifying sensor-driven noisy data obtained from the environment alongside programmed symbolic logic to processes the input from the networks. We present a concrete implementation of the proposed framework using the TrueNorth neuromorphic processor to play blackjack using a pre-programmed optimal strategy algorithm combined with a neural network trained to classify card images as input. Future extensions of this approach will develop a symbolic neuromorphic compiler for automatically creating networks from a symbolic programming language.",
"title": ""
},
{
"docid": "a551b1034e5378a2d6437a8e298490aa",
"text": "The role of increasingly powerful computers in the modeling and simulation domain has resulted in great advancements in the fields of wireless communications, medicine, and space technology to name a few. In The authors of this book start from the fundamental equations that govern low-frequency electromagnetic phenomenon and go through each stage of solving such problems by striking a balance between mathematical rigor and actual implementation in code. The use of MATLAB makes the advanced concepts discussed in the book immediately testable through experiments. The book pays close attention to various applications in an electrical and biological system that are of immediate relevance in today’s world. The use of state-of-the-art human phantom meshes, especially from the Visible Human Project (VHP) of the U.S. National Library of Medicine, makes this text singular in its field. The text is systematic and very well-organized in presenting the various topics on low-frequency electromagnetic. It also should be known that the first part of this text presents the mathematical theory behind low-frequency electromagnetic modeling and follows it with the topic of meshing. The text starts with the basics of meshing and builds it up in an easy-to-read manner with plenty of illustrations.",
"title": ""
},
{
"docid": "b1d00c44127956ab703204490de0acd7",
"text": "The key issue of few-shot learning is learning to generalize. This paper proposes a large margin principle to improve the generalization capacity of metric based methods for few-shot learning. To realize it, we develop a unified framework to learn a more discriminative metric space by augmenting the classification loss function with a large margin distance loss function for training. Extensive experiments on two state-of-the-art few-shot learning methods, graph neural networks and prototypical networks, show that our method can improve the performance of existing models substantially with very little computational overhead, demonstrating the effectiveness of the large margin principle and the potential of our method.",
"title": ""
},
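A small numpy sketch of the large-margin idea above: the usual prototypical (distance-softmax) loss augmented with a hinge term that pushes the true-class distance below every other class distance by a margin. The embeddings, margin and trade-off weight are illustrative assumptions:

```python
import numpy as np

def large_margin_loss(embedding, prototypes, label, margin=1.0):
    d = np.linalg.norm(prototypes - embedding, axis=1) ** 2   # squared distances to prototypes
    ce = -np.log(np.exp(-d[label]) / np.exp(-d).sum())        # standard prototypical loss
    others = np.delete(d, label)
    hinge = np.maximum(0.0, margin + d[label] - others).sum() # large-margin penalty
    return ce + 0.1 * hinge                                   # 0.1: assumed trade-off weight

protos = np.random.randn(5, 16)                 # 5-way episode, 16-d embeddings
query = protos[2] + 0.05 * np.random.randn(16)  # a query near class 2
print(large_margin_loss(query, protos, label=2))
```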
{
"docid": "057a521ce1b852591a44417e788e4541",
"text": "We introduce InfraStructs, material-based tags that embed information inside digitally fabricated objects for imaging in the Terahertz region. Terahertz imaging can safely penetrate many common materials, opening up new possibilities for encoding hidden information as part of the fabrication process. We outline the design, fabrication, imaging, and data processing steps to fabricate information inside physical objects. Prototype tag designs are presented for location encoding, pose estimation, object identification, data storage, and authentication. We provide detailed analysis of the constraints and performance considerations for designing InfraStruct tags. Future application scenarios range from production line inventory, to customized game accessories, to mobile robotics.",
"title": ""
},
{
"docid": "14276adf4f5b3538f95cfd10902825ef",
"text": "Subband adaptive filtering (SAF) techniques play a prominent role in designing active noise control (ANC) systems. They reduce the computational complexity of ANC algorithms, particularly, when the acoustic noise is a broadband signal and the system models have long impulse responses. In the commonly used uniform-discrete Fourier transform (DFT)-modulated (UDFTM) filter banks, increasing the number of subbands decreases the computational burden but can introduce excessive distortion, degrading performance of the ANC system. In this paper, we propose a new UDFTM-based adaptive subband filtering method that alleviates the degrading effects of the delay and side-lobe distortion introduced by the prototype filter on the system performance. The delay in filter bank is reduced by prototype filter design and the side-lobe distortion is compensated for by oversampling and appropriate stacking of subband weights. Experimental results show the improvement of performance and computational complexity of the proposed method in comparison to two commonly used subband and block adaptive filtering algorithms.",
"title": ""
},
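A plain fullband normalized-LMS sketch of the adaptive-filter building block that the passage's subband ANC structure runs independently in each subband (the analysis/synthesis filter bank itself is omitted):

```python
import numpy as np

def nlms(x, d, taps=32, mu=0.5, eps=1e-6):
    """x: reference signal, d: desired signal; returns the error signal."""
    w = np.zeros(taps)
    e = np.zeros(len(x))
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]          # x[n], x[n-1], ... most recent first
        y = w @ u
        e[n] = d[n] - y
        w += mu * e[n] * u / (u @ u + eps)       # normalized LMS update
    return e

rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
h = rng.standard_normal(16)                      # unknown path to identify
d = np.convolve(x, h)[:len(x)]
e = nlms(x, d)
print(np.mean(e[-500:] ** 2))                    # residual error approaches zero
```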
{
"docid": "544a5a95a169b9ac47960780ac09de80",
"text": "Monte Carlo Tree Search methods have led to huge progress in Computer Go. Still, program performance is uneven most current Go programs are much stronger in some aspects of the game, such as local fighting and positional evaluation, than in others. Well known weaknesses of many programs include the handling of several simultaneous fights, including the “two safe groups” problem, and dealing with coexistence in seki. Starting with a review of MCTS techniques, several conjectures regarding the behavior of MCTS-based Go programs in specific types of Go situations are made. Then, an extensive empirical study of ten leading Go programs investigates their performance of two specifically designed test sets containing “two safe group” and seki situations. The results give a good indication of the state of the art in computer Go as of 2012/2013. They show that while a few of the very top programs can apparently solve most of these evaluation problems in their playouts already, these problems are difficult to solve by global search. ∗shihchie@ualberta.ca †mmueller@ualberta.ca",
"title": ""
},
{
"docid": "34312aa89ab4857fec9e640e652db766",
"text": "The 3 major self-evaluation motives were compared: self-assessment (people pursue accurate selfknowledge), self-enhancement (people pursue favorable self-knowledge), and self-verification (people pursue highly certain self-knowledge). Ss considered the possession of personality traits that were either positive or negative and either central or peripheral by asking themselves questions that varied in diagnosticity (the extent to which the questions could discriminate between a trait and its alternative) and in confirmation value (the extent to which the questions confirmed possession of a trait). Ss selected higher diagnosticity questions when evaluating themselves on central positive rather than central negative traits and confirmed possession of their central positive rather than central negative traits. The self-enhancement motive emerged as the most powerful determinant of the self-evaluation process, followed by the self-verification motive.",
"title": ""
},
{
"docid": "d4f575851c5912cdac01efac514e1d56",
"text": "On line analytical processing (OLAP) is an essential element of decision-support systems. OLAP tools provide insights and understanding needed for improved decision making. However, the answers to OLAP queries can be biased and lead to perplexing and incorrect insights. In this paper, we propose, a system to detect, explain, and to resolve bias in decision-support queries. We give a simple definition of a biased query, which performs a set of independence tests on the data to detect bias. We propose a novel technique that gives explanations for bias, thus assisting an analyst in understanding what goes on. Additionally, we develop an automated method for rewriting a biased query into an unbiased query, which shows what the analyst intended to examine. In a thorough evaluation on several real datasets we show both the quality and the performance of our techniques, including the completely automatic discovery of the revolutionary insights from a famous 1973 discrimination case.",
"title": ""
}
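A sketch of the kind of independence test such a bias check can perform, using scipy's chi-squared contingency test on a toy grouping-versus-outcome table (the column semantics and numbers are invented for illustration):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Toy contingency table: rows = department, cols = admitted / rejected.
table = np.array([[120, 80],
                  [ 30, 170]])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p_value:.3g}")
# A small p-value means the grouping attribute and the outcome are dependent,
# so an aggregate query that ignores the grouping could be flagged as biased.
```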
] |
scidocsrr
|
796c7347babb6fdc4acd5768ac9adbef
|
PixColor: Pixel Recursive Colorization
|
[
{
"docid": "1dac1fc798794517d8db162a9ac80007",
"text": "We describe an automated method for image colorization that learns to colorize from examples. Our method exploits a LEARCH framework to train a quadratic objective function in the chromaticity maps, comparable to a Gaussian random field. The coefficients of the objective function are conditioned on image features, using a random forest. The objective function admits correlations on long spatial scales, and can control spatial error in the colorization of the image. Images are then colorized by minimizing this objective function. We demonstrate that our method strongly outperforms a natural baseline on large-scale experiments with images of real scenes using a demanding loss function. We demonstrate that learning a model that is conditioned on scene produces improved results. We show how to incorporate a desired color histogram into the objective function, and that doing so can lead to further improvements in results.",
"title": ""
},
{
"docid": "6f22283e5142035d6f6f9d5e06ab1cd2",
"text": "We present a novel technique to automatically colorize grayscale images that combines both global priors and local image features. Based on Convolutional Neural Networks, our deep network features a fusion layer that allows us to elegantly merge local information dependent on small image patches with global priors computed using the entire image. The entire framework, including the global and local priors as well as the colorization model, is trained in an end-to-end fashion. Furthermore, our architecture can process images of any resolution, unlike most existing approaches based on CNN. We leverage an existing large-scale scene classification database to train our model, exploiting the class labels of the dataset to more efficiently and discriminatively learn the global priors. We validate our approach with a user study and compare against the state of the art, where we show significant improvements. Furthermore, we demonstrate our method extensively on many different types of images, including black-and-white photography from over a hundred years ago, and show realistic colorizations.",
"title": ""
},
{
"docid": "261bbc725df289ee7ccee0ac15defdc1",
"text": "We present a novel color-by-example technique which combines image segmentation, patch-based sampling and probabilistic reasoning. This method is able to automate colorization when new color information is applied on the already designed black-and-white cartoon. Our technique is especially suitable for cartoons digitized from classical celluloid films, which were originally produced by a paper or cel based method. In this case, the background is usually a static image and only the dynamic foreground needs to be colored frame-by-frame. We also assume that objects in the foreground layer consist of several well visible outlines which will emphasize the shape of homogeneous regions.",
"title": ""
}
] |
[
{
"docid": "5528b738695f6ff0ac17f07178a7e602",
"text": "Multiple genetic pathways act in response to developmental cues and environmental signals to promote the floral transition, by regulating several floral pathway integrators. These include FLOWERING LOCUS T (FT) and SUPPRESSOR OF OVEREXPRESSION OF CONSTANS 1 (SOC1). We show that the flowering repressor SHORT VEGETATIVE PHASE (SVP) is controlled by the autonomous, thermosensory, and gibberellin pathways, and directly represses SOC1 transcription in the shoot apex and leaf. Moreover, FT expression in the leaf is also modulated by SVP. SVP protein associates with the promoter regions of SOC1 and FT, where another potent repressor FLOWERING LOCUS C (FLC) binds. SVP consistently interacts with FLC in vivo during vegetative growth and their function is mutually dependent. Our findings suggest that SVP is another central regulator of the flowering regulatory network, and that the interaction between SVP and FLC mediated by various flowering genetic pathways governs the integration of flowering signals.",
"title": ""
},
{
"docid": "e226452a288c3067ef8ee613f0b64090",
"text": "Deep neural networks with discrete latent variables offer the promise of better symbolic reasoning, and learning abstractions that are more useful to new tasks. There has been a surge in interest in discrete latent variable models, however, despite several recent improvements, the training of discrete latent variable models has remained challenging and their performance has mostly failed to match their continuous counterparts. Recent work on vector quantized autoencoders (VQVAE) has made substantial progress in this direction, with its perplexity almost matching that of a VAE on datasets such as CIFAR-10. In this work, we investigate an alternate training technique for VQ-VAE, inspired by its connection to the Expectation Maximization (EM) algorithm. Training the discrete bottleneck with EM helps us achieve better image generation results on CIFAR-10, and together with knowledge distillation, allows us to develop a non-autoregressive machine translation model whose accuracy almost matches a strong greedy autoregressive baseline Transformer, while being 3.3 times faster at inference.",
"title": ""
},
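A numpy sketch of the EM-flavoured vector-quantization update the passage builds on: an E-step assigning encoder outputs to their nearest codewords and a k-means-style M-step recentring each codeword. The data and codebook sizes are illustrative:

```python
import numpy as np

def vq_em_step(z, codebook):
    """z: (n, d) encoder outputs; codebook: (K, d). Returns assignments and updated codebook."""
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)                         # E-step: nearest codeword
    new_codebook = codebook.copy()
    for k in range(codebook.shape[0]):
        members = z[assign == k]
        if len(members):
            new_codebook[k] = members.mean(axis=0)     # M-step: recentre codeword
    return assign, new_codebook

rng = np.random.default_rng(0)
z = rng.standard_normal((512, 8))
codebook = rng.standard_normal((16, 8))
for _ in range(5):
    assign, codebook = vq_em_step(z, codebook)
print(np.bincount(assign, minlength=16))   # usage counts of the 16 codewords
```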
{
"docid": "24c744337d831e541f347bbdf9b6b48a",
"text": "Modelling and animation of crawler UGV's caterpillars is a complicated task, which has not been completely resolved in ROS/Gazebo simulators. In this paper, we proposed an approximation of track-terrain interaction of a crawler UGV, perform modelling and simulation of Russian crawler robot \"Engineer\" within ROS/Gazebo and visualize its motion in ROS/RViz software. Finally, we test the proposed model in heterogeneous robot group navigation scenario within uncertain Gazebo environment.",
"title": ""
},
{
"docid": "04e4c1b80bcf1a93cafefa73563ea4d3",
"text": "The last decade has produced an explosion in neuroscience research examining young children's early processing of language. Noninvasive, safe functional brain measurements have now been proven feasible for use with children starting at birth. The phonetic level of language is especially accessible to experimental studies that document the innate state and the effect of learning on the brain. The neural signatures of learning at the phonetic level can be documented at a remarkably early point in development. Continuity in linguistic development from infants' earliest brain responses to phonetic stimuli is reflected in their language and prereading abilities in the second, third, and fifth year of life, a finding with theoretical and clinical impact. There is evidence that early mastery of the phonetic units of language requires learning in a social context. Neuroscience on early language learning is beginning to reveal the multiple brain systems that underlie the human language faculty.",
"title": ""
},
{
"docid": "693ad5651306e883a7065b5f79f2cc1e",
"text": "This paper presents a general framework for agglomerative hierarchical clustering based on graphs. Different hierarchical agglomerative clustering algorithms can be obtained from this framework, by specifying an inter-cluster similarity measure, a subgraph of the 13-similarity graph, and a cover routine. We also describe two methods obtained from this framework called hierarchical compact algorithm and hierarchical star algorithm. These algorithms have been evaluated using standard document collections. The experimental results show that our methods are faster and obtain smaller hierarchies than traditional hierarchical algorithms while achieving a similar clustering quality",
"title": ""
},
{
"docid": "fdd790d33300c19cb0c340903e503b02",
"text": "We present a simple method for evergrowing extraction of predicate paraphrases from news headlines in Twitter. Analysis of the output of ten weeks of collection shows that the accuracy of paraphrases with different support levels is estimated between 60-86%. We also demonstrate that our resource is to a large extent complementary to existing resources, providing many novel paraphrases. Our resource is publicly available, continuously expanding based on daily news.",
"title": ""
},
{
"docid": "f82a9c15e88ba24dbf8f5d4678b8dffd",
"text": "Numerous existing object segmentation frameworks commonly utilize the object bounding box as a prior. In this paper, we address semantic segmentation assuming that object bounding boxes are provided by object detectors, but no training data with annotated segments are available. Based on a set of segment hypotheses, we introduce a simple voting scheme to estimate shape guidance for each bounding box. The derived shape guidance is used in the subsequent graph-cut-based figure-ground segmentation. The final segmentation result is obtained by merging the segmentation results in the bounding boxes. We conduct an extensive analysis of the effect of object bounding box accuracy. Comprehensive experiments on both the challenging PASCAL VOC object segmentation dataset and GrabCut-50 image segmentation dataset show that the proposed approach achieves competitive results compared to previous detection or bounding box prior based methods, as well as other state-of-the-art semantic segmentation methods.",
"title": ""
},
{
"docid": "8de09be7888299dc5dd30bbeb5578c35",
"text": "Scene text detection is challenging as the input may have different orientations, sizes, font styles, lighting conditions, perspective distortions and languages. This paper addresses the problem by designing a Rotational Region CNN (R2CNN). R2CNN includes a Text Region Proposal Network (Text-RPN) to estimate approximate text regions and a multitask refinement network to get the precise inclined box. Our work has the following features. First, we use a novel multi-task regression method to support arbitrarily-oriented scene text detection. Second, we introduce multiple ROIPoolings to address the scene text detection problem for the first time. Third, we use an inclined Non-Maximum Suppression (NMS) to post-process the detection candidates. Experiments show that our method outperforms the state-of-the-art on standard benchmarks: ICDAR 2013, ICDAR 2015, COCO-Text and MSRA-TD500.",
"title": ""
},
{
"docid": "c9d651c41b789263a74678de82082f1d",
"text": "In this paper we address the problem of robustness of speech recognition systems in noisy environments. The goal is to estimate the parameters of a HMM that is matched to a noisy environment, given a HMM trained with clean speech and knowledge of the acoustical environment. We propose a method based on truncated vector Taylor series that approximates the performance of a system trained with that corrupted speech. We also provide insight on the approximations used in the model of the environment and compare them with the lognormal approximation in PMC.",
"title": ""
},
{
"docid": "64efd590a51fc3cab97c9b4b17ba9b40",
"text": "The problem of detecting bots, automated social media accounts governed by software but disguising as human users, has strong implications. For example, bots have been used to sway political elections by distorting online discourse, to manipulate the stock market, or to push anti-vaccine conspiracy theories that caused health epidemics. Most techniques proposed to date detect bots at the account level, by processing large amount of social media posts, and leveraging information from network structure, temporal dynamics, sentiment analysis, etc. In this paper, we propose a deep neural network based on contextual long short-term memory (LSTM) architecture that exploits both content and metadata to detect bots at the tweet level: contextual features are extracted from user metadata and fed as auxiliary input to LSTM deep nets processing the tweet text. Another contribution that we make is proposing a technique based on synthetic minority oversampling to generate a large labeled dataset, suitable for deep nets training, from a minimal amount of labeled data (roughly 3,000 examples of sophisticated Twitter bots). We demonstrate that, from just one single tweet, our architecture can achieve high classification accuracy (AUC > 96%) in separating bots from humans. We apply the same architecture to account-level bot detection, achieving nearly perfect classification accuracy (AUC > 99%). Our system outperforms previous state of the art while leveraging a small and interpretable set of features yet requiring minimal training data.",
"title": ""
},
{
"docid": "bd3620816c83fae9b4a5c871927f2b73",
"text": "Quantifying behavior is crucial for many applications in neuroscience. Videography provides easy methods for the observation and recording of animal behavior in diverse settings, yet extracting particular aspects of a behavior for further analysis can be highly time consuming. In motor control studies, humans or other animals are often marked with reflective markers to assist with computer-based tracking, but markers are intrusive, and the number and location of the markers must be determined a priori. Here we present an efficient method for markerless pose estimation based on transfer learning with deep neural networks that achieves excellent results with minimal training data. We demonstrate the versatility of this framework by tracking various body parts in multiple species across a broad collection of behaviors. Remarkably, even when only a small number of frames are labeled (~200), the algorithm achieves excellent tracking performance on test frames that is comparable to human accuracy. Using a deep learning approach to track user-defined body parts during various behaviors across multiple species, the authors show that their toolbox, called DeepLabCut, can achieve human accuracy with only a few hundred frames of training data.",
"title": ""
},
{
"docid": "be1c50de2963341423960ba0f59fbc1f",
"text": "Deep neural networks have been shown to be very successful at learning feature hierarchies in supervised learning tasks. Generative models, on the other hand, have benefited less from hierarchical models with multiple layers of latent variables. In this paper, we prove that hierarchical latent variable models do not take advantage of the hierarchical structure when trained with some existing variational methods, and provide some limitations on the kind of features existing models can learn. Finally we propose an alternative architecture that does not suffer from these limitations. Our model is able to learn highly interpretable and disentangled hierarchical features on several natural image datasets with no taskspecific regularization.",
"title": ""
},
{
"docid": "43f9cd44dee709339fe5b11eb73b15b6",
"text": "Mutual interference of radar systems has been identified as one of the major challenges for future automotive radar systems. In this work the interference of frequency (FMCW) and phase modulated continuous wave (PMCW) systems is investigated by means of simulations. All twofold combinations of the aforementioned systems are considered. The interference scenario follows a typical use-case from the well-known MOre Safety for All by Radar Interference Mitigation (MOSARIM) study. The investigated radar systems operate with similar system parameters to guarantee a certain comparability, but with different waveform durations, and chirps with different slopes and different phase code sequences, respectively. Since the effects in perfect synchrony are well understood, we focus on the cases where both systems exhibit a certain asynchrony. It is shown that the energy received from interferers can cluster in certain Doppler bins in the range-Doppler plane when systems exhibit a slight asynchrony.",
"title": ""
},
{
"docid": "10cc52c08da8118a220e436bc37e8beb",
"text": "The most common approach in text mining classification tasks is to rely on features like words, part-of-speech tags, stems, or some other high-level linguistic features. Unlike the common approach, we present a method that uses only character p-grams (also known as n-grams) as features for the Arabic Dialect Identification (ADI) Closed Shared Task of the DSL 2016 Challenge. The proposed approach combines several string kernels using multiple kernel learning. In the learning stage, we try both Kernel Discriminant Analysis (KDA) and Kernel Ridge Regression (KRR), and we choose KDA as it gives better results in a 10-fold cross-validation carried out on the training set. Our approach is shallow and simple, but the empirical results obtained in the ADI Shared Task prove that it achieves very good results. Indeed, we ranked on the second place with an accuracy of 50.91% and a weighted F1 score of 51.31%. We also present improved results in this paper, which we obtained after the competition ended. Simply by adding more regularization into our model to make it more suitable for test data that comes from a different distribution than training data, we obtain an accuracy of 51.82% and a weighted F1 score of 52.18%. Furthermore, the proposed approach has an important advantage in that it is language independent and linguistic theory neutral, as it does not require any NLP tools.",
"title": ""
},
{
"docid": "c32a719ac619e7a48adf12fd6a534e7c",
"text": "Using smart devices and apps in clinical trials has great potential: this versatile technology is ubiquitously available, broadly accepted, user friendly and it offers integrated sensors for primary data acquisition and data sending features to allow for a hassle free communication with the study sites. This new approach promises to increase efficiency and to lower costs. This article deals with the ethical and legal demands of using this technology in clinical trials with respect to regulation, informed consent, data protection and liability.",
"title": ""
},
{
"docid": "727a53dad95300ee9749c13858796077",
"text": "Device to device (D2D) communication underlaying LTE can be used to distribute traffic loads of eNBs. However, a conventional D2D link is controlled by an eNB, and it still remains burdens to the eNB. We propose a completely distributed power allocation method for D2D communication underlaying LTE using deep learning. In the proposed scheme, a D2D transmitter can decide the transmit power without any help from other nodes, such as an eNB or another D2D device. Also, the power set, which is delivered from each D2D node independently, can optimize the overall cell throughput. We suggest a distirbuted deep learning architecture in which the devices are trained as a group, but operate independently. The deep learning can optimize total cell throughput while keeping constraints such as interference to eNB. The proposed scheme, which is implemented model using Tensorflow, can provide same throughput with the conventional method even it operates completely on distributed manner.",
"title": ""
},
{
"docid": "73fac78407b0081885dfa0168d7cbac0",
"text": "This article reviews controlled research on treatments for childhood externalizing behavior disorders. The review is organized around 2 subsets of such disorders: disruptive behavior disorders (i.e., conduct disorder, oppositional defiant disorder) and attention-deficit/hyperactivity disorder (ADHD). The review was based on a literature review of nonresidential treatments for youths ages 6-12. The pool of studies for this age group was limited, but results suggest positive outcomes for a variety of interventions (particularly parent training and community-based interventions for disruptive behavior disorders and medication for ADHD). The review also highlights the need for additional research examining effectiveness of treatments for this age range and strategies to enhance the implementation of effective practices.",
"title": ""
},
{
"docid": "b6b5afb72393e89c211bac283e39d8a3",
"text": "In order to promote the use of mushrooms as source of nutrients and nutraceuticals, several experiments were performed in wild and commercial species. The analysis of nutrients included determination of proteins, fats, ash, and carbohydrates, particularly sugars by HPLC-RI. The analysis of nutraceuticals included determination of fatty acids by GC-FID, and other phytochemicals such as tocopherols, by HPLC-fluorescence, and phenolics, flavonoids, carotenoids and ascorbic acid, by spectrophotometer techniques. The antimicrobial properties of the mushrooms were also screened against fungi, Gram positive and Gram negative bacteria. The wild mushroom species proved to be less energetic than the commercial sp., containing higher contents of protein and lower fat concentrations. In general, commercial species seem to have higher concentrations of sugars, while wild sp. contained lower values of MUFA but also higher contents of PUFA. alpha-Tocopherol was detected in higher amounts in the wild species, while gamma-tocopherol was not found in these species. Wild mushrooms revealed a higher content of phenols but a lower content of ascorbic acid, than commercial mushrooms. There were no differences between the antimicrobial properties of wild and commercial species. The ongoing research will lead to a new generation of foods, and will certainly promote their nutritional and medicinal use.",
"title": ""
},
{
"docid": "66fce3b6c516a4fa4281d19d6055b338",
"text": "This paper presents the mechatronic design and experimental validation of a novel powered knee-ankle orthosis for testing torque-driven rehabilitation control strategies. The modular actuator of the orthosis is designed with a torque dense motor and a custom low-ratio transmission (24:1) to provide mechanical transparency to the user, allowing them to actively contribute to their joint kinematics during gait training. The 4.88 kg orthosis utilizes frameless components and light materials, such as aluminum alloy and carbon fiber, to reduce its mass. A human subject experiment demonstrates accurate torque control with high output torque during stance and low backdrive torque during swing at fast walking speeds. This work shows that backdrivability, precise torque control, high torque output, and light weight can be achieved in a powered orthosis without the high cost and complexity of variable transmissions, clutches, and/or series elastic components.",
"title": ""
}
] |
scidocsrr
|
3cbad897852a4f69f4b5b1cb25a797df
|
Using Neo4j graph database in social network analysis
|
[
{
"docid": "b9f7c3cbf856ff9a64d7286c883e2640",
"text": "Graph database models can be defined as those in which data structures for the schema and instances are modeled as graphs or generalizations of them, and data manipulation is expressed by graph-oriented operations and type constructors. These models took off in the eighties and early nineties alongside object-oriented models. Their influence gradually died out with the emergence of other database models, in particular geographical, spatial, semistructured, and XML. Recently, the need to manage information with graph-like nature has reestablished the relevance of this area. The main objective of this survey is to present the work that has been conducted in the area of graph database modeling, concentrating on data structures, query languages, and integrity constraints.",
"title": ""
}
] |
[
{
"docid": "9d5d667c6d621bd90a688c993065f5df",
"text": "Creative individuals increasingly rely on online crowdfunding platforms to crowdsource funding for new ventures. For novice crowdfunding project creators, however, there are few resources to turn to for assistance in the planning of crowdfunding projects. We are building a tool for novice project creators to get feedback on their project designs. One component of this tool is a comparison to existing projects. As such, we have applied a variety of machine learning classifiers to learn the concept of a successful online crowdfunding project at the time of project launch. Currently our classifier can predict with roughly 68% accuracy, whether a project will be successful or not. The classification results will eventually power a prediction segment of the proposed feedback tool. Future work involves turning the results of the machine learning algorithms into human-readable content and integrating this content into the feedback tool.",
"title": ""
},
{
"docid": "6f4e5448f956017c39c1727e0eb5de7b",
"text": "Recently, community search over graphs has attracted significant attention and many algorithms have been developed for finding dense subgraphs from large graphs that contain given query nodes. In applications such as analysis of protein protein interaction (PPI) networks, citation graphs, and collaboration networks, nodes tend to have attributes. Unfortunately, most previously developed community search algorithms ignore these attributes and result in communities with poor cohesion w.r.t. their node attributes. In this paper, we study the problem of attribute-driven community search, that is, given an undirected graph G where nodes are associated with attributes, and an input query Q consisting of nodes Vq and attributes Wq , find the communities containing Vq , in which most community members are densely inter-connected and have similar attributes. We formulate our problem of finding attributed truss communities (ATC), as finding all connected and close k-truss subgraphs containing Vq, that are locally maximal and have the largest attribute relevance score among such subgraphs. We design a novel attribute relevance score function and establish its desirable properties. The problem is shown to be NP-hard. However, we develop an efficient greedy algorithmic framework, which finds a maximal k-truss containing Vq, and then iteratively removes the nodes with the least popular attributes and shrinks the graph so as to satisfy community constraints. We also build an elegant index to maintain the known k-truss structure and attribute information, and propose efficient query processing algorithms. Extensive experiments on large real-world networks with ground-truth communities shows the efficiency and effectiveness of our proposed methods.",
"title": ""
},
{
"docid": "b9e6d6d2625a713e8fa7491bc1b24223",
"text": "Percutaneous radiofrequency ablation (RFA) is becoming a standard minimally invasive clinical procedure for the treatment of liver tumors. However, planning the applicator placement such that the malignant tissue is completely destroyed, is a demanding task that requires considerable experience. In this work, we present a fast GPU-based real-time approximation of the ablation zone incorporating the cooling effect of liver vessels. Weighted distance fields of varying RF applicator types are derived from complex numerical simulations to allow a fast estimation of the ablation zone. Furthermore, the heat-sink effect of the cooling blood flow close to the applicator's electrode is estimated by means of a preprocessed thermal equilibrium representation of the liver parenchyma and blood vessels. Utilizing the graphics card, the weighted distance field incorporating the cooling blood flow is calculated using a modular shader framework, which facilitates the real-time visualization of the ablation zone in projected slice views and in volume rendering. The proposed methods are integrated in our software assistant prototype for planning RFA therapy. The software allows the physician to interactively place virtual RF applicator models. The real-time visualization of the corresponding approximated ablation zone facilitates interactive evaluation of the tumor coverage in order to optimize the applicator's placement such that all cancer cells are destroyed by the ablation.",
"title": ""
},
{
"docid": "874876e2ed9e4a2ba044cf62d408da55",
"text": "It is widely believed that refactoring improves software quality and programmer productivity by making it easier to maintain and understand software systems. However, the role of refactorings has not been systematically investigated using fine-grained evolution history. We quantitatively and qualitatively studied API-level refactorings and bug fixes in three large open source projects, totaling 26523 revisions of evolution.\n The study found several surprising results: One, there is an increase in the number of bug fixes after API-level refactorings. Two, the time taken to fix bugs is shorter after API-level refactorings than before. Three, a large number of refactoring revisions include bug fixes at the same time or are related to later bug fix revisions. Four, API-level refactorings occur more frequently before than after major software releases. These results call for re-thinking refactoring's true benefits. Furthermore, frequent floss refactoring mistakes observed in this study call for new software engineering tools to support safe application of refactoring and behavior modifying edits together.",
"title": ""
},
{
"docid": "c8bfa845f5eaaeeab5bcf7bdc601bfb5",
"text": "Completely labeled pathology datasets are often challenging and time-consuming to obtain. Semi-supervised learning (SSL) methods are able to learn from fewer labeled data points with the help of a large number of unlabeled data points. In this paper, we investigated the possibility of using clustering analysis to identify the underlying structure of the data space for SSL. A cluster-then-label method was proposed to identify high-density regions in the data space which were then used to help a supervised SVM in finding the decision boundary. We have compared our method with other supervised and semi-supervised state-of-the-art techniques using two different classification tasks applied to breast pathology datasets. We found that compared with other state-of-the-art supervised and semi-supervised methods, our SSL method is able to improve classification performance when a limited number of labeled data instances are made available. We also showed that it is important to examine the underlying distribution of the data space before applying SSL techniques to ensure semi-supervised learning assumptions are not violated by the data.",
"title": ""
},
{
"docid": "d3783bcc47ed84da2c54f5f536450a0c",
"text": "In this paper, we present a new framework for large scale online kernel learning, making kernel methods efficient and scalable for large-scale online learning applications. Unlike the regular budget online kernel learning scheme that usually uses some budget maintenance strategies to bound the number of support vectors, our framework explores a completely different approach of kernel functional approximation techniques to make the subsequent online learning task efficient and scalable. Specifically, we present two different online kernel machine learning algorithms: (i) Fourier Online Gradient Descent (FOGD) algorithm that applies the random Fourier features for approximating kernel functions; and (ii) Nyström Online Gradient Descent (NOGD) algorithm that applies the Nyström method to approximate large kernel matrices. We explore these two approaches to tackle three online learning tasks: binary classification, multi-class classification, and regression. The encouraging results of our experiments on large-scale datasets validate the effectiveness and efficiency of the proposed algorithms, making them potentially more practical than the family of existing budget online kernel learning approaches.",
"title": ""
},
{
"docid": "61f5ce7063a35192c7d736a648561e3e",
"text": "BoF statistic-based local space-time features action representation is very popular for human action recognition due to its simplicity. However, the problem of large quantization error and weak semantic representation decrease traditional BoF model’s discriminant ability when applied to human action recognition in realistic scenes. To deal with the problems, we investigate the generalization ability of BoF framework for action representation as well as more effective feature encoding about high-level semantics. Towards this end, we present two-layer hierarchical codebook learning framework for human action classification in realistic scenes. In the first-layer action modelling, superpixel GMM model is developed to filter out noise features in STIP extraction resulted from cluttered background, and class-specific learning strategy is employed on the refined STIP feature space to construct compact and descriptive in-class action codebooks. In the second-layer of action representation, LDA-Km learning algorithm is proposed for feature dimensionality reduction and for acquiring more discriminative inter-class action codebook for classification. We take advantage of hierarchical framework’s representational power and the efficiency of BoF model to boost recognition performance in realistic scenes. In experiments, the performance of our proposed method is evaluated on four benchmark datasets: KTH, YouTube (UCF11), UCF Sports and Hollywood2. Experimental results show that the proposed approach achieves improved recognition accuracy than the baseline method. Comparisons with state-of-the-art works demonstrates the competitive ability both in recognition performance and time complexity.",
"title": ""
},
{
"docid": "b0903440893a25a91c575fd96b5524fa",
"text": "With the intention of extending the perception and action of surgical staff inside the operating room, the medical community has expressed a growing interest towards context-aware systems. Requiring an accurate identification of the surgical workflow, such systems make use of data from a diverse set of available sensors. In this paper, we propose a fully data-driven and real-time method for segmentation and recognition of surgical phases using a combination of video data and instrument usage signals, exploiting no prior knowledge. We also introduce new validation metrics for assessment of workflow detection. The segmentation and recognition are based on a four-stage process. Firstly, during the learning time, a Surgical Process Model is automatically constructed from data annotations to guide the following process. Secondly, data samples are described using a combination of low-level visual cues and instrument information. Then, in the third stage, these descriptions are employed to train a set of AdaBoost classifiers capable of distinguishing one surgical phase from others. Finally, AdaBoost responses are used as input to a Hidden semi-Markov Model in order to obtain a final decision. On the MICCAI EndoVis challenge laparoscopic dataset we achieved a precision and a recall of 91 % in classification of 7 phases. Compared to the analysis based on one data type only, a combination of visual features and instrument signals allows better segmentation, reduction of the detection delay and discovery of the correct phase order.",
"title": ""
},
{
"docid": "8cd666c0796c0fe764bc8de0d7a20fa3",
"text": "$$\\mathcal{Q}$$ -learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states. This paper presents and proves in detail a convergence theorem for $$\\mathcal{Q}$$ -learning based on that outlined in Watkins (1989). We show that $$\\mathcal{Q}$$ -learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many $$\\mathcal{Q}$$ values can be changed each iteration, rather than just one.",
"title": ""
},
{
"docid": "c65833b67b878e65ce617fc37c10394b",
"text": "A high performance texture compression technique is introduced, which exploits the DXT5 format available on today's graphics cards. The compression technique provides a very good middle ground between DXT1 compression and no compression. Using the DXT5 format, textures consume twice the amount of memory of DXT1-compressed textures (a 4:1 compression ratio instead of 8:1). In return, however, the technique provides a significant gain in quality, and for most images, there is almost no noticeable loss in quality. In particular there is a consistent gain in RGB-PSNR of 6 dB or more for the Kodak Lossless True Color Image Suite. Furthermore, the technique allows for both real-time texture decompression during rasterization on current graphics cards, and high quality realtime compression on the CPU and GPU.",
"title": ""
},
{
"docid": "eee0bc6ee06dce38efbc89659771f720",
"text": "In a data center, an IO from an application to distributed storage traverses not only the network, but also several software stages with diverse functionality. This set of ordered stages is known as the storage or IO stack. Stages include caches, hypervisors, IO schedulers, file systems, and device drivers. Indeed, in a typical data center, the number of these stages is often larger than the number of network hops to the destination. Yet, while packet routing is fundamental to networks, no notion of IO routing exists on the storage stack. The path of an IO to an endpoint is predetermined and hard-coded. This forces IO with different needs (e.g., requiring different caching or replica selection) to flow through a one-size-fits-all IO stack structure, resulting in an ossified IO stack. This paper proposes sRoute, an architecture that provides a routing abstraction for the storage stack. sRoute comprises a centralized control plane and “sSwitches” on the data plane. The control plane sets the forwarding rules in each sSwitch to route IO requests at runtime based on application-specific policies. A key strength of our architecture is that it works with unmodified applications and VMs. This paper shows significant benefits of customized IO routing to data center tenants (e.g., a factor of ten for tail IO latency, more than 60% better throughput for a customized replication protocol and a factor of two in throughput for customized caching).",
"title": ""
},
{
"docid": "98b3f17de080aed8bce62e1c00f66605",
"text": "While strong progress has been made in image captioning recently, machine and human captions are still quite distinct. This is primarily due to the deficiencies in the generated word distribution, vocabulary size, and strong bias in the generators towards frequent captions. Furthermore, humans – rightfully so – generate multiple, diverse captions, due to the inherent ambiguity in the captioning task which is not explicitly considered in today's systems. To address these challenges, we change the training objective of the caption generator from reproducing ground-truth captions to generating a set of captions that is indistinguishable from human written captions. Instead of handcrafting such a learning target, we employ adversarial training in combination with an approximate Gumbel sampler to implicitly match the generated distribution to the human one. While our method achieves comparable performance to the state-of-the-art in terms of the correctness of the captions, we generate a set of diverse captions that are significantly less biased and better match the global uni-, bi- and tri-gram distributions of the human captions.",
"title": ""
},
{
"docid": "c945ef3a4e223a70212413b4948fcbc0",
"text": "Text generation is a fundamental building block in natural language processing tasks. Existing sequential models performs autoregression directly over the text sequence and have difficulty generating long sentences of complex structures. This paper advocates a simple approach that treats sentence generation as a tree-generation task. By explicitly modelling syntactic structures in a constituent syntactic tree and performing topdown, breadth-first tree generation, our model fixes dependencies appropriately and performs implicit global planning. This is in contrast to transition-based depth-first generation process, which has difficulty dealing with incomplete texts when parsing and also does not incorporate future contexts in planning. Our preliminary results on two generation tasks and one parsing task demonstrate that this is an effective strategy.",
"title": ""
},
{
"docid": "fb5f6eeff54e54034970d6bcaaacb6ec",
"text": "Despite superior training outcomes, adaptive optimization methods such as Adam, Adagrad or RMSprop have been found to generalize poorly compared to Stochastic gradient descent (SGD). These methods tend to perform well in the initial portion of training but are outperformed by SGD at later stages of training. We investigate a hybrid strategy that begins training with an adaptive method and switches to SGD when appropriate. Concretely, we propose SWATS, a simple strategy which Switches from Adam to SGD when a triggering condition is satisfied. The condition we propose relates to the projection of Adam steps on the gradient subspace. By design, the monitoring process for this condition adds very little overhead and does not increase the number of hyperparameters in the optimizer. We report experiments on several standard benchmarks such as: ResNet, SENet, DenseNet and PyramidNet for the CIFAR-10 and CIFAR-100 data sets, ResNet on the tiny-ImageNet data set and language modeling with recurrent networks on the PTB and WT2 data sets. The results show that our strategy is capable of closing the generalization gap between SGD and Adam on a majority of the tasks.",
"title": ""
},
{
"docid": "5e946f2a15b5d9c663d85cd12bc3d9fc",
"text": "Individual differences in young children's understanding of others' feelings and in their ability to explain human action in terms of beliefs, and the earlier correlates of these differences, were studied with 50 children observed at home with mother and sibling at 33 months, then tested at 40 months on affective-labeling, perspective-taking, and false-belief tasks. Individual differences in social understanding were marked; a third of the children offered explanations of actions in terms of false belief, though few predicted actions on the basis of beliefs. These differences were associated with participation in family discourse about feelings and causality 7 months earlier, verbal fluency of mother and child, and cooperative interaction with the sibling. Differences in understanding feelings were also associated with the discourse measures, the quality of mother-sibling interaction, SES, and gender, with girls more successful than boys. The results support the view that discourse about the social world may in part mediate the key conceptual advances reflected in the social cognition tasks; interaction between child and sibling and the relationships between other family members are also implicated in the growth of social understanding.",
"title": ""
},
{
"docid": "295212e614cc361b1a5fdd320d39f68b",
"text": "Aiming to meet the explosive growth of mobile data traffic and reduce the network congestion, we study Time Dependent Adaptive Pricing (TDAP) with threshold policies to motivate users to shift their Internet access from peak hours to off-peak hours. With the proposed TDAP scheme, Internet Service Providers (ISPs) will be able to use less network capacity to provide users Internet access service with the same QoS. Simulation and analysis are carried out to investigate the performance of the proposed TDAP scheme based on the real Internet traffic pattern.",
"title": ""
},
{
"docid": "ae0d8d1dec27539502cd7e3030a3fe42",
"text": "Thee KL divergence is the most commonly used measure for comparing query and document language models in the language modeling framework to ad hoc retrieval. Since KL is rank equivalent to a specific weighted geometric mean, we examine alternative weighted means for language-model comparison, as well as alternative divergence measures. The study includes analysis of the inverse document frequency (IDF) effect of the language-model comparison methods. Empirical evaluation, performed with different types of queries (short and verbose) and query-model induction approaches, shows that there are methods that often outperform the KL divergence in some settings.",
"title": ""
},
{
"docid": "d18a2130df6de673362fe1c347985974",
"text": "Malignant catarrhal fever (MCF) is a fatal herpesvirus infection of domestic and wild ruminants, with a short and dramatic clinical course characterized primarily by high fever, severe depression, swollen lymph nodes, salivation, diarrhea, dermatitis, neurological disorders, and ocular lesions often leading to blindness. In the present study, fatal clinical cases of sheep associated malignant catarrhal fever (SA-MCF) were identified in cattle in the state of Karnataka. These cases were initially presented with symptoms of diarrhea, respiratory distress, conjunctivitis, and nasal discharges. Laboratory diagnosis confirmed the detection of ovine herpesvirus-2 (OvHV-2) genome in the peripheral blood samples of two ailing animals. The blood samples collected subsequently from sheep of the neighboring areas also showed presence of OvHV-2 genome indicating a nidus of infection in the region. The positive test results were further confirmed by nucleotide sequencing of the OIE approved portion of tegument gene as well as complete ORF8 region of the OvHV-2 genome. Phylogenetic analysis based on the sequence of the latter region indicated close genetic relationship with other OvHV-2 reported elsewhere in the world.",
"title": ""
},
{
"docid": "ec181b897706d101136dcbcef6e84de9",
"text": "Working with large swarms of robots has challenges in calibration, sensing, tracking, and control due to the associated scalability and time requirements. Kilobots solve this through their ease of maintenance and programming, and are widely used in several research laboratories worldwide where their low cost enables large-scale swarms studies. However, the small, inexpensive nature of the Kilobots limits their range of capabilities as they are only equipped with a single sensor. In some studies, this limitation can be a source of motivation and inspiration, while in others it is an impediment. As such, we designed, implemented, and tested a novel system to communicate personalized location-and-state-based information to each robot, and receive information on each robots’ state. In this way, the Kilobots can sense additional information from a virtual environment in real time; for example, a value on a gradient, a direction toward a reference point or a pheromone trail. The augmented reality for Kilobots ( ARK) system implements this in flexible base control software which allows users to define varying virtual environments within a single experiment using integrated overhead tracking and control. We showcase the different functionalities of the system through three demos involving hundreds of Kilobots. The ARK provides Kilobots with additional and unique capabilities through an open-source tool which can be implemented with inexpensive, off-the-shelf hardware.",
"title": ""
}
] |
scidocsrr
|
379414bdf0962277987614a37a28a57a
|
Enhancing Root Extractors Using Light Stemmers
|
[
{
"docid": "8c5f09f3c7c5a8bc1b7c26602fd8102a",
"text": "With increasing interest in sentiment analysis research and opinionated web content always on the rise, focus on analysis of text in various domains and different languages is a relevant and important task. This paper explores the problems of sentiment analysis and opinion strength measurement using a rule-based approach tailored to the Arabic language. The approach takes into account language-specific traits that are valuable to syntactically segment a text, and allow for closer analysis of opinion-bearing language queues. By using an adapted sentiment lexicon along with sets of opinion indicators, a rule-based methodology for opinion-phrase extraction is introduced, followed by a method to rate the parsed opinions and offer a measure of opinion strength for the text under analysis. The proposed method, even with a small set of rules, shows potential for a simple and scalable opinion-rating system, which is of particular interest for morphologically-rich languages such as Arabic.",
"title": ""
},
{
"docid": "ed4463ff17bbaf64d45012ae2aaae50b",
"text": "Functional Arabic Morphology is a formulation of the Arabic inflectional system seeking the working interface between morphology and syntax. ElixirFM is its high-level implementation that reuses and extends the Functional Morphology library for Haskell. Inflection and derivation are modeled in terms of paradigms, grammatical categories, lexemes and word classes. The computation of analysis or generation is conceptually distinguished from the general-purpose linguistic model. The lexicon of ElixirFM is designed with respect to abstraction, yet is no more complicated than printed dictionaries. It is derived from the open-source Buckwalter lexicon and is enhanced with information sourcing from the syntactic annotations of the Prague Arabic Dependency Treebank.",
"title": ""
},
{
"docid": "4282e931ced3f8776f6c4cffb5027f61",
"text": "OBJECTIVES\nTo provide an overview and tutorial of natural language processing (NLP) and modern NLP-system design.\n\n\nTARGET AUDIENCE\nThis tutorial targets the medical informatics generalist who has limited acquaintance with the principles behind NLP and/or limited knowledge of the current state of the art.\n\n\nSCOPE\nWe describe the historical evolution of NLP, and summarize common NLP sub-problems in this extensive field. We then provide a synopsis of selected highlights of medical NLP efforts. After providing a brief description of common machine-learning approaches that are being used for diverse NLP sub-problems, we discuss how modern NLP architectures are designed, with a summary of the Apache Foundation's Unstructured Information Management Architecture. We finally consider possible future directions for NLP, and reflect on the possible impact of IBM Watson on the medical field.",
"title": ""
}
] |
[
{
"docid": "94b6d4d28d708303530394270a3cfe75",
"text": "The search for the legendary, highly erogenous vaginal region, the Gräfenberg spot (G-spot), has produced important data, substantially improving understanding of the complex anatomy and physiology of sexual responses in women. Modern imaging techniques have enabled visualization of dynamic interactions of female genitals during self-sexual stimulation or coitus. Although no single structure consistent with a distinct G-spot has been identified, the vagina is not a passive organ but a highly dynamic structure with an active role in sexual arousal and intercourse. The anatomical relationships and dynamic interactions between the clitoris, urethra, and anterior vaginal wall have led to the concept of a clitourethrovaginal (CUV) complex, defining a variable, multifaceted morphofunctional area that, when properly stimulated during penetration, could induce orgasmic responses. Knowledge of the anatomy and physiology of the CUV complex might help to avoid damage to its neural, muscular, and vascular components during urological and gynaecological surgical procedures.",
"title": ""
},
{
"docid": "b214d983b0f262fa43bb3a885eed7506",
"text": "The principal reason for providing periodontal therapy is to achieve periodontal health and retain the dentition. Patients with a history of periodontitis represent a unique group of individuals who previously succumbed to a bacterial challenge. Therefore, it is important to address the management and survival rate of implants in these patients. Systematic reviews often are cited in this article, because they provide a high level of evidence and facilitate reviewing a vast amount of information in a succinct manner.",
"title": ""
},
{
"docid": "a5428992001b7b4ed8d983d27df64dcf",
"text": "Travel websites and online booking platforms represent today’s major sources for customers when gathering information before a trip. In particular, community-provided customer reviews and ratings of various tourism services represent a valuable source of information for trip planning. With respect to customer ratings, many modern travel and tourism platforms – in contrast to several other e-commerce domains – allow customers to rate objects along multiple dimensions and thus to provide more fine-granular post-trip feedback on the booked accommodation or travel package. In this paper, we first show how this multi-criteria rating information can help to obtain a better understanding of factors driving customer satisfaction for different segments. For this purpose, we performed a Penalty-Reward Contrast analysis on a data set from a major tourism platform, which reveals that customer segments significantly differ in the way the formation of overall satisfaction can be explained. Beyond the pure identification of segment-specific satisfaction factors, we furthermore show how this fine-granular rating information can be exploited to improve the accuracy of rating-based recommender systems. In particular, we propose to utilize userand object-specific factor relevance weights which can be learned through linear regression. An empirical evaluation on datasets from different domains finally shows that our method helps us to predict the customer preferences more accurately and thus to develop better online recommendation services.",
"title": ""
},
{
"docid": "67e599e65a963f54356b78ce436096c2",
"text": "This paper establishes the existence of observable footprints that reveal the causal dispositions of the object categories appearing in collections of images. We achieve this goal in two steps. First, we take a learning approach to observational causal discovery, and build a classifier that achieves state-of-the-art performance on finding the causal direction between pairs of random variables, given samples from their joint distribution. Second, we use our causal direction classifier to effectively distinguish between features of objects and features of their contexts in collections of static images. Our experiments demonstrate the existence of a relation between the direction of causality and the difference between objects and their contexts, and by the same token, the existence of observable signals that reveal the causal dispositions of objects.",
"title": ""
},
{
"docid": "e0400a04d85641f7a658d9c55295997d",
"text": "End-to-end encryption has been heralded by privacy and security researchers as an effective defence against dragnet surveillance, but there is no evidence of widespread end-user uptake. We argue that the non-adoption of end-toend encryption might not be entirely due to usability issues identified by Whitten and Tygar in their seminal paper “Why Johnny Can’t Encrypt”. Our investigation revealed a number of fundamental issues such as incomplete threat models, misaligned incentives, and a general absence of understanding of the email architecture. From our data and related research literature we found evidence of a number of potential explanations for the low uptake of end-to-end encryption. This suggests that merely increasing the availability and usability of encryption functionality in email clients will not automatically encourage increased deployment by email users. We shall have to focus, first, on building comprehensive end-user mental models related to email, and email security. We conclude by suggesting directions for future research.",
"title": ""
},
{
"docid": "a2b052b1ad2fcebe9ee45a0808101e79",
"text": "Mobile context-aware applications experience a constantly changing environment with increased dynamicity. In order to work efficiently, the location of mobile users needs to be predicted and properly exploited by mobile applications. We propose a spatial context model, which deals with the location prediction of mobile users. Such model is used for the classification of the users' trajectories through Machine Learning (ML) algorithms. Predicting spatial context is treated through supervised learning. We evaluate our model in terms of prediction accuracy w.r.t. specific prediction parameters. The proposed model is also compared with other ML algorithms for location prediction. Our findings are very promising for the efficient operation of mobile context-aware applications.",
"title": ""
},
{
"docid": "6bbee7b306d191b698e9ac035a7b87fa",
"text": "The task of multi-image cued story generation, such as visual storytelling dataset (VIST) challenge, is to compose multiple coherent sentences from a given sequence of images. The main difficulty is how to generate imagespecific sentences within the context of overall images. Here we propose a deep learning network model, GLAC Net, that generates visual stories by combining global-local (glocal) attention and context cascading mechanisms. The model incorporates two levels of attention, i.e., overall encoding level and image feature level, to construct image-dependent sentences. While standard attention configuration needs a large number of parameters, the GLAC Net implements them in a very simple way via hard connections from the outputs of encoders or image features onto the sentence generators. The coherency of the generated story is further improved by conveying (cascading) the information of the previous sentence to the next sentence serially. We evaluate the performance of the GLAC Net on the visual storytelling dataset (VIST) and achieve very competitive results compared to the state-of-the-art techniques. Our code and pre-trained models are available here1.",
"title": ""
},
{
"docid": "e234686126b22695d8295f79083506a7",
"text": "In computer vision the most difficult task is to recognize the handwritten digit. Since the last decade the handwritten digit recognition is gaining more and more fame because of its potential range of applications like bank cheque analysis, recognizing postal addresses on postal cards, etc. Handwritten digit recognition plays a very vital role in day to day life, like in a form of recording of information and style of communication even with the addition of new emerging techniques. The performance of Handwritten digit recognition system is highly depend upon two things: First it depends on feature extraction techniques which is used to increase the performance of the system and improve the recognition rate and the second is the neural network approach which takes lots of training data and automatically infer the rule for matching it with the correct pattern. In this paper we have focused on different methods of handwritten digit recognition that uses both feature extraction techniques and neural network approaches and presented a comparative analysis while discussing pros and cons of each method.",
"title": ""
},
{
"docid": "2cd3130e123a440cd91edafc4a6848fa",
"text": "The aim of this research is to provide a design of an integrated intelligent system for management and controlling traffic lights based on distributed long range Photoelectric Sensors in distances prior to and after the traffic lights. The appropriate distances for sensors are chosen by the traffic management department so that they can monitor cars that are moving towards a specific traffic and then transfer this data to the intelligent software that are installed in the traffic control cabinet, which can control the traffic lights according to the measures that the sensors have read, and applying a proposed algorithm based on the total calculated relative weight of each road. Accordingly, the system will open the traffic that are overcrowded and give it a longer time larger than the given time for other traffics that their measures proved that their traffic density is less. This system can be programmed with very important criteria that enable it to take decisions for intelligent automatic control of traffic lights. Also the proposed system is designed to accept information about any emergency case through an active RFID based technology. Emergency cases such as the passing of presidents, ministries and ambulances vehicles that require immediate opening for the traffic automatically. The system has the ability to open a complete path for such emergency cases from the next traffic until reaching the target destination. (end of the path). As a result the system will guarantee the fluency of traffic for such emergency cases or for the main vital streets and paths that require the fluent traffic all the time, without affecting the fluency of traffic generally at normal streets according to the time of the day and the traffic density. Also the proposed system can be tuned to run automatically without any human intervention or can be tuned to allow human intervention at certain circumstances.",
"title": ""
},
{
"docid": "88df269bd6ddf47592dd23632162d386",
"text": "Online multiplayer games are often rich sources of complex social interactions. In this paper, we focus on the unique player experiences (PX) created by Multiplayer Online Battle Arena (MOBA) games. We examine key phases of players' engagement with the genre and investigate why players start, stay, and stop playing MOBAs. Our study identifies how team interactions during play with friends or strangers affect PX during these phases. Results indicate the ability to play with friends is salient when beginning play and during periods of engagement. Teams that include friends support a wider range of play possibilities - socially and competitively -- than teams of strangers. However, social factors appear less relevant to those choosing to stop playing, who do so for a variety of reasons. This study contributes to the field by identifying a strategy to improve the wellbeing of players.",
"title": ""
},
{
"docid": "25ce68e2b2d9e9d8ff741e4e9ad1e378",
"text": "Advances in electronic banking technology have created novel ways of handling daily banking affairs, especially via the online banking channel. The acceptance of online banking services has been rapid in many parts of the world, and in the leading ebanking countries the number of e-banking contracts has exceeded 50 percent. Investigates online banking acceptance in the light of the traditional technology acceptance model (TAM), which is leveraged into the online environment. On the basis of a focus group interview with banking professionals, TAM literature and e-banking studies, we develop a model indicating onlinebanking acceptance among private banking customers in Finland. The model was tested with a survey sample (n 1⁄4 268). The findings of the study indicate that perceived usefulness and information on online banking on the Web site were the main factors influencing online-banking acceptance.",
"title": ""
},
{
"docid": "2438479795a9673c36138212b61c6d88",
"text": "Motivated by the emergence of auction-based marketplaces for display ads such as the Right Media Exchange, we study the design of a bidding agent that implements a display advertising campaign by bidding in such a marketplace. The bidding agent must acquire a given number of impressions with a given target spend, when the highest external bid in the marketplace is drawn from an unknown distribution P. The quantity and spend constraints arise from the fact that display ads are usually sold on a CPM basis. We consider both the full information setting, where the winning price in each auction is announced publicly, and the partially observable setting where only the winner obtains information about the distribution; these differ in the penalty incurred by the agent while attempting to learn the distribution. We provide algorithms for both settings, and prove performance guarantees using bounds on uniform closeness from statistics, and techniques from online learning. We experimentally evaluate these algorithms: both algorithms perform very well with respect to both target quantity and spend; further, our algorithm for the partially observable case performs nearly as well as that for the fully observable setting despite the higher penalty incurred during learning.",
"title": ""
},
{
"docid": "95ec0d130862f7a514fd5d47a95f6585",
"text": "With the rising cost of energy and growing environmental concerns, the demand for sustainable building facilities with minimal environmental impact is increasing. The most effective decisions regarding sustainability in a building facility are made in the early design and preconstruction stages. In this context, Building Information Modeling (BIM) can aid in performing complex building performance analyses to ensure an optimized sustainable building design. In this exploratory research, three building performance analysis software namely EcotectTM, Green Building StudioTM (GBS) and Virtual EnvironmentTM are evaluated to gage their suitability for BIM-based sustainability analysis. First presented in this paper are the main concepts of sustainability and BIM. Then an evaluation of the three abovementioned software is performed with their pros and cons. An analytical weight-based scoring system is used for this purpose. At the end, a conceptual framework is presented to illustrate how construction companies can use BIM for sustainability analysis and evaluate LEED (Leadership in Energy and Environmental Design) rating of a building facility.",
"title": ""
},
{
"docid": "17dc2b08b63a10c70aa1fcfcf72071df",
"text": "In this paper, we introduce Adversarial-and-attention Network (A3Net) for Machine Reading Comprehension. This model extends existing approaches from two perspectives. First, adversarial training is applied to several target variables within the model, rather than only to the inputs or embeddings. We control the norm of adversarial perturbations according to the norm of original target variables, so that we can jointly add perturbations to several target variables during training. As an effective regularization method, adversarial training improves robustness and generalization of our model. Second, we propose a multi-layer attention network utilizing three kinds of high-efficiency attention mechanisms. Multi-layer attention conducts interaction between question and passage within each layer, which contributes to reasonable representation and understanding of the model. Combining these two contributions, we enhance the diversity of dataset and the information extracting ability of the model at the same time. Meanwhile, we construct A3Net for the WebQA dataset. Results show that our model outperforms the state-ofthe-art models (improving Fuzzy Score from 73.50% to 77.0%).",
"title": ""
},
{
"docid": "b3998d818b12e9dc376afea3094ae23f",
"text": "1. Andrew Borthwick and Ralph Grishman. 1999. A maximum entropy approach to named entity recognition. Ph. D. Thesis, Dept. of Computer Science, New York University. 2. Alan Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 6645–6649. 3. Xuezhe Ma and Eduard Hovy. 2016. End-to-end se-quence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). The Ohio State University",
"title": ""
},
{
"docid": "a5c9de4127df50d495c7372b363691cf",
"text": "This book is an accompaniment to the computer software package mathStatica (which runs as an add-on to Mathematica). The book comes with two CD-ROMS: mathStatica, and a 30-day trial version of Mathematica 4.1. The mathStatica CD-ROM includes an applications pack for doing mathematical statistics, custom Mathematica palettes and an electronic version of the book that is identical to the printed text, but can be used interactively to generate animations of some of the book's figures (e.g. as a parameter is varied). (I found this last feature particularly valuable.) MathStatica has statistical operators for determining expectations (and hence characteristic functions, for example) and probabilities, for finding the distributions of transformations of random variables and generally for dealing with the kinds of problems and questions that arise in mathematical statistics. Applications include estimation, curve-fitting, asymptotics, decision theory and moment conversion formulae (e.g. central to cumulant). To give an idea of the coverage of the book: after an introductory chapter, there are three chapters on random variables, then chapters on systems of distributions (e.g. Pearson), multivariate distributions, moments, asymptotic theory, decision theory and then three chapters on estimation. There is an appendix, which deals with technical Mathematica details. What distinguishes mathStatica from statistical packages such as S-PLUS, R, SPSS and SAS is its ability to deal with the algebraic/symbolic problems that are the main concern of mathematical statistics. This is, of course, because it is based on Mathematica, and this is also the reason that it has a note–book interface (which enables one to incorporate text, equations and pictures into a single line), and why arbitrary-precision calculations can be performed. According to the authors, 'this book can be used as a course text in mathematical statistics or as an accompaniment to a more traditional text'. Assumed knowledge includes preliminary courses in statistics, probability and calculus. The emphasis is on problem solving. The material is supposedly pitched at the same level as Hogg and Craig (1995). However some topics are treated in much more depth than in Hogg and Craig (characteristic functions for instance, which rate less than one page in Hogg and Craig). Also, the coverage is far broader than that of Hogg and Craig; additional topics include for instance stable distributions, cumulants, Pearson families, Gram-Charlier expansions and copulae. Hogg and Craig can be used as a textbook for a third-year course in mathematical statistics in some Australian universities , whereas there is …",
"title": ""
},
{
"docid": "35a063ab339f32326547cc54bee334be",
"text": "We present a model for attacking various cryptographic schemes by taking advantage of random hardware faults. The model consists of a black-box containing some cryptographic secret. The box interacts with the outside world by following a cryptographic protocol. The model supposes that from time to time the box is affected by a random hardware fault causing it to output incorrect values. For example, the hardware fault flips an internal register bit at some point during the computation. We show that for many digital signature and identification schemes these incorrect outputs completely expose the secrets stored in the box. We present the following results: (1) The secret signing key used in an implementation of RSA based on the Chinese Remainder Theorem (CRT) is completely exposed from a single erroneous RSA signature, (2) for non-CRT implementations of RSA the secret key is exposed given a large number (e.g. 1000) of erroneous signatures, (3) the secret key used in Fiat—Shamir identification is exposed after a small number (e.g. 10) of faulty executions of the protocol, and (4) the secret key used in Schnorr's identification protocol is exposed after a much larger number (e.g. 10,000) of faulty executions. Our estimates for the number of necessary faults are based on standard security parameters such as a 1024-bit modulus, and a 2 -40 identification error probability. Our results demonstrate the importance of preventing errors in cryptographic computations. We conclude the paper with various methods for preventing these attacks.",
"title": ""
},
{
"docid": "b12b500f7c6ac3166eb4fbdd789196ea",
"text": "Theory of Mind (ToM) is the ability to attribute thoughts, intentions and beliefs to others. This involves component processes, including cognitive perspective taking (cognitive ToM) and understanding emotions (affective ToM). This study assessed the distinction and overlap of neural processes involved in these respective components, and also investigated their development between adolescence and adulthood. While data suggest that ToM develops between adolescence and adulthood, these populations have not been compared on cognitive and affective ToM domains. Using fMRI with 15 adolescent (aged 11-16 years) and 15 adult (aged 24-40 years) males, we assessed neural responses during cartoon vignettes requiring cognitive ToM, affective ToM or physical causality comprehension (control). An additional aim was to explore relationships between fMRI data and self-reported empathy. Both cognitive and affective ToM conditions were associated with neural responses in the classic ToM network across both groups, although only affective ToM recruited medial/ventromedial PFC (mPFC/vmPFC). Adolescents additionally activated vmPFC more than did adults during affective ToM. The specificity of the mPFC/vmPFC response during affective ToM supports evidence from lesion studies suggesting that vmPFC may integrate affective information during ToM. Furthermore, the differential neural response in vmPFC between adult and adolescent groups indicates developmental changes in affective ToM processing.",
"title": ""
},
{
"docid": "84006a4b4c402b4b23dad09eb00829f8",
"text": "Deployed software systems are typically composed of many pieces, not all of which may have been created by the main development team. Often, the provenance of included components -- such as external libraries or cloned source code -- is not clearly stated, and this uncertainty can introduce technical and ethical concerns that make it difficult for system owners and other stakeholders to manage their software assets. In this work, we motivate the need for the recovery of the provenance of software entities by a broad set of techniques that could include signature matching, source code fact extraction, software clone detection, call flow graph matching, string matching, historical analyses, and other techniques. We liken our provenance goals to that of Bertillonage, a simple and approximate forensic analysis technique based on bio-metrics that was developed in 19th century France before the advent of fingerprints. As an example, we have developed a fast, simple, and approximate technique called anchored signature matching for identifying library version information within a given Java application. This technique involves a type of structured signature matching performed against a database of candidates drawn from the Maven2 repository, a 150GB collection of open source Java libraries. An exploratory case study using a proprietary e-commerce Java application illustrates that the approach is both feasible and effective.",
"title": ""
}
]
subset: scidocsrr