query_id (string, 32 chars) | query (string, 6–5.38k chars) | positive_passages (list, 1–22 items) | negative_passages (list, 9–100 items) | subset (string, 7 classes)
---|---|---|---|---|
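Each row follows the schema above: a query string plus lists of positive and negative passage dicts (`docid`, `text`, `title`) and a subset tag. Below is a minimal sketch of loading and inspecting such a row, assuming the dump is published as a Hugging Face `datasets` repository; the repository id below is a placeholder, not the actual path.

```python
# Sketch only: substitute the real dataset repository id and split.
from datasets import load_dataset

ds = load_dataset("org/retrieval-mix", split="train")  # hypothetical path

row = ds[0]
print(row["query_id"], "|", row["subset"])
print(row["query"])
print(len(row["positive_passages"]), "positive passages,",
      len(row["negative_passages"]), "negative passages")
# Each passage is a dict with "docid", "text", and a (possibly empty) "title".
print(row["positive_passages"][0]["text"][:200])
```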
eee245c69684ad95e9ad3d363fc10173
|
Controlling Information Aggregation for Complex Question Answering
|
[
{
"docid": "ce8f565a80deadb7b35adf93d2afbd4c",
"text": "Graph ranking plays an important role in many applications, such as page ranking on web graphs and entity ranking on social networks. In applications, besides graph structure, rich information on nodes and edges and explicit or implicit human supervision are often available. In contrast, conventional algorithms (e.g., PageRank and HITS) compute ranking scores by only resorting to graph structure information. A natural question arises here, that is, how to effectively and efficiently leverage all the information to more accurately calculate graph ranking scores than the conventional algorithms, assuming that the graph is also very large. Previous work only partially tackled the problem, and the proposed solutions are also not satisfying. This paper addresses the problem and proposes a general framework as well as an efficient algorithm for graph ranking. Specifically, we define a semi-supervised learning framework for ranking of nodes on a very large graph and derive within our proposed framework an efficient algorithm called Semi-Supervised PageRank. In the algorithm, the objective function is defined based upon a Markov random walk on the graph. The transition probability and the reset probability of the Markov model are defined as parametric models based on features on nodes and edges. By minimizing the objective function, subject to a number of constraints derived from supervision information, we simultaneously learn the optimal parameters of the model and the optimal ranking scores of the nodes. Finally, we show that it is possible to make the algorithm efficient to handle a billion-node graph by taking advantage of the sparsity of the graph and implement it in the MapReduce logic. Experiments on real data from a commercial search engine show that the proposed algorithm can outperform previous algorithms on several tasks.",
"title": ""
},
{
"docid": "fa6f272026605bddf1b18c8f8234dba6",
"text": "tion can machines think? by replacing it with another, namely can a machine pass the imitation game (the Turing test). In the years since, this test has been criticized as being a poor replacement for the original enquiry (for example, Hayes and Ford [1995]), which raises the question: what would a better replacement be? In this article, we argue that standardized tests are an effective and practical assessment of many aspects of machine intelligence, and should be part of any comprehensive measure of AI progress. While a crisp definition of machine intelligence remains elusive, we can enumerate some general properties we might expect of an intelligent machine. The list is potentially long (for example, Legg and Hutter [2007]), but should at least include the ability to (1) answer a wide variety of questions, (2) answer complex questions, (3) demonstrate commonsense and world knowledge, and (4) acquire new knowledge scalably. In addition, a suitable test should be clearly measurable, graduated (have a variety of levels of difficulty), not gameable, ambitious but realistic, and motivating. There are many other requirements we might add (for example, capabilities in robotics, vision, dialog), and thus any comprehensive measure of AI is likely to require a battery of different tests. However, standardized tests meet a surprising number of requirements, including the four listed, and thus should be a key component of a future battery of tests. As we will show, the tests require answering a wide variety of questions, including those requiring commonsense and world knowledge. In addition, they meet all the practical requirements, a huge advantage for any component of a future test of AI. Articles",
"title": ""
},
{
"docid": "43b2912b6ad9824e3263ff9951daf0c2",
"text": "Monolingual alignment models have been shown to boost the performance of question answering systems by ”bridging the lexical chasm” between questions and answers. The main limitation of these approaches is that they require semistructured training data in the form of question-answer pairs, which is difficult to obtain in specialized domains or lowresource languages. We propose two inexpensive methods for training alignment models solely using free text, by generating artificial question-answer pairs from discourse structures. Our approach is driven by two representations of discourse: a shallow sequential representation, and a deep one based on Rhetorical Structure Theory. We evaluate the proposed model on two corpora from different genres and domains: one from Yahoo! Answers and one from the biology domain, and two types of non-factoid questions: manner and reason. We show that these alignment models trained directly from discourse structures imposed on free text improve performance considerably over an information retrieval baseline and a neural network language model trained on the same data.",
"title": ""
},
{
"docid": "8f3d86a21b8a19c7d3add744c2e5e202",
"text": "Question answering (QA) systems are easily distracted by irrelevant or redundant words in questions, especially when faced with long or multi-sentence questions in difficult domains. This paper introduces and studies the notion of essential question terms with the goal of improving such QA solvers. We illustrate the importance of essential question terms by showing that humans’ ability to answer questions drops significantly when essential terms are eliminated from questions. We then develop a classifier that reliably (90% mean average precision) identifies and ranks essential terms in questions. Finally, we use the classifier to demonstrate that the notion of question term essentiality allows state-of-the-art QA solvers for elementary-level science questions to make better and more informed decisions, improving performance by up to 5%. We also introduce a new dataset of over 2,200 crowd-sourced essential terms annotated science questions.",
"title": ""
}
] |
[
{
"docid": "ff272c41a811b6e0031d6e90a895f919",
"text": "Three-dimensional reconstruction of dynamic scenes is an important prerequisite for applications like mobile robotics or autonomous driving. While much progress has been made in recent years, imaging conditions in natural outdoor environments are still very challenging for current reconstruction and recognition methods. In this paper, we propose a novel unified approach which reasons jointly about 3D scene flow as well as the pose, shape and motion of vehicles in the scene. Towards this goal, we incorporate a deformable CAD model into a slanted-plane conditional random field for scene flow estimation and enforce shape consistency between the rendered 3D models and the parameters of all superpixels in the image. The association of superpixels to objects is established by an index variable which implicitly enables model selection. We evaluate our approach on the challenging KITTI scene flow dataset in terms of object and scene flow estimation. Our results provide a prove of concept and demonstrate the usefulness of our method.",
"title": ""
},
{
"docid": "98fb03e0e590551fa9e7c82b827c78ed",
"text": "This article describes on-going developments of the VENUS European Project (Virtual ExploratioN of Underwater Sites, http://www.venus-project.eu) concerning the first mission to sea in Pianosa Island, Italy in October 2006. The VENUS project aims at providing scientific methodologies and technological tools for the virtual exploration of deep underwater archaeological sites. The VENUS project will improve the accessibility of underwater sites by generating thorough and exhaustive 3D records for virtual exploration. In this paper we focus on the underwater photogrammetric approach used to survey the archaeological site of Pianosa. After a brief presentation of the archaeological context we shall see the calibration process in such a context. The next part of this paper is dedicated to the survey: it is divided into two parts: a DTM of the site (combining acoustic bathymetry and photogrammetry) and a specific artefact plotting dedicated to the amphorae present on the site. * Corresponding author. This is useful to know for communication with the appropriate person in cases with more than one author. ** http://cordis.europa.eu/ist/digicult/venus.htm or the project web site : http://www.venus-project.eu 1. VENUS, VIRTUAL EXPLORATION OF UNDERWATER SITES The VENUS project is funded by European Commission, Information Society Technologies (IST) programme of the 6th FP for RTD . It aims at providing scientific methodologies and technological tools for the virtual exploration of deep underwater archaeological sites. (Chapman et alii, 2006). Underwater archaeological sites, for example shipwrecks, offer extraordinary opportunities for archaeologists due to factors such as darkness, low temperatures and a low oxygen rate which are favourable to preservation. On the other hand, these sites can not be experienced first hand and today are continuously jeopardised by activities such as deep trawling that destroy their surface layer. The VENUS project will improve the accessibility of underwater sites by generating thorough and exhaustive 3D records for virtual exploration. The project team plans to survey shipwrecks at various depths and to explore advanced methods and techniques of data acquisition through autonomous or remotely operated unmanned vehicles with innovative sonar and photogrammetry equipment. Research will also cover aspects such as data processing and storage, plotting of archaeological artefacts and information system management. This work will result in a series of best practices and procedures for collecting and storing data. Further, VENUS will develop virtual reality and augmented reality tools for the visualisation of an immersive interaction with a digital model of an underwater site. The model will be made accessible online, both as an example of digital preservation and for demonstrating new facilities of exploration in a safe, cost-effective and pedagogical environment. The virtual underwater site will provide archaeologists with an improved insight into the data and the general public with simulated dives to the site. The VENUS consortium, composed of eleven partners, is pooling expertise in various disciplines: archaeology and underwater exploration, knowledge representation and photogrammetry, virtual reality and digital data preservation. This paper focuses on the first experimentation in Pianosa Island, Tuscany, Italy. The document is structured as follows. 
A short description of the archaeological context, then the next section explains the survey method: calibration, collecting photographs using ROV and divers, photographs orientation and a particular way to measure amphorae with photogrammetry using archaeological knowledge. A section shows 3D results in VRML and finally we present the future planned work. 2. THE UNDERWATER ARCHAEOLOGICAL SITE OF PIANOSA ISLAND The underwater archaeological site of Pianosa, discovered in 1989 by volunteer divers (Giuseppe Adriani, Paolo Vaccari), is located at a depth of 35 m, close to the Scoglio della Scola.",
"title": ""
},
{
"docid": "8665711daa00dac270ed0830e43acdde",
"text": "Deep learning-based approaches have been widely used for training controllers for autonomous vehicles due to their powerful ability to approximate nonlinear functions or policies. However, the training process usually requires large labeled data sets and takes a lot of time. In this paper, we analyze the influences of features on the performance of controllers trained using the convolutional neural networks (CNNs), which gives a guideline of feature selection to reduce computation cost. We collect a large set of data using The Open Racing Car Simulator (TORCS) and classify the image features into three categories (sky-related, roadside-related, and road-related features). We then design two experimental frameworks to investigate the importance of each single feature for training a CNN controller. The first framework uses the training data with all three features included to train a controller, which is then tested with data that has one feature removed to evaluate the feature's effects. The second framework is trained with the data that has one feature excluded, while all three features are included in the test data. Different driving scenarios are selected to test and analyze the trained controllers using the two experimental frameworks. The experiment results show that (1) the road-related features are indispensable for training the controller, (2) the roadside-related features are useful to improve the generalizability of the controller to scenarios with complicated roadside information, and (3) the sky-related features have limited contribution to train an end-to-end autonomous vehicle controller.",
"title": ""
},
{
"docid": "c70d8ae9aeb8a36d1f68ba0067c74696",
"text": "Representing entities and relations in an embedding space is a well-studied approach for machine learning on relational data. Existing approaches, however, primarily focus on simple link structure between a finite set of entities, ignoring the variety of data types that are often used in knowledge bases, such as text, images, and numerical values. In this paper, we propose multimodal knowledge base embeddings (MKBE) that use different neural encoders for this variety of observed data, and combine them with existing relational models to learn embeddings of the entities and multimodal data. Further, using these learned embedings and different neural decoders, we introduce a novel multimodal imputation model to generate missing multimodal values, like text and images, from information in the knowledge base. We enrich existing relational datasets to create two novel benchmarks that contain additional information such as textual descriptions and images of the original entities. We demonstrate that our models utilize this additional information effectively to provide more accurate link prediction, achieving state-of-the-art results with a considerable gap of 5-7% over existing methods. Further, we evaluate the quality of our generated multimodal values via a user study. We have release the datasets and the opensource implementation of our models at https: //github.com/pouyapez/mkbe.",
"title": ""
},
{
"docid": "732433b4cc1d9a3fcf10339e53eb3ab8",
"text": "Humans and mammals possess their own feet. Using the mobility of their feet, they are able to walk in various environments such as plain land, desert, swamp, and so on. Previously developed biped robots and four-legged robots did not employ such adaptable foot. In this work, a biomimetic foot mechanism is investigated through analysis of the foot structure of the human-being. This foot mechanism consists of a toe, an ankle, a heel, and springs replacing the foot muscles and tendons. Using five toes and springs, this foot can adapt to various environments. A mathematical modeling for this foot mechanism was performed and its characteristics were observed through numerical simulation.",
"title": ""
},
{
"docid": "29e030bb4d8547d7615b8e3d17ec843d",
"text": "This Paper examines the enforcement of occupational safety and health (OSH) regulations; it validates the state of enforcement of OSH regulations by extracting the salient issues that influence enforcement of OSH regulations in Nigeria. It’s the duty of the Federal Ministry of Labour and Productivity (Inspectorate Division) to enforce the Factories Act of 1990, while the Labour, Safety, Health and Welfare Bill of 2012 empowers the National Council for Occupational Safety and Health of Nigeria to administer the proceeding regulations on its behalf. Sadly enough, the impact of the enforcement authority is ineffective, as the key stakeholders pay less attention to OSH regulations; thus, rendering the OSH scheme dysfunctional and unenforceable, at the same time impeding OSH development. For optimum OSH in Nigeria, maximum enforcement and compliance with the regulations must be in place. This paper, which is based on conceptual analysis, reviews literature gathered through desk literature search. It identified issues to OSH enforcement such as: political influence, bribery and corruption, insecurity, lack of governmental commitment, inadequate legislation inter alia. While recommending ways to improve the enforcement of OSH regulations, it states that self-regulatory style of enforcing OSH regulations should be adopted by organisations. It also recommends that more OSH inspectors be recruited; local government authorities empowered to facilitate the enforcement of OSH regulations. Moreover, the study encourages organisations to champion OSH enforcement, as it is beneficial to them; it concludes that the burden of OSH improvement in Nigeria is on the government, educational authorities, organisations and trade unions.",
"title": ""
},
{
"docid": "05980211337c2a1cd8204acb96012439",
"text": "Automatic organ segmentation is an important prerequisite for many computer-aided diagnosis systems. The high anatomical variability of organs in the abdomen, such as the pancreas, prevents many segmentation methods from achieving high accuracies when compared to state-of-the-art segmentation of organs like the liver, heart or kidneys. Recently, the availability of large annotated training sets and the accessibility of affordable parallel computing resources via GPUs have made it feasible for “deep learning” methods such as convolutional networks (ConvNets) to succeed in image classification tasks. These methods have the advantage that used classification features are trained directly from the imaging data. We present a fully-automated bottom-up method for pancreas segmentation in computed tomography (CT) images of the abdomen. The method is based on hierarchical coarse-to-fine classification of local image regions (superpixels). Superpixels are extracted from the abdominal region using Simple Linear Iterative Clustering (SLIC). An initial probability response map is generated, using patch-level confidences and a two-level cascade of random forest classifiers, from which superpixel regions with probabilities larger 0.5 are retained. These retained superpixels serve as a highly sensitive initial input of the pancreas and its surroundings to a ConvNet that samples a bounding box around each superpixel at different scales (and random non-rigid deformations at training time) in order to assign a more distinct probability of each superpixel region being pancreas or not. We evaluate our method on CT images of 82 patients (60 for training, 2 for validation, and 20 for testing). Using ConvNets we achieve average Dice scores of 68% ± 10% (range, 43-80%) in testing. This shows promise for accurate pancreas segmentation, using a deep learning approach and compares favorably to state-of-the-art methods.",
"title": ""
},
{
"docid": "46818c0cd0d3b072d64113cf4b7b7e91",
"text": "We study the problem of distributed multitask learning with shared representation, where each machine aims to learn a separate, but related, task in an unknown shared low-dimensional subspaces, i.e. when the predictor matrix has low rank. We consider a setting where each task is handled by a different machine, with samples for the task available locally on the machine, and study communication-efficient methods for exploiting the shared structure.",
"title": ""
},
{
"docid": "45db5ab7ceac7156da713d19d5576598",
"text": "The development of user interface systems has languished with the stability of desktop computing. Future systems, however, that are off-the-desktop, nomadic or physical in nature will involve new devices and new software systems for creating interactive applications. Simple usability testing is not adequate for evaluating complex systems. The problems with evaluating systems work are explored and a set of criteria for evaluating new UI systems work is presented.",
"title": ""
},
{
"docid": "b676952c75749bb69efbd250f4a1ca61",
"text": "A discrete-event simulation model that imitates most on-track events, including car failures, passing manoeuvres and pit stops during a Formula One race, is presented. The model is intended for use by a specific team. It will enable decision-makers to plan and evaluate their race strategy, consequently providing them with a possible competitive advantage. The simulation modelling approach presented in this paper captures the mechanical complexities and physical interactions of a race car with its environment through a time-based approach. Model verification and validation are demonstrated using three races from the 2005 season. The application of the model is illustrated by evaluating the race strategies employed by a specific team during these three races. Journal of the Operational Research Society (2009) 60, 952–961. doi:10.1057/palgrave.jors.2602626 Published online 9 July 2008",
"title": ""
},
{
"docid": "78e8f84224549b75584c59591a8febef",
"text": "Our goal is to design architectures that retain the groundbreaking performance of Convolutional Neural Networks (CNNs) for landmark localization and at the same time are lightweight, compact and suitable for applications with limited computational resources. To this end, we make the following contributions: (a) we are the first to study the effect of neural network binarization on localization tasks, namely human pose estimation and face alignment. We exhaustively evaluate various design choices, identify performance bottlenecks, and more importantly propose multiple orthogonal ways to boost performance. (b) Based on our analysis, we propose a novel hierarchical, parallel and multi-scale residual architecture that yields large performance improvement over the standard bottleneck block while having the same number of parameters, thus bridging the gap between the original network and its binarized counterpart. (c) We perform a large number of ablation studies that shed light on the properties and the performance of the proposed block. (d) We present results for experiments on the most challenging datasets for human pose estimation and face alignment, reporting in many cases state-of-the-art performance. (e) We further provide additional results for the problem of facial part segmentation. Code can be downloaded from https://www.adrianbulat.com/binary-cnn-landmarks.",
"title": ""
},
{
"docid": "83651ca357b0f978400de4184be96443",
"text": "The most common temporomandibular joint (TMJ) pathologic disease is anterior-medial displacement of the articular disk, which can lead to TMJ-related symptoms.The indication for disk repositioning surgery is irreversible TMJ damage associated with temporomandibular pain. We describe a surgical technique using a preauricular approach with a high condylectomy to reshape the condylar head. The disk is anchored with a bioabsorbable microanchor (Mitek Microfix QuickAnchor Plus 1.3) to the lateral aspect of the condylar head. The anchor is linked with a 3.0 Ethibond absorbable suture to fix the posterolateral side of the disk above the condyle.The aims of this surgery were to alleviate temporomandibular pain, headaches, and neck pain and to restore good jaw mobility. In the long term, we achieved these objectives through restoration of the physiological position and function of the disk and the lower articular compartment.In our opinion, the bioabsorbable anchor is the best choice for this type of surgery because it ensures the stability of the restored disk position and leaves no artifacts in the long term that might impede follow-up with magnetic resonance imaging.",
"title": ""
},
{
"docid": "ef065f2471d9b940e9167ff8daf1c735",
"text": "Fano’s inequality lower bounds the probability of transmission error through a communication channel. Applied to classification problems, it provides a lower bound on the Bayes error rate and motivates the widely used Infomax principle. In modern machine learning, we are often interested in more than just the error rate. In medical diagnosis, different errors incur different cost; hence, the overall risk is cost-sensitive. Two other popular criteria are balanced error rate (BER) and F-score. In this work, we focus on the two-class problem and use a general definition of conditional entropy (including Shannon’s as a special case) to derive upper/lower bounds on the optimal F-score, BER and cost-sensitive risk, extending Fano’s result. As a consequence, we show that Infomax is not suitable for optimizing F-score or cost-sensitive risk, in that it can potentially lead to low F-score and high risk. For cost-sensitive risk, we propose a new conditional entropy formulation which avoids this inconsistency. In addition, we consider the common practice of using a threshold on the posterior probability to tune performance of a classifier. As is widely known, a threshold of 0.5, where the posteriors cross, minimizes error rate—we derive similar optimal thresholds for F-score and BER.",
"title": ""
},
{
"docid": "807da3f22f553b58c45bd4db28506d27",
"text": "a School of Business, Yeungnam University, South Korea, 241-1, Dae-dong, Gyeongsan-si, Gyeongsangbuk-do 712-749, South Korea b Department of Information and Process Management, Bentley University, Adamian Academic Center 242, Waltham, MA 02452, USA c Department of Information Technology, Seidenberg School of Computer Science and Information Systems, Pace University, 163 William Street, New York, NY 10038, USA d Division of International Studies, Hanyang University, Seongdonggu, Seoul, 133-791, South Korea e School of Business, University of Bridgeport, Bridgeport, CT 06604, USA",
"title": ""
},
{
"docid": "5d59b6a3e24d19ed13293d6b7e85af67",
"text": "In this tool demo, we will illustrate our tool---Titan---that supports a new architecture model: design rule spaces (DRSpaces). We will show how Titan can capture both architecture and evolutionary structure and help to bridge the gap between architecture and defect prediction. We will demo how to use our toolset to capture hundreds of buggy files into just a few architecturally related groups, and to reveal architecture issues that contribute to the error-proneness and change-proneness of these groups. Our tool has been used to analyze dozens of large-scale industrial projects, and has demonstrated its ability to provide valuable direction on which parts of the architecture are problematic, and on why, when, and how to refactor. The video demo of Titan can be found at https://art.cs.drexel.edu/~lx52/titan.mp4",
"title": ""
},
{
"docid": "e6f37f7b73c6d511f38b4adb4b7938e0",
"text": "Context: Every software development project uses folders to organize software artifacts. Goal: We would like to understand how folders are used and what ramifications different uses may have. Method: In this paper we study the frequency of folders used by 140k Github projects and use regression analysis to model how folder use is related to project popularity, i.e., the extent of forking. Results: We find that the standard folders, such as document, testing, and examples, are not only among the most frequently used, but their presence in a project is associated with increased chances that a project's code will be forked (i.e., used by others) and an increased number of forks. Conclusions: This preliminary study of folder use suggests opportunities to quantify (and improve) file organization practices based on folder use patterns of large collections of repositories.",
"title": ""
},
{
"docid": "0a2be958c7323d3421304d1613421251",
"text": "Stock price forecasting has aroused great concern in research of economy, machine learning and other fields. Time series analysis methods are usually utilized to deal with this task. In this paper, we propose to combine news mining and time series analysis to forecast inter-day stock prices. News reports are automatically analyzed with text mining techniques, and then the mining results are used to improve the accuracy of time series analysis algorithms. The experimental result on a half year Chinese stock market data indicates that the proposed algorithm can help to improve the performance of normal time series analysis in stock price forecasting significantly. Moreover, the proposed algorithm also performs well in stock price trend forecasting.",
"title": ""
},
{
"docid": "7f3bccab6d6043d3dedc464b195df084",
"text": "This paper introduces a new probabilistic graphical model called gated Bayesian network (GBN). This model evolved from the need to represent processes that include several distinct phases. In essence, a GBN is a model that combines several Bayesian networks (BNs) in such a manner that they may be active or inactive during queries to the model. We use objects called gates to combine BNs, and to activate and deactivate them when predefined logical statements are satisfied. In this paper we also present an algorithm for semi-automatic learning of GBNs. We use the algorithm to learn GBNs that output buy and sell decisions for use in algorithmic trading systems. We show how the learnt GBNs can substantially lower risk towards invested capital, while they at the same time generate similar or better rewards, compared to the benchmark investment strategy buy-and-hold. We also explore some differences and similarities between GBNs and other related formalisms.",
"title": ""
},
{
"docid": "4d99090b874776b89092f63f21c8ea93",
"text": "Object viewpoint classification aims at predicting an approximate 3D pose of objects in a scene and is receiving increasing attention. State-of-the-art approaches to viewpoint classification use generative models to capture relations between object parts. In this work we propose to use a mixture of holistic templates (e.g. HOG) and discriminative learning for joint viewpoint classification and category detection. Inspired by the work of Felzenszwalb et al 2009, we discriminatively train multiple components simultaneously for each object category. A large number of components are learned in the mixture and they are associated with canonical viewpoints of the object through different levels of supervision, being fully supervised, semi-supervised, or unsupervised. We show that discriminative learning is capable of producing mixture components that directly provide robust viewpoint classification, significantly outperforming the state of the art: we improve the viewpoint accuracy on the Savarese et al 3D Object database from 57% to 74%, and that on the VOC 2006 car database from 73% to 86%. In addition, the mixture-of-templates approach to object viewpoint/pose has a natural extension to the continuous case by discriminatively learning a linear appearance model locally at each discrete view. We evaluate continuous viewpoint estimation on a dataset of everyday objects collected using IMUs for groundtruth annotation: our mixture model shows great promise comparing to a number of baselines including discrete nearest neighbor and linear regression.",
"title": ""
}
] |
scidocsrr
|
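The positive and negative passage lists pair naturally with the query when a record like the one above is used to train or evaluate a reranker. Below is a minimal sketch of flattening one row into labelled (query, passage, label) examples; field names follow the schema above, while everything else (the pairing convention, any downstream model) is an assumption.

```python
from typing import Dict, Iterator, Tuple

def to_labelled_pairs(row: Dict) -> Iterator[Tuple[str, str, int]]:
    """Yield (query, passage_text, label) pairs: 1 for positives, 0 for negatives."""
    query = row["query"]
    for passage in row["positive_passages"]:
        yield query, passage["text"], 1
    for passage in row["negative_passages"]:
        yield query, passage["text"], 0

# Applied to the record above, this yields one labelled pair per listed passage,
# which could then be fed to e.g. a cross-encoder or used to score a retriever.
```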
2ef7d529cc9a6beceab3a58769004d2c
|
Assembly Technologies for Integrated Transmitter/Receiver Optical Sub-Assembly Modules
|
[
{
"docid": "134d85937dc13e4174e2ddb99197f924",
"text": "A compact hybrid-integrated 100 Gb/s (4 lane × 25.78125 Gb/s) transmitter optical sub-assembly (TOSA) has been developed for a 100 Gb/s transceiver for 40-km transmission over a single-mode fiber. The TOSA has a simple configuration in which four electro-absorption modulator-integrated distributed feedback (EADFB) lasers are directly attached to the input waveguide end-face of a silica-based arrayed waveguide grating (AWG) multiplexer without bulk lenses. To achieve a high optical butt coupling efficiency between the EADFB lasers and the AWG multiplexer, we integrated a laterally tapered spot-size converter (SSC) for the EADFB laser and employed a waveguide with a high refractive index difference of 2.0% for the AWG multiplexer. By optimizing the laterally tapered SSC structure, we achieved a butt-coupling loss of less than 3 dB, which is an improvement of around 2 dB compared with a laser without an SSC structure. We also developed an ultracompact AWG multiplexer, which was 6.7 mm × 3.5 mm in size with an insertion loss of less than 1.9 dB. We achieved this by using a Mach-Zehnder interferometer-synchronized configuration to obtain a low loss and wide flat-top transmission filter spectra. The TOSA body size was 19.9 mm (L) × 6.0 mm (W) × 5.8 mm (H). Error-free operation was demonstrated for a 40-km transmission when all the lanes were driven simultaneously with a low EA modulator driving voltage of 1.5 V at an operating temperature of 55 °C.",
"title": ""
}
] |
[
{
"docid": "635438f0937666b5f07de348b30b13c1",
"text": "Management of the horseshoe crab, Limulus polyphemus, is currently surrounded by controversy. The species is considered a multiple-use resource, as it plays an important role as bait in a commercial fishery, as a source of an important biomedical product, as an important food source for multiple species of migratory shorebirds, as well as in several other minor, but important, uses. Concern has arisen that horseshoe crabs may be declining in number. However, traditional management historically data have not been kept for this species. In this review we discuss the general biology, ecology, and life history of the horseshoe crab. We discuss the role the horseshoe crab plays in the commercial fishery, in the biomedical industry, as well as for the shorebirds. We examine the economic impact the horseshoe crab has in the mid-Atlantic region and review the current developments of alternatives to the horseshoe crab resource. We discuss the management of horseshoe crabs by including a description of the Atlantic States Marine Fisheries Commission (ASMFC) and its management process. An account of the history of horseshoe crab management is included, as well as recent and current regulations and restrictions.",
"title": ""
},
{
"docid": "1ab0f5075fc35f07b7e79786f459f7ba",
"text": "In this paper, the impact of the response of a wind farm (WF) on the operation of a nearby grid is investigated during network disturbances. Only modern variable speed wind turbines are treated in this work. The new E.ON Netz fault response code for WF is taken as the base case for the study. The results found in this paper are that the performance of the used Cigre 32-bus test system during disturbances is improved when the WF is complying with the E.ON code compared to the traditional unity power factor operation. Further improvements are found when the slope of the reactive current support line is increased from the E.ON specified value. In addition, a larger converter of a variable speed wind turbine is exploited that is to be used in order to improve the stability of a nearby grid by extending the reactive current support. By doing so, it is shown in this paper that the voltage profile at the point of common coupling (pcc) as well as the transient stability of the grid are improved compared to the original E.ON code, in addition to the improvements already achieved by using the E.ON code in its original form. Finally, regarding the utilization of a larger converter, it is important to point out that the possible reactive power in-feed into the pcc from an offshore WF decreases with increasing cable length during network faults, making it difficult to support the grid with extra reactive power during disturbances.",
"title": ""
},
{
"docid": "ad918df13aaa2e78c92a7626699f1ecc",
"text": "Machine learning techniques, namely convolutional neural networks (CNN) and regression forests, have recently shown great promise in performing 6-DoF localization of monocular images. However, in most cases imagesequences, rather only single images, are readily available. To this extent, none of the proposed learning-based approaches exploit the valuable constraint of temporal smoothness, often leading to situations where the per-frame error is larger than the camera motion. In this paper we propose a recurrent model for performing 6-DoF localization of video-clips. We find that, even by considering only short sequences (20 frames), the pose estimates are smoothed and the localization error can be drastically reduced. Finally, we consider means of obtaining probabilistic pose estimates from our model. We evaluate our method on openly-available real-world autonomous driving and indoor localization datasets.",
"title": ""
},
{
"docid": "3b584918e05d5e7c0c34f3ad846285d3",
"text": "Recently, there is increasing interest and research on the interpretability of machine learning models, for example how they transform and internally represent EEG signals in Brain-Computer Interface (BCI) applications. This can help to understand the limits of the model and how it may be improved, in addition to possibly provide insight about the data itself. Schirrmeister et al. (2017) have recently reported promising results for EEG decoding with deep convolutional neural networks (ConvNets) trained in an end-to-end manner and, with a causal visualization approach, showed that they learn to use spectral amplitude changes in the input. In this study, we investigate how ConvNets represent spectral features through the sequence of intermediate stages of the network. We show higher sensitivity to EEG phase features at earlier stages and higher sensitivity to EEG amplitude features at later stages. Intriguingly, we observed a specialization of individual stages of the network to the classical EEG frequency bands alpha, beta, and high gamma. Furthermore, we find first evidence that particularly in the last convolutional layer, the network learns to detect more complex oscillatory patterns beyond spectral phase and amplitude, reminiscent of the representation of complex visual features in later layers of ConvNets in computer vision tasks. Our findings thus provide insights into how ConvNets hierarchically represent spectral EEG features in their intermediate layers and suggest that ConvNets can exploit and might help to better understand the compositional structure of EEG time series.",
"title": ""
},
{
"docid": "b99efb63e8016c7f5ab09e868ae894da",
"text": "The popular bag of words approach for action recognition is based on the classifying quantized local features density. This approach focuses excessively on the local features but discards all information about the interactions among them. Local features themselves may not be discriminative enough, but combined with their contexts, they can be very useful for the recognition of some actions. In this paper, we present a novel representation that captures contextual interactions between interest points, based on the density of all features observed in each interest point's mutliscale spatio-temporal contextual domain. We demonstrate that augmenting local features with our contextual feature significantly improves the recognition performance.",
"title": ""
},
{
"docid": "1600d4662fc5939c5f737756e2d3e823",
"text": "Predicate encryption is a new paradigm for public-key encryption that generalizes identity-based encryption and more. In predicate encryption, secret keys correspond to predicates and ciphertexts are associated with attributes; the secret key SK f corresponding to a predicate f can be used to decrypt a ciphertext associated with attribute I if and only if f(I)=1. Constructions of such schemes are currently known only for certain classes of predicates. We construct a scheme for predicates corresponding to the evaluation of inner products over ℤ N (for some large integer N). This, in turn, enables constructions in which predicates correspond to the evaluation of disjunctions, polynomials, CNF/DNF formulas, thresholds, and more. Besides serving as a significant step forward in the theory of predicate encryption, our results lead to a number of applications that are interesting in their own right.",
"title": ""
},
{
"docid": "0485307bff6f6b84031d2b5d47abb239",
"text": "Cache attacks pose a threat to any code whose execution ow or memory accesses depend on sensitive information. Especially in public clouds, where caches are shared across several tenants, cache attacks remain an unsolved problem. Cache attacks rely on evictions by the spy process, which alter the execution behavior of the victim process. We show that hardware performance events of cryptographic routines reveal the presence of cache attacks. Based on this observation, we propose CacheShield, a tool to protect legacy code by monitoring its execution and detecting the presence of cache attacks, thus providing the opportunity to take preventative measures. CacheShield can be run by users and does not require alteration of the OS or hypervisor, while previously proposed software-based countermeasures require cooperation from the hypervisor. Unlike methods that try to detect malicious processes, our approach is lean, as only a fraction of the system needs to be monitored. It also integrates well into today’s cloud infrastructure, as concerned users can opt to use CacheShield without support from the cloud service provider. Our results show that CacheShield detects cache attacks fast, with high reliability, and with few false positives, even in the presence of strong noise.",
"title": ""
},
{
"docid": "32a3ed78cd8abe70977ef28bede467fd",
"text": "Plagiarism in the sense of “theft of intellectual property” has been around for as long as humans have produced work of art and research. However, easy access to the Web, large databases, and telecommunication in general, has turned plagiarism into a serious problem for publishers, researchers and educational institutions. In this paper, we concentrate on textual plagiarism (as opposed to plagiarism in music, paintings, pictures, maps, technical drawings, etc.). We first discuss the complex general setting, then report on some results of plagiarism detection software and finally draw attention to the fact that any serious investigation in plagiarism turns up rather unexpected side-effects. We believe that this paper is of value to all researchers, educators and students and should be considered as seminal work that hopefully will encourage many still deeper investigations.",
"title": ""
},
{
"docid": "fc875b50a03dcae5cbde23fa7f9b16bf",
"text": "Although considerable research has shown the importance of social connection for physical health, little is known about the higher-level neurocognitive processes that link experiences of social connection or disconnection with health-relevant physiological responses. Here we review the key physiological systems implicated in the link between social ties and health and the neural mechanisms that may translate social experiences into downstream health-relevant physiological responses. Specifically, we suggest that threats to social connection may tap into the same neural and physiological 'alarm system' that responds to other critical survival threats, such as the threat or experience of physical harm. Similarly, experiences of social connection may tap into basic reward-related mechanisms that have inhibitory relationships with threat-related responding. Indeed, the neurocognitive correlates of social disconnection and connection may be important mediators for understanding the relationships between social ties and health.",
"title": ""
},
{
"docid": "2f04cd1b83b2ec17c9930515e8b36b95",
"text": "Traditionally, visualization design assumes that the e↵ectiveness of visualizations is based on how much, and how clearly, data are presented. We argue that visualization requires a more nuanced perspective. Data are not ends in themselves, but means to an end (such as generating knowledge or assisting in decision-making). Focusing on the presentation of data per se can result in situations where these higher goals are ignored. This is especially the case for situations where cognitive or perceptual biases make the presentation of “just” the data as misleading as willful distortion. We argue that we need to de-sanctify data, and occasionally promote designs which distort or obscure data in service of understanding. We discuss examples of beneficial embellishment, distortion, and obfuscation in visualization, and argue that these examples are representative of a wider class of techniques for going beyond simplistic presentations of data.",
"title": ""
},
{
"docid": "3d81e3ed2c0614544887183ac7c049ce",
"text": "Today, science is passing through an era of transformation, where the inundation of data, dubbed data deluge is influencing the decision making process. The science is driven by the data and is being termed as data science. In this internet age, the volume of the data has grown up to petabytes, and this large, complex, structured or unstructured, and heterogeneous data in the form of “Big Data” has gained significant attention. The rapid pace of data growth through various disparate sources, especially social media such as Facebook, has seriously challenged the data analytic capabilities of traditional relational databases. The velocity of the expansion of the amount of data gives rise to a complete paradigm shift in how new age data is processed. Confidence in the data engineering of the existing data processing systems is gradually fading whereas the capabilities of the new techniques for capturing, storing, visualizing, and analyzing data are evolving. In this review paper, we discuss some of the modern Big Data models that are leading contributors in the NoSQL era and claim to address Big Data challenges in reliable and efficient ways. Also, we take the potential of Big Data into consideration and try to reshape the original operationaloriented definition of “Big Science” (Furner, 2003) into a new data-driven definition and rephrase it as “The science that deals with Big Data is Big Science.” Disciplines Agriculture | Bioresource and Agricultural Engineering | Computer Sciences | Statistics and Probability Comments This article is from Data Science Journal. 13, pp.138–157. DOI: http://doi.org/10.2481/dsj.14-041. Posted with permission. Creative Commons License This work is licensed under a Creative Commons Attribution 4.0 License. This article is available at Iowa State University Digital Repository: http://lib.dr.iastate.edu/abe_eng_pubs/771 A BRIEF REVIEW ON LEADING BIG DATA MODELS Sugam Sharma 1* , Udoyara S Tim 2 , Johnny Wong 3 , Shashi Gadia 3 , Subhash Sharma 4 1 Center for Survey Statistics and Methodology, Iowa State University, Ames, IA, 50010, USA *Email: sugamsha@iastate.edu 2 Department of Agricultural and Biosystems Engineering, Iowa State University, Ames, IA, 50010, USA 3 Department of Computer Science, Iowa State University, Ames, IA, 50010, USA 4 Electronics & Computer Discipline, DPT, Indian Institute of Technology, Roorkee, 247001, INDIA",
"title": ""
},
{
"docid": "57cdf599b147bab983ffca8ddd0aa62b",
"text": "Usernames are ubiquitously used for identification and authentication purposes on web services and the Internet at large, ranging from the local-part of email addresses to identifiers in social networks. Usernames are generally alphanumerical strings chosen by the users and, by design, are unique within the scope of a single organization or web service. In this paper we investigate the feasibility of using usernames to trace or link multiple profiles across services that belong to the same individual. The intuition is that the probability that two usernames refer to the same physical person strongly depends on the “entropy” of the username string itself. Our experiments, based on usernames gathered from real web services, show that a significant portion of the users’ profiles can be linked using their usernames. In collecting the data needed for our study, we also show that users tend to choose a small number of related usernames and use them across many services. To the best of our knowledge, this is the first time that usernames are considered as a source of information when profiling users on the Internet.",
"title": ""
},
{
"docid": "750d095a00bfce93765b10033b00b9fd",
"text": "This paper studies a new renewable energy investment model through crowdfunding, which is motivated by emerging community solar farms. In this paper we develop a sequential game theory model to capture the interactions among crowdfunders, the solar farm owner, and an electricity company who purchases renewable energy generated by the solar farm in a multi-period framework. By characterizing a unique subgame-perfect equilibrium, and comparing it with a benchmark model without crowdfunding, we find that under crowdfunding although the farm owner reduces its investment level, the overall green energy investment level is increased due to the contribution of crowdfunders. We also find that crowdfunding can increase the penetration of green energy in consumption and thus reduce the energy procurement cost of the electricity company. Finally, the numerical results based on real data indicates crowdfunding is a simple but effective way to boost green generation.",
"title": ""
},
{
"docid": "07425e53be0f6314d52e3b4de4d1b601",
"text": "Delay discounting was investigated in opioid-dependent and non-drug-using control participants. The latter participants were matched to the former on age, gender, education, and IQ. Participants in both groups chose between hypothetical monetary rewards available either immediately or after a delay. Delayed rewards were $1,000, and the immediate-reward amount was adjusted until choices reflected indifference. This procedure was repeated at each of 7 delays (1 week to 25 years). Opioid-dependent participants were given a second series of choices between immediate and delayed heroin, using the same procedures (i.e., the amount of delayed heroin was that which could be purchased with $1,000). Opioid-dependent participants discounted delayed monetary rewards significantly more than did non-drug-using participants. Furthermore opioid-dependent participants discounted delayed heroin significantly more than delayed money.",
"title": ""
},
{
"docid": "fdcf6e60ad11b10fba077a62f7f1812d",
"text": "Delivering web software as a service has grown into a powerful paradigm for deploying a wide range of Internetscale applications. However for end-users, accessing software as a service is fundamentally at odds with free software, because of the associated cost of maintaining server infrastructure. Users end up paying for the service in one way or another, often indirectly through ads or the sale of their private data. In this paper, we aim to enable a new generation of portable and free web apps by proposing an alternative model to the existing client-server web architecture. freedom.js is a platform for developing and deploying rich multi-user web apps, where application logic is pushed out from the cloud and run entirely on client-side browsers. By shifting the responsibility of where code runs, we can explore a novel incentive structure where users power applications with their own resources, gain the ability to control application behavior and manage privacy of data. For developers, we lower the barrier of writing popular web apps by removing much of the deployment cost and making applications simpler to write. We provide a set of novel abstractions that allow developers to automatically scale their application with low complexity and overhead. freedom.js apps are inherently sandboxed, multi-threaded, and composed of reusable modules. We demonstrate the flexibility of freedom.js through a number of applications that we have built on top of the platform, including a messaging application, a social file synchronization tool, and a peer-to-peer (P2P) content delivery network (CDN). Our experience shows that we can implement a P2P-CDN with 50% fewer lines of application-specific code in the freedom.js framework when compared to a standalone version. In turn, we incur an additional startup latency of 50-60ms (about 6% of the page load time) with the freedom.js version, without any noticeable impact on system throughput.",
"title": ""
},
{
"docid": "30bf6e5874bc893f8762dc3b59af552b",
"text": "Video-based facial expression recognition has received significant attention in recent years due to its widespread applications. One key issue for video-based facial expression analysis in practice is how to extract dynamic features. In this paper, a novel approach is presented using histogram sequence of local Gabor binary patterns from three orthogonal planes (LGBP-TOP). In this approach, every facial expression sequence is firstly convolved with the multi-scale and multi-orientation Gabor filters to extract the Gabor Magnitude Sequences (GMSs). Then, we use local binary patterns from three orthogonal planes (LBP-TOP) on each GMS to further enhance the feature extraction. Finally, the facial expression sequence is modeled as a histogram sequence by concatenating the histogram pieces of all the local regions of all the LGBP-TOP maps. For recognition, Support Vector Machine (SVM) is exploited. Our experimental results on the extended Cohn-Kanade database (CK+) demonstrate that the proposed method has achieved the best results compared to other methods in recent years.",
"title": ""
},
{
"docid": "8074d30cb422922bc134d07547932685",
"text": "Research paper recommenders emerged over the last decade to ease finding publications relating to researchers' area of interest. The challenge was not just to provide researchers with very rich publications at any time, any place and in any form but to also offer the right publication to the right researcher in the right way. Several approaches exist in handling paper recommender systems. However, these approaches assumed the availability of the whole contents of the recommending papers to be freely accessible, which is not always true due to factors such as copyright restrictions. This paper presents a collaborative approach for research paper recommender system. By leveraging the advantages of collaborative filtering approach, we utilize the publicly available contextual metadata to infer the hidden associations that exist between research papers in order to personalize recommendations. The novelty of our proposed approach is that it provides personalized recommendations regardless of the research field and regardless of the user's expertise. Using a publicly available dataset, our proposed approach has recorded a significant improvement over other baseline methods in measuring both the overall performance and the ability to return relevant and useful publications at the top of the recommendation list.",
"title": ""
},
{
"docid": "1ca4294857fcdd1a12402a0d985914c7",
"text": "Alignment of 3D objects from 2D images is one of the most important and well studied problems in computer vision. A typical object alignment system consists of a landmark appearance model which is used to obtain an initial shape and a shape model which refines this initial shape by correcting the initialization errors. Since errors in landmark initialization from the appearance model propagate through the shape model, it is critical to have a robust landmark appearance model. While there has been much progress in designing sophisticated and robust shape models, there has been relatively less progress in designing robust landmark detection models. In this paper we present an efficient and robust landmark detection model which is designed specifically to minimize localization errors thereby leading to state-of-the-art object alignment performance. We demonstrate the efficacy and speed of the proposed approach on the challenging task of multi-view car alignment.",
"title": ""
},
{
"docid": "b3dd3c4325f4ef963d1bf4b5c64816c0",
"text": "The Internet was originally designed to facilitate communication and research activities. However, the dramatic increase in the use of the Internet in recent years has led to pathological use (Internet addiction). This study is a preliminary investigation of the extent of Internet addiction in school children 16-18 years old in India. The Davis Online Cognition Scale (DOCS) was used to assess pathological Internet use. On the basis of total scores obtained (N = 100) on the DOCS, two groups were identified--dependents (18) and non-dependents (21), using mean +/- 1/2 SD as the criterion for selection. The UCLA loneliness scale was also administered to the subjects. Significant behavioral and functional usage differences were revealed between the two groups. Dependents were found to delay other work to spend time online, lose sleep due to late-night logons, and feel life would be boring without the Internet. The hours spent on the Internet by dependents were greater than those of non-dependents. On the loneliness measure, significant differences were found between the two groups, with the dependents scoring higher than the non-dependents.",
"title": ""
},
{
"docid": "7f27e9b29e6ed2800ef850e6022d29ba",
"text": "In 2004, the US Center for Disease Control (CDC) published a paper showing that there is no link between the age at which a child is vaccinated with MMR and the vaccinated children's risk of a subsequent diagnosis of autism. One of the authors, William Thompson, has now revealed that statistically significant information was deliberately omitted from the paper. Thompson first told Dr S Hooker, a researcher on autism, about the manipulation of the data. Hooker analysed the raw data from the CDC study afresh. He confirmed that the risk of autism among African American children vaccinated before the age of 2 years was 340% that of those vaccinated later.",
"title": ""
}
] |
scidocsrr
|
71c69b194500c77b8a0ee5ecae888e4e
|
Hooked on Facebook: The Role of Social Anxiety and Need for Social Assurance in Problematic Use of Facebook
|
[
{
"docid": "89c9ad792245fc7f7e7e3b00c1e8147a",
"text": "Contrasting hypotheses were posed to test the effect of Facebook exposure on self-esteem. Objective Self-Awareness (OSA) from social psychology and the Hyperpersonal Model from computer-mediated communication were used to argue that Facebook would either diminish or enhance self-esteem respectively. The results revealed that, in contrast to previous work on OSA, becoming self-aware by viewing one's own Facebook profile enhances self-esteem rather than diminishes it. Participants that updated their profiles and viewed their own profiles during the experiment also reported greater self-esteem, which lends additional support to the Hyperpersonal Model. These findings suggest that selective self-presentation in digital media, which leads to intensified relationship formation, also influences impressions of the self.",
"title": ""
}
] |
[
{
"docid": "e9e263a89f071f87f03199293fbeba77",
"text": "Departament d’Enginyeria Informàtica i Matemàtiques, Universitat Rovira i Virgili, 43007 Tarragona, Spain Carolina Center for Interdisciplinary Applied Mathematics, Department of Mathematics, University of North Carolina, Chapel Hill, NC 27599-3250, USA Oxford Centre for Industrial and Applied Mathematics, Mathematical Institute, University of Oxford, OX2 6GG, UK; CABDyN Complexity Centre, University of Oxford, Oxford OX1 1HP, UK; and Department of Mathematics, University of California, Los Angeles, California 90095, USA",
"title": ""
},
{
"docid": "ff619ce19b787d32aa78a6ac295d1f1d",
"text": "Mullerian duct anomalies (MDAs) are rare, affecting approximately 1% of all women and about 3% of women with poor reproductive outcomes. These congenital anomalies usually result from one of the following categories of abnormalities of the mullerian ducts: failure of formation (no development or underdevelopment) or failure of fusion of the mullerian ducts. The American Fertility Society (AFS) classification of uterine anomalies is widely accepted and includes seven distinct categories. MR imaging has consolidated its role as the imaging modality of choice in the evaluation of MDA. MRI is capable of demonstrating the anatomy of the female genital tract remarkably well and is able to provide detailed images of the intra-uterine zonal anatomy, delineate the external fundal contour of the uterus, and comprehensively image the entire female pelvis in multiple imaging planes in a single examination. The purpose of this pictorial essay is to show the value of MRI in the diagnosis of MDA and to review the key imaging features of anomalies of formation and fusion, emphasizing the relevance of accurate diagnosis before therapeutic intervention.",
"title": ""
},
{
"docid": "49002be42dfa6e6998e6975203357e3b",
"text": "In this paper, we present a new tone mapping algorithm for the display of high dynamic range images, inspired by adaptive process of the human visual system. The proposed algorithm is based on the center-surround Retinex processing. In our method, the local details are enhanced according to a non-linear adaptive spatial filter (Gaussian filter), whose shape (filter variance) is adapted to high-contrast edges of the image. Thus our method does not generate halo artifacts meanwhile preserves visibility and contrast impression of high dynamic range scenes in the common display devices. The proposed method is tested on a variety of HDR images and the results show the good performance of our method in terms of visual quality.",
"title": ""
},
{
"docid": "56a6ea3418b9a1edf591b860f128ea82",
"text": "Convolutional Neural Networks (CNNs) have gained a remarkable success on many real-world problems in recent years. However, the performance of CNNs is highly relied on their architectures. For some state-of-the-art CNNs, their architectures are hand-crafted with expertise in both CNNs and the investigated problems. To this end, it is difficult for researchers, who have no extended expertise in CNNs, to explore CNNs for their own problems of interest. In this paper, we propose an automatic architecture design method for CNNs by using genetic algorithms, which is capable of discovering a promising architecture of a CNN on handling image classification tasks. The proposed algorithm does not need any pre-processing before it works, nor any post-processing on the discovered CNN, which means it is completely automatic. The proposed algorithm is validated on widely used benchmark datasets, by comparing to the state-of-the-art peer competitors covering eight manually designed CNNs, four semi-automatically designed CNNs and additional four automatically designed CNNs. The experimental results indicate that the proposed algorithm achieves the best classification accuracy consistently among manually and automatically designed CNNs. Furthermore, the proposed algorithm also shows the competitive classification accuracy to the semi-automatic peer competitors, while reducing 10 times of the parameters. In addition, on the average the proposed algorithm takes only one percentage of computational resource compared to that of all the other architecture discovering algorithms. Experimental codes and the discovered architectures along with the trained weights are made public to the interested readers.",
"title": ""
},
{
"docid": "cea20aad38c5ca08bc2a07bde39ba2d0",
"text": "The existing snow/rain removal methods often fail for heavy snow/rain and dynamic scene. One reason for the failure is due to the assumption that all the snowflakes/rain streaks are sparse in snow/rain scenes. The other is that the existing methods often can not differentiate moving objects and snowflakes/rain streaks. In this paper, we propose a model based on matrix decomposition for video desnowing and deraining to solve the problems mentioned above. We divide snowflakes/rain streaks into two categories: sparse ones and dense ones. With background fluctuations and optical flow information, the detection of moving objects and sparse snowflakes/rain streaks is formulated as a multi-label Markov Random Fields (MRFs). As for dense snowflakes/rain streaks, they are considered to obey Gaussian distribution. The snowflakes/rain streaks, including sparse ones and dense ones, in scene backgrounds are removed by low-rank representation of the backgrounds. Meanwhile, a group sparsity term in our model is designed to filter snow/rain pixels within the moving objects. Experimental results show that our proposed model performs better than the state-of-the-art methods for snow and rain removal.",
"title": ""
},
{
"docid": "e3b707ad340b190393d3384a1a364e63",
"text": "ed Log Lines Categorize Bins Figure 3. High-level overview of our approach for abstracting execution logs to execution events. Table III. Log lines used as a running example to explain our approach. 1. Start check out 2. Paid for, item=bag, quality=1, amount=100 3. Paid for, item=book, quality=3, amount=150 4. Check out, total amount is 250 5. Check out done Copyright q 2008 John Wiley & Sons, Ltd. J. Softw. Maint. Evol.: Res. Pract. 2008; 20:249–267 DOI: 10.1002/smr AN AUTOMATED APPROACH FOR ABSTRACTING EXECUTION LOGS 257 Table IV. Running example logs after the anonymize step. 1. Start check out 2. Paid for, item=$v, quality=$v, amount=$v 3. Paid for, item=$v, quality=$v, amount=$v 4. Check out, total amount=$v 5. Check out done Table V. Running example logs after the tokenize step. Bin names (no. of words, no. of parameters) Log lines (3,0) 1. Start check out 5. Check out done (5,1) 4. Check out, total amount=$v (8,3) 2. Paid for, item=$v, quality=$v, amount=$v 3. Paid for, item=$v, quality=$v, amount=$v 4.2.2. The tokenize step The tokenize step separates the anonymized log lines into different groups (i.e., bins) according to the number of words and estimated parameters in each log line. The use of multiple bins limits the search space of the following step (i.e., the categorize step). The use of bins permits us to process large log files in a timely fashion using a limited memory footprint since the analysis is done per bin instead of having to load up all the lines in the log file. We estimate the number of parameters in a log line by counting the number of generic terms (i.e., $v). Log lines with the same number of tokens and parameters are placed in the same bin. Table V shows the sample log lines after the anonymize and tokenize steps. The left column indicates the name of a bin. Each bin is named with a tuple: number of words and number of parameters that are contained in the log line associated with that bin. The right column in Table VI shows the log lines. Each row shows the bin and its corresponding log lines. The second and the third log lines contain 8 words and are likely to contain 3 parameters. Thus, the second and third log lines are grouped together in the (8,3) bin. Similarly, the first and last log lines are grouped together in the (3,0) bin since they both contain 3 words and are likely to contain no parameters. 4.2.3. The categorize step The categorize step compares log lines in each bin and abstracts them to the corresponding execution events. The inferred execution events are stored in an execution events database for future references. The algorithm used in the categorize step is shown below. Our algorithm goes through the log lines Copyright q 2008 John Wiley & Sons, Ltd. J. Softw. Maint. Evol.: Res. Pract. 2008; 20:249–267 DOI: 10.1002/smr 258 Z. M. JIANG ET AL. Table VI. Running example logs after the categorize step. Execution events (word parameter id) Log lines 3 0 1 1. Start check out 3 0 2 5. Check out done 5 1 1 4. Check out, total amount=$v 8 3 1 2. Paid for, item=$v, quality=$v, amount=$v 8 3 1 3. Paid for, item=$v, quality=$v, amount=$v bin by bin. After this step, each log line should be abstracted to an execution event. Table VI shows the results of our working example after the categorize step. 
for each bin bi for each log line lk in bin bi for each execution event e(bi , j) corresponding to bi in the events DB perform word by word comparison between e(bi , j) and lk if (there is no difference) then lk is of type e(bi , j) break end if end for // advance to next e(bi , j) if ( lk does not have a matching execution event) then lk is a new execution event store an abstracted lk into the execution events DB end if end for // advance to the next log line end for // advance to the next bin We now explain our algorithm using the running example. Our algorithm starts with the (3,0) bin. Initially, there are no execution events that correspond to this bin yet. Therefore, the execution event corresponding to the first log line becomes the first execution event namely 3 0 1. The 1 at the end of 3 0 1 indicates that this is the first execution event to correspond to the bin, which has 3 words and no parameters (i.e., bin 3 0). Then the algorithm moves to the next log line in the (3,0) bin, which contains the fifth log line. The algorithm compares the fifth log line with all the existing execution events in the (3,0) bin. Currently, there is only one execution event: 3 0 1. As the fifth log line is not similar to the 3 0 1 execution event, we create a new execution event 3 0 2 for the fifth log line. With all the log lines in the (3,0) bin processed, we can move on to the (5,1) bin. As there are no execution events that correspond to the (5,1) bin initially, the fourth log line gets assigned to a new execution event 5 1 1. Finally, we move on to the (8,3) bin. First, the second log line gets assigned with a new execution event 8 3 1 since there are no execution events corresponding to this bin yet. As the third log line is the same as the second log line (after the anonymize step), the third log line is categorized as the same execution event as the second log Copyright q 2008 John Wiley & Sons, Ltd. J. Softw. Maint. Evol.: Res. Pract. 2008; 20:249–267 DOI: 10.1002/smr AN AUTOMATED APPROACH FOR ABSTRACTING EXECUTION LOGS 259 line. Table VI shows the sample log lines after the categorize step. The left column is the abstracted execution event. The right column shows the line number together with the corresponding log lines. 4.2.4. The reconcile step Since the anonymize step uses heuristics to identify dynamic information in a log line, there is a chance that we might miss to anonymize some dynamic information. The missed dynamic information will result in the abstraction of several log lines to several execution events that are very similar. Table VII shows an example of dynamic information that was missed by the anonymize step. The table shows five different execution events. However, the user names after ‘for user’ are dynamic information and should have been replaced by the generic token ‘$v’. All the log lines shown in Table VII should have been abstracted to the same execution event after the categorize step. The reconcile step addresses this situation. All execution events are re-examined to identify which ones are to be merged. Execution events are merged if: 1. They belong to the same bin. 2. They differ from each other by one token at the same positions. 3. There exists a few of such execution events. We used a threshold of five events in our case studies. Other values are possibly based on the content of the analyzed log files. The threshold prevents the merging of similar yet different execution events, such as ‘Start processing’ and ‘Stop processing’, which should not be merged. 
Looking at the execution events in Table VII, we note that they all belong to the ‘5 0’ bin and differ from each other only in the last token. Since there are five of such events, we merged them into one event. Table VIII shows the execution events from Table VII after the reconcile step. Note that if the ‘5 0’ bin contains another execution event: ‘Stop processing for user John’; it will not be merged with the above execution events since it differs by two tokens instead of only the last token. Table VII. Sample logs that the categorize step would fail to abstract. Event IDs Execution events 5 0 1 Start processing for user Jen 5 0 2 Start processing for user Tom 5 0 3 Start processing for user Henry 5 0 4 Start processing for user Jack 5 0 5 Start processing for user Peter Table VIII. Sample logs after the reconcile step. Event IDs Execution events 5 0 1 Start processing for user $v Copyright q 2008 John Wiley & Sons, Ltd. J. Softw. Maint. Evol.: Res. Pract. 2008; 20:249–267 DOI: 10.1002/smr 260 Z. M. JIANG ET AL.",
"title": ""
},
{
"docid": "60094e041c1be864ba8a636308b7ee12",
"text": "This paper presents two chatbot systems, ALICE and Elizabeth, illustrating the dialogue knowledge representation and pattern matching techniques of each. We discuss the problems which arise when using the Dialogue Diversity Corpus to retrain a chatbot system with human dialogue examples. A Java program to convert from dialog transcript to AIML format provides a basic implementation of corpusbased chatbot training.. We conclude that dialogue researchers should adopt clearer standards for transcription and markup format in dialogue corpora to be used in training a chatbot system more effectively.",
"title": ""
},
{
"docid": "a649a105b1d127c9c9ea2a9d4dad5d11",
"text": "Given the size and confidence of pairwise local orderings, angular embedding (AE) finds a global ordering with a near-global optimal eigensolution. As a quadratic criterion in the complex domain, AE is remarkably robust to outliers, unlike its real domain counterpart LS, the least squares embedding. Our comparative study of LS and AE reveals that AE's robustness is due not to the particular choice of the criterion, but to the choice of representation in the complex domain. When the embedding is encoded in the angular space, we not only have a nonconvex error function that delivers robustness, but also have a Hermitian graph Laplacian that completely determines the optimum and delivers efficiency. The high quality of embedding by AE in the presence of outliers can hardly be matched by LS, its corresponding L1 norm formulation, or their bounded versions. These results suggest that the key to overcoming outliers lies not with additionally imposing constraints on the embedding solution, but with adaptively penalizing inconsistency between measurements themselves. AE thus significantly advances statistical ranking methods by removing the impact of outliers directly without explicit inconsistency characterization, and advances spectral clustering methods by covering the entire size-confidence measurement space and providing an ordered cluster organization.",
"title": ""
},
{
"docid": "549f8fe6d456a818c36976c7e47e4033",
"text": "Given the rapid proliferation of trajectory-based approaches to study clinical consequences to stress and potentially traumatic events (PTEs), there is a need to evaluate emerging findings. This review examined convergence/divergences across 54 studies in the nature and prevalence of response trajectories, and determined potential sources of bias to improve future research. Of the 67 cases that emerged from the 54 studies, the most consistently observed trajectories following PTEs were resilience (observed in: n = 63 cases), recovery (n = 49), chronic (n = 47), and delayed onset (n = 22). The resilience trajectory was the modal response across studies (average of 65.7% across populations, 95% CI [0.616, 0.698]), followed in prevalence by recovery (20.8% [0.162, 0.258]), chronicity (10.6%, [0.086, 0.127]), and delayed onset (8.9% [0.053, 0.133]). Sources of heterogeneity in estimates primarily resulted from substantive population differences rather than bias, which was observed when prospective data is lacking. Overall, prototypical trajectories have been identified across independent studies in relatively consistent proportions, with resilience being the modal response to adversity. Thus, trajectory models robustly identify clinically relevant patterns of response to potential trauma, and are important for studying determinants, consequences, and modifiers of course following potential trauma.",
"title": ""
},
{
"docid": "00bfce08da755a4e139ae4507ed28141",
"text": "Multiple-view stereo reconstruction is a key step in image-based 3D acquisition and patchmatch based method is suited for large scale scene reconstruction. In this paper we extend the two-view patchmatch stereo to multiple-view in the multiple-view stereo pipeline. The key of the proposed method is to select multiple suitable neighboring images for a reference image, compute the depth-maps and merge the depth-maps. Experimental results on benchmark data sets demonstrate the accuracy and efficiency of the proposed method.",
"title": ""
},
{
"docid": "ec26505d813ed98ac3f840ea54358873",
"text": "In this paper we address cardinality estimation problem which is an important subproblem in query optimization. Query optimization is a part of every relational DBMS responsible for finding the best way of the execution for the given query. These ways are called plans. The execution time of different plans may differ by several orders, so query optimizer has a great influence on the whole DBMS performance. We consider cost-based query optimization approach as the most popular one. It was observed that costbased optimization quality depends much on cardinality estimation quality. Cardinality of the plan node is the number of tuples returned by it. In the paper we propose a novel cardinality estimation approach with the use of machine learning methods. The main point of the approach is using query execution statistics of the previously executed queries to improve cardinality estimations. We called this approach adaptive cardinality estimation to reflect this point. The approach is general, flexible, and easy to implement. The experimental evaluation shows that this approach significantly increases the quality of cardinality estimation, and therefore increases the DBMS performance for some queries by several times or even by several dozens of times.",
"title": ""
},
{
"docid": "83981d52eb5e58d6c2d611b25c9f6d12",
"text": "This tutorial provides an introduction to Simultaneous Localisation and Mapping (SLAM) and the extensive research on SLAM that has been undertaken over the past decade. SLAM is the process by which a mobile robot can build a map of an environment and at the same time use this map to compute it’s own location. The past decade has seen rapid and exciting progress in solving the SLAM problem together with many compelling implementations of SLAM methods. Part I of this tutorial (this paper), describes the probabilistic form of the SLAM problem, essential solution methods and significant implementations. Part II of this tutorial will be concerned with recent advances in computational methods and new formulations of the SLAM problem for large scale and complex environments.",
"title": ""
},
{
"docid": "2746acb7d620802e949bef7fb855bfa7",
"text": "Our research approach is to design and develop reliable, efficient, flexible, economical, real-time and realistic wellness sensor networks for smart home systems. The heterogeneous sensor and actuator nodes based on wireless networking technologies are deployed into the home environment. These nodes generate real-time data related to the object usage and movement inside the home, to forecast the wellness of an individual. Here, wellness stands for how efficiently someone stays fit in the home environment and performs his or her daily routine in order to live a long and healthy life. We initiate the research with the development of the smart home approach and implement it in different home conditions (different houses) to monitor the activity of an inhabitant for wellness detection. Additionally, our research extends the smart home system to smart buildings and models the design issues related to the smart building environment; these design issues are linked with system performance and reliability. This research paper also discusses and illustrates the possible mitigation to handle the ISM band interference and attenuation losses without compromising optimum system performance.",
"title": ""
},
{
"docid": "99f62da011921c0ff51daf0c928c865a",
"text": "The Health Belief Model, social learning theory (recently relabelled social cognitive theory), self-efficacy, and locus of control have all been applied with varying success to problems of explaining, predicting, and influencing behavior. Yet, there is conceptual confusion among researchers and practitioners about the interrelationships of these theories and variables. This article attempts to show how these explanatory factors may be related, and in so doing, posits a revised explanatory model which incorporates self-efficacy into the Health Belief Model. Specifically, self-efficacy is proposed as a separate independent variable along with the traditional health belief variables of perceived susceptibility, severity, benefits, and barriers. Incentive to behave (health motivation) is also a component of the model. Locus of control is not included explicitly because it is believed to be incorporated within other elements of the model. It is predicted that the new formulation will more fully account for health-related behavior than did earlier formulations, and will suggest more effective behavioral interventions than have hitherto been available to health educators.",
"title": ""
},
{
"docid": "f9879c1592683bc6f3304f3937d5eee2",
"text": "Altered cell metabolism is a characteristic feature of many cancers. Aside from well-described changes in nutrient consumption and waste excretion, altered cancer cell metabolism also results in changes to intracellular metabolite concentrations. Increased levels of metabolites that result directly from genetic mutations and cancer-associated modifications in protein expression can promote cancer initiation and progression. Changes in the levels of specific metabolites, such as 2-hydroxyglutarate, fumarate, succinate, aspartate and reactive oxygen species, can result in altered cell signalling, enzyme activity and/or metabolic flux. In this Review, we discuss the mechanisms that lead to changes in metabolite concentrations in cancer cells, the consequences of these changes for the cells and how they might be exploited to improve cancer therapy.",
"title": ""
},
{
"docid": "c70466f8b1e70fcdd4b7fe3f2cb772b2",
"text": "We present Tor, a circuit-based low-latency anonymous communication service. This second-generation Onion Routing system addresses limitations in the original design. Tor adds perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for rendezvous points. Tor works on the real-world Internet, requires no special privileges or kernel modifications, requires little synchronization or coordination between nodes, and provides a reasonable tradeoff between anonymity, usability, and efficiency. We briefly describe our experiences with an international network of more than a dozen hosts. We close with a list of open problems in anonymous communication.",
"title": ""
},
{
"docid": "98f76e0ea0f028a1423e1838bdebdccb",
"text": "An operational-transconductance-amplifier (OTA) design for ultra-low voltage ultra-low power applications is proposed. The input stage of the proposed OTA utilizes a bulk-driven pseudo-differential pair to allow minimum supply voltage while achieving a rail-to-rail input range. All the transistors in the proposed OTA operate in the subthreshold region. Using a novel self-biasing technique to bias the OTA obviates the need for extra biasing circuitry and enhances the performance of the OTA. The proposed technique ensures the OTA robustness to process variations and increases design feasibility under ultra-low-voltage conditions. Moreover, the proposed biasing technique significantly improves the common-mode and power-supply rejection of the OTA. To further enhance the bandwidth and allow the use of smaller compensation capacitors, a compensation network based on a damping-factor control circuit is exploited. The OTA is fabricated in a 65 nm CMOS technology. Measurement results show that the OTA provides a low-frequency gain of 46 dB and rail-to-rail input common-mode range with a supply voltage as low as 0.5 V. The dc gain of the OTA is greater than 42 dB for supply voltage as low as 0.35 V. The power dissipation is 182 μW at VDD=0.5 V and 17 μW at VDD=0.35 V.",
"title": ""
},
{
"docid": "4f04597c1c68cf1416ab98148477bd32",
"text": "This paper presents a new fuzzy switching median (FSM) filter employing fuzzy techniques in image processing. The proposed filter is able to remove salt-and-pepper noise in digital images while preserving image details and textures very well. By incorporating fuzzy reasoning in correcting the detected noisy pixel, the low complexity FSM filter is able to outperform some well known existing salt-and-pepper noise fuzzy and classical filters.",
"title": ""
},
{
"docid": "6b497321713a9725fef39b1f0e54acfa",
"text": "In today's time when data is generating by everyone at every moment, and the word is moving so fast with exponential growth of new technologies and innovations in all science and engineering domains, the age of big data is coming, and the potential of learning from this huge amount of data and from different sources is undoubtedly significant to uncover underlying structure and facilitate the development of more intelligent solution. Intelligence is around us, and the concept of big data and learning from it has existed since the emergence of the human being. In this article we focus on data from; sensors, images, and text, and we incorporate the principles of human intelligence; brain - body - environment, as a source of inspiration that allows us to put a new concept based on big data - machine learning--domain and pave the way for intelligent platform.",
"title": ""
},
{
"docid": "3e6c784e5432aab7f09e76b6d6d4e241",
"text": "A conformal, wearable and wireless system for continuously monitoring the local body sweat loss during exercise is demonstrated in this work. The sensor system includes a sweat absorber, an inter-digitated capacitance sensor, and a communication hub for data processing and transmission. Experimental results show that the sensor has excellent sensitivity and consistent response to sweat rate and level. A 150% variation in the sensor capacitance is observed with 50μL/cm2 of sweat collected in the absorber. During wear tests, the sensor system is placed on the subject's right anterior thigh for measuring the local sweat response during exercise (eg. running), and the measured sweat loss (147μL) was verified by the weight change within the absorbent material (144mg). With a conformal and wireless design, this system is ideal for applications in sport performance, dehydration monitoring, and health assessment.",
"title": ""
}
] |
scidocsrr
|
401b0c0e9a1dc9715ee2da3e2f5d8a34
|
Fast and sensitive mapping of nanopore sequencing reads with GraphMap.
|
[
{
"docid": "ee785105669d58052ad3b3a3954ba9fb",
"text": "Data visualization is an essential component of genomic data analysis. However, the size and diversity of the data sets produced by today's sequencing and array-based profiling methods present major challenges to visualization tools. The Integrative Genomics Viewer (IGV) is a high-performance viewer that efficiently handles large heterogeneous data sets, while providing a smooth and intuitive user experience at all levels of genome resolution. A key characteristic of IGV is its focus on the integrative nature of genomic studies, with support for both array-based and next-generation sequencing data, and the integration of clinical and phenotypic data. Although IGV is often used to view genomic data from public sources, its primary emphasis is to support researchers who wish to visualize and explore their own data sets or those from colleagues. To that end, IGV supports flexible loading of local and remote data sets, and is optimized to provide high-performance data visualization and exploration on standard desktop systems. IGV is freely available for download from http://www.broadinstitute.org/igv, under a GNU LGPL open-source license.",
"title": ""
}
] |
[
{
"docid": "19c3bd8d434229d98741b04d3041286b",
"text": "The availability of powerful microprocessors and high-speed networks as commodity components has enabled high performance computing on distributed systems (wide-area cluster computing). In this environment, as the resources are usually distributed geographically at various levels (department, enterprise, or worldwide) there is a great challenge in integrating, coordinating and presenting them as a single resource to the user; thus forming a computational grid. Another challenge comes from the distributed ownership of resources with each resource having its own access policy, cost, and mechanism. The proposed Nimrod/G grid-enabled resource management and scheduling system builds on our earlier work on Nimrod and follows a modular and component-based architecture enabling extensibility, portability, ease of development, and interoperability of independently developed components. It uses the Globus toolkit services and can be easily extended to operate with any other emerging grid middleware services. It focuses on the management and scheduling of computations over dynamic resources scattered geographically across the Internet at department, enterprise, or global level with particular emphasis on developing scheduling schemes based on the concept of computational economy for a real test bed, namely, the Globus testbed (GUSTO).",
"title": ""
},
{
"docid": "8d292592202c948c439f055ca5df9d56",
"text": "This paper provides an overview of the current state of the art in persuasive systems design. All peer-reviewed full papers published at the first three International Conferences on Persuasive Technology were analyzed employing a literature review framework. Results from this analysis are discussed and directions for future research are suggested. Most research papers so far have been experimental. Five out of six of these papers (84.4%) have addressed behavioral change rather than an attitude change. Tailoring, tunneling, reduction and social comparison have been the most studied methods for persuasion. Quite, surprisingly ethical considerations have remained largely unaddressed in these papers. In general, many of the research papers seem to describe the investigated persuasive systems in a relatively vague manner leaving room for some improvement.",
"title": ""
},
{
"docid": "e721ac8c351fd5450334e8e2c328a1fd",
"text": "Speech recognition is the process of converting an acoustic waveform into the text similar to the information being conveyed by the speaker. In this paper implementation of isolated words and connected words Automatic Speech Recognition system (ASR) for the words of Hindi language will be discussed. The HTK (hidden markov model toolkit) based on Hidden Markov Model (HMM), a statistical approach, is used to develop the system. Initially the system is trained for 100 distinct Hindi words .This paper also describes the working of HTK tool, which is used in various phases of ASR system, by presenting a detailed architecture of an ASR system developed using various HTK library modules and tools. The recognition results will show that the overall system accuracy for isolated words is 95% and for connected words is 90%.",
"title": ""
},
{
"docid": "b3fce50260d7f77e8ca294db9c6666f6",
"text": "Nanotechnology is enabling the development of devices in a scale ranging from one to a few hundred nanometers. Coordination and information sharing among these nano-devices will lead towards the development of future nanonetworks, boosting the range of applications of nanotechnology in the biomédical, environmental and military fields. Despite the major progress in nano-device design and fabrication, it is still not clear how these atomically precise machines will communicate. Recently, the advancements in graphene-based electronics have opened the door to electromagnetic communications in the nano-scale. In this paper, a new quantum mechanical framework is used to analyze the properties of Carbon Nanotubes (CNTs) as nano-dipole antennas. For this, first the transmission line properties of CNTs are obtained using the tight-binding model as functions of the CNT length, diameter, and edge geometry. Then, relevant antenna parameters such as the fundamental resonant frequency and the input impedance are calculated and compared to those of a nano-patch antenna based on a Graphene Nanoribbon (GNR) with similar dimensions. The results show that for a maximum antenna size in the order of several hundred nanometers (the expected maximum size for a nano-device), both a nano-dipole and a nano-patch antenna will be able to radiate electromagnetic waves in the terahertz band (0.1–10.0 THz).",
"title": ""
},
{
"docid": "d486fca984c9cf930a4d1b4367949016",
"text": "In this paper, we present a generative model to generate a natural language sentence describing a table region, e.g., a row. The model maps a row from a table to a continuous vector and then generates a natural language sentence by leveraging the semantics of a table. To deal with rare words appearing in a table, we develop a flexible copying mechanism that selectively replicates contents from the table in the output sequence. Extensive experiments demonstrate the accuracy of the model and the power of the copying mechanism. On two synthetic datasets, WIKIBIO and SIMPLEQUESTIONS, our model improves the current state-of-the-art BLEU-4 score from 34.70 to 40.26 and from 33.32 to 39.12, respectively. Furthermore, we introduce an open-domain dataset WIKITABLETEXT including 13,318 explanatory sentences for 4,962 tables. Our model achieves a BLEU-4 score of 38.23, which outperforms template based and language model based approaches.",
"title": ""
},
{
"docid": "d21308f9ffa990746c6be137964d2e12",
"text": "'Assessing the effectiveness of a large database of emotion-eliciting films: A new tool for emotion researchers', This article may be used for research, teaching and private study purposes. Any substantial or systematic reproduction, redistribution , reselling , loan or sub-licensing, systematic supply or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
},
{
"docid": "b8797251b01821e69fec564f0b2b91fb",
"text": "Spectral clustering enjoys its success in both data clustering and semisupervised learning. But, most spectral clustering algorithms cannot handle multi-class clustering problems directly. Additional strategies are needed to extend spectral clustering algorithms to multi-class clustering problems. Furthermore, most spectral clustering algorithms employ hard cluster membership, which is likely to be trapped by the local optimum. In this paper, we present a new spectral clustering algorithm, named “Soft Cut”. It improves the normalized cut algorithm by introducing soft membership, and can be efficiently computed using a bound optimization algorithm. Our experiments with a variety of datasets have shown the promising performance of the proposed clustering algorithm.",
"title": ""
},
{
"docid": "34e2eafd055e097e167afe7cb244f99b",
"text": "This paper describes the functional verification effort during a specific hardware development program that included three of the largest ASICs designed at Nortel. These devices marked a transition point in methodology as verification took front and centre on the critical path of the ASIC schedule. Both the simulation and emulation strategies are presented. The simulation methodology introduced new techniques such as ASIC sub-system level behavioural modeling, large multi-chip simulations, and random pattern simulations. The emulation strategy was based on a plan that consisted of integrating parts of the real software on the emulated system. This paper describes how these technologies were deployed, analyzes the bugs that were found and highlights the bottlenecks in functional verification as systems become more complex.",
"title": ""
},
{
"docid": "d41ba1ea977a8ddfa50337d33d8ceeea",
"text": "We demonstrate a 36 times 12 times 0.9mm3 sized compact monolithic LTCC SiP transmitter (Tx) for 60GHz-band wireless communication terminal applications. Five GaAs MMICs including mixer, driver amplifier, power amplifier and two of frequency doublers have been integrated onto LTCC multilayer circuit which embeds a stripline BPF and a microstrip patch antenna. A novel CPW-to-stripline transition has been devised integrating air-cavities to minimize the associated attenuation. The fabricated transmitter achieves an output of 9dBm at a RF frequency of 60.4GHz, an IF frequency of 2.4GHz, and a LO frequency of 58GHz. The up-conversion gain is 11.2dB; while the LO signal is suppressed below 33.4dBc, and the spurious signal is also suppressed below 27.4dBc. This is the first report on the LTCC SiP transmitter integrating both a BPF and an antenna. A 60 GHz communication was demonstrated",
"title": ""
},
{
"docid": "31abfd6e4f6d9e56bc134ffd7c7b7ffc",
"text": "Online social networks like Facebook recommend new friends to users based on an explicit social network that users build by adding each other as friends. The majority of earlier work in link prediction infers new interactions between users by mainly focusing on a single network type. However, users also form several implicit social networks through their daily interactions like commenting on people’s posts or rating similarly the same products. Prior work primarily exploited both explicit and implicit social networks to tackle the group/item recommendation problem that recommends to users groups to join or items to buy. In this paper, we show that auxiliary information from the useritem network fruitfully combines with the friendship network to enhance friend recommendations. We transform the well-known Katz algorithm to utilize a multi-modal network and provide friend recommendations. We experimentally show that the proposed method is more accurate in recommending friends when compared with two single source path-based algorithms using both synthetic and real data sets.",
"title": ""
},
{
"docid": "b4cadd9179150203638ff9b045a4145d",
"text": "Interpenetrating network (IPN) hydrogel membranes of sodium alginate (SA) and poly(vinyl alcohol) (PVA) were prepared by solvent casting method for transdermal delivery of an anti-hypertensive drug, prazosin hydrochloride. The prepared membranes were thin, flexible and smooth. The X-ray diffraction studies indicated the amorphous dispersion of drug in the membranes. Differential scanning calorimetric analysis confirmed the IPN formation and suggests that the membrane stiffness increases with increased concentration of glutaraldehyde (GA) in the membranes. All the membranes were permeable to water vapors depending upon the extent of cross-linking. The in vitro drug release study was performed through excised rat abdominal skin; drug release depends on the concentrations of GA in membranes. The IPN membranes extended drug release up to 24 h, while SA and PVA membranes discharged the drug quickly. The primary skin irritation and skin histopathology study indicated that the prepared IPN membranes were less irritant and safe for skin application.",
"title": ""
},
{
"docid": "b52cadf9e20eebfd388c09c51cff2d74",
"text": "Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations and even for MNIST, one of the most common toy datasets in computer vision, no neural network model exists for which adversarial perturbations are large and make semantic sense to humans. We show that even the widely recognized and by far most successful defense by Madry et al. (1) overfits on the L∞ metric (it’s highly susceptible to L2 and L0 perturbations), (2) classifies unrecognizable images with high certainty, (3) performs not much better than simple input binarization and (4) features adversarial perturbations that make little sense to humans. These results suggest that MNIST is far from being solved in terms of adversarial robustness. We present a novel robust classification model that performs analysis by synthesis using learned class-conditional data distributions. We derive bounds on the robustness and go to great length to empirically evaluate our model using maximally effective adversarial attacks by (a) applying decisionbased, score-based, gradient-based and transfer-based attacks for several different Lp norms, (b) by designing a new attack that exploits the structure of our defended model and (c) by devising a novel decision-based attack that seeks to minimize the number of perturbed pixels (L0). The results suggest that our approach yields state-of-the-art robustness on MNIST against L0, L2 and L∞ perturbations and we demonstrate that most adversarial examples are strongly perturbed towards the perceptual boundary between the original and the adversarial class.",
"title": ""
},
{
"docid": "c44a580003362a5cbe28b8a38545a42d",
"text": "Emphatic algorithms are temporal-difference learning algorithms that change their effective state distribution by selectively emphasizing and de-emphasizing their updates on different time steps. Recent works by Sutton, Mahmood and White (2015), and Yu (2015) show that by varying the emphasis in a particular way, these algorithms become stable and convergent under off-policy training with linear function approximation. This paper serves as a unified summary of the available results from both works. In addition, we demonstrate the empirical benefits from the flexibility of emphatic algorithms, including state-dependent discounting, state-dependent bootstrapping, and the user-specified allocation of function approximation resources.",
"title": ""
},
{
"docid": "4318041c3cf82ce72da5983f20c6d6c4",
"text": "In line with cloud computing emergence as the dominant enterprise computing paradigm, our conceptualization of the cloud computing reference architecture and service construction has also evolved. For example, to address the need for cost reduction and rapid provisioning, virtualization has moved beyond hardware to containers. More recently, serverless computing or Function-as-a-Service has been presented as a means to introduce further cost-efficiencies, reduce configuration and management overheads, and rapidly increase an application's ability to speed up, scale up and scale down in the cloud. The potential of this new computation model is reflected in the introduction of serverless computing platforms by the main hyperscale cloud service providers. This paper provides an overview and multi-level feature analysis of seven enterprise serverless computing platforms. It reviews extant research on these platforms and identifies the emergence of AWS Lambda as a de facto base platform for research on enterprise serverless cloud computing. The paper concludes with a summary of avenues for further research.",
"title": ""
},
{
"docid": "58640b446a3c03ab8296302498e859a5",
"text": "With Islands of Music we present a system which facilitates exploration of music libraries without requiring manual genre classification. Given pieces of music in raw audio format we estimate their perceived sound similarities based on psychoacoustic models. Subsequently, the pieces are organized on a 2-dimensional map so that similar pieces are located close to each other. A visualization using a metaphor of geographic maps provides an intuitive interface where islands resemble genres or styles of music. We demonstrate the approach using a collection of 359 pieces of music.",
"title": ""
},
{
"docid": "8954cba9cba61a4630c7138fc8217d0e",
"text": "Comparison of image processing techniques is critically important in deciding which algorithm, method, or metric to use for enhanced image assessment. Image fusion is a popular choice for various image enhancement applications such as overlay of two image products, refinement of image resolutions for alignment, and image combination for feature extraction and target recognition. Since image fusion is used in many geospatial and night vision applications, it is important to understand these techniques and provide a comparative study of the methods. In this paper, we conduct a comparative study on 12 selected image fusion metrics over six multiresolution image fusion algorithms for two different fusion schemes and input images with distortion. The analysis can be applied to different image combination algorithms, image processing methods, and over a different choice of metrics that are of use to an image processing expert. The paper relates the results to an image quality measurement based on power spectrum and correlation analysis and serves as a summary of many contemporary techniques for objective assessment of image fusion algorithms.",
"title": ""
},
{
"docid": "c2540d54caffd6b3105d456666cecc9a",
"text": "A genome-wide association study identified a strong correlation between body mass index and the presence of a 21-kb copy number variation upstream of the human GPRC5B gene; however, the functional role of GPRC5B in obesity remains unknown. We report that GPRC5B-deficient mice were protected from diet-induced obesity and insulin resistance because of reduced inflammation in their white adipose tissue. GPRC5B is a lipid raft-associated transmembrane protein that contains multiple phosphorylated residues in its carboxyl terminus. Phosphorylation of GPRC5B by the tyrosine kinase Fyn and the subsequent direct interaction with Fyn through the Fyn Src homology 2 (SH2) domain were critical for the initiation and progression of inflammatory signaling in adipose tissue. We demonstrated that a GPRC5B mutant lacking the direct binding site for Fyn failed to activate a positive feedback loop of nuclear factor κB-inhibitor of κB kinase ε signaling. These findings suggest that GPRC5B may be a major node in adipose signaling systems linking diet-induced obesity to type 2 diabetes and may open new avenues for therapeutic approaches to diabetic progression.",
"title": ""
},
{
"docid": "f691d659e73042f40ebe19b5e29ddf14",
"text": "With the rapid development of knowledge base, question answering based on knowledge base has been a hot research issue. In this paper, we focus on answering singlerelation factoid questions based on knowledge base. We build a question answering system and study the effect of context information on fact selection, such as entity’s notable type, outdegree. Experimental results show that context information can improve the result of simple question answering.",
"title": ""
},
{
"docid": "3cb2bfb076e9c21526ec82c43188def5",
"text": "Voice is projected to be the next input interface for portable devices. The increased use of audio interfaces can be mainly attributed to the success of speech and speaker recognition technologies. With these advances comes the risk of criminal threats where attackers are reportedly trying to access sensitive information using diverse voice spoofing techniques. Among them, replay attacks pose a real challenge to voice biometrics. This paper addresses the problem by proposing a deep learning architecture in tandem with low-level cepstral features. We investigate the use of a deep neural network (DNN) to discriminate between the different channel conditions available in the ASVSpoof 2017 dataset, namely recording, playback and session conditions. The high-level feature vectors derived from this network are used to discriminate between genuine and spoofed audio. Two kinds of low-level features are utilized: state-ofthe-art constant-Q cepstral coefficients (CQCC), and our proposed high-frequency cepstral coefficients (HFCC) that derive from the high-frequency spectrum of the audio. The fusion of both features proved to be effective in generalizing well across diverse replay attacks seen in the evaluation of the ASVSpoof 2017 challenge, with an equal error rate of 11.5%, that is 53% better than the baseline Gaussian Mixture Model (GMM) applied on CQCC.",
"title": ""
},
{
"docid": "45e1a424ad0807ce49cd4e755bdd9351",
"text": "Data preprocessing is widely recognized as an important stage in anomaly detection. This paper reviews the data preprocessing techniques used by anomaly-based network intrusion detection systems (NIDS), concentrating on which aspects of the network traffic are analyzed, and what feature construction and selection methods have been used. Motivation for the paper comes from the large impact data preprocessing has on the accuracy and capability of anomaly-based NIDS. The review finds that many NIDS limit their view of network traffic to the TCP/IP packet headers. Time-based statistics can be derived from these headers to detect network scans, network worm behavior, and denial of service attacks. A number of other NIDS perform deeper inspection of request packets to detect attacks against network services and network applications. More recent approaches analyze full service responses to detect attacks targeting clients. The review covers a wide range of NIDS, highlighting which classes of attack are detectable by each of these approaches. Data preprocessing is found to predominantly rely on expert domain knowledge for identifying the most relevant parts of network traffic and for constructing the initial candidate set of traffic features. On the other hand, automated methods have been widely used for feature extraction to reduce data dimensionality, and feature selection to find the most relevant subset of features from this candidate set. The review shows a trend towards deeper packet inspection to construct more relevant features through targeted content parsing. These context sensitive features are required to detect current attacks.",
"title": ""
}
] |
scidocsrr
|
370a5de22021dd42b9c7c9c4286eab1c
|
Evolving Kolb : Experiential Education in the Age of Neuroscience
|
[
{
"docid": "0122b2fa61a4b29bd9a89a7e2c738e94",
"text": "CONCEPTUALIZATION This content downloaded from 206.87.46.46 on Wed, 26 Mar 2014 12:01:22 PM All use subject to JSTOR Terms and Conditions 2005 Kolb and Kolb 199 Is learning style a fixed trait or dynamic state? ELT clearly defines learning style as a dynamic state arising from an individual's preferential resolution of the dual dialectics of experiencing/conceptualizing and acting/reflecting. The stability and endurance of these states in individuals comes not solely from fixed genetic qualities or characteristics of human beings: nor, for that matter, does it come from the stable fixed demands of environmental circumstances. Rather, stable and enduring patterns of human individuality arise from consistent patterns of transaction between the individual and his or her environment . . . The way we process the possibilities of each new emerging event determines the range of choices and decisions we see. The choices and decisions we make to some extent determine the events we live through, and these events influence our future choices. Thus, people create themselves through the choice of actual occasions they live through (Kolb 1984: 63-64). Nonetheless, in practice and research there is a marked tendency to treat learning style as a fixed personality trait (e.g., Garner, 2000). Individuals often refer to themselves and others as though learning style was a fixed characteristic: \"I have trouble making decisions because I am a diverger.\" \"He likes to work alone because he is an assimilator.\" To emphasize the dynamic nature of learning style, the latest version of the LSI has changed the style names from diverger to diverging, and so on.",
"title": ""
}
] |
[
{
"docid": "b120095067684a67fe3327d18860e760",
"text": "We present a flexible method for fusing information from optical and range sensors based on an accelerated high-dimensional filtering approach. Our system takes as input a sequence of monocular camera images as well as a stream of sparse range measurements as obtained from a laser or other sensor system. In contrast with existing approaches, we do not assume that the depth and color data streams have the same data rates or that the observed scene is fully static. Our method produces a dense, high-resolution depth map of the scene, automatically generating confidence values for every interpolated depth point. We describe how to integrate priors on object motion and appearance and how to achieve an efficient implementation using parallel processing hardware such as GPUs.",
"title": ""
},
{
"docid": "442cb6a0eac02c002da9f61331aa48d0",
"text": "Variational methods have been recently considered for scaling the training process of Gaussian process classifiers to large datasets. As an alternative, we describe here how to train these classifiers efficiently using expectation propagation (EP). The proposed EP method allows to train Gaussian process classifiers on very large datasets, with millions of instances, that were out of the reach of previous implementations of EP. More precisely, it can be used for (i) training in a distributed fashion where the data instances are sent to different nodes in which the required computations are carried out, and for (ii) maximizing an estimate of the marginal likelihood using a stochastic approximation of the gradient. Several experiments involving large datasets show that the method described is competitive with the variational approach.",
"title": ""
},
{
"docid": "3603e3d676a3ccae0c2ad18dc914b6a1",
"text": "In large storage systems, it is crucial to protect data from loss due to failures. Erasure codes lay the foundation of this protection, enabling systems to reconstruct lost data when components fail. Erasure codes can however impose significant performance overhead in two core operations: encoding, where coding information is calculated from newly written data, and decoding, where data is reconstructed after failures. This paper focuses on improving the performance of encoding, the more frequent operation. It does so by scheduling the operations of XOR-based erasure codes to optimize their use of cache memory. We call the technique XORscheduling and demonstrate how it applies to a wide variety of existing erasure codes. We conduct a performance evaluation of scheduling these codes on a variety of processors and show that XOR-scheduling significantly improves upon the traditional approach. Hence, we believe that XORscheduling has great potential to have wide impact in practical storage systems.",
"title": ""
},
{
"docid": "b35d58ad8987bb4fd9d7df2c09a4daab",
"text": "Visual search is necessary for rapid scene analysis because information processing in the visual system is limited to one or a few regions at one time [3]. To select potential regions or objects of interest rapidly with a task-independent manner, the so-called \"visual saliency\", is important for reducing the complexity of scenes. From the perspective of engineering, modeling visual saliency usually facilitates subsequent higher visual processing, such as image re-targeting [10], image compression [12], object recognition [16], etc. Visual attention model is deeply studied in recent decades. Most of existing models are built on the biologically-inspired architecture based on the famous Feature Integration Theory (FIT) [19, 20]. For instance, Itti et al. proposed a famous saliency model which computes the saliency map with local contrast in multiple feature dimensions, such as color, orientation, etc. [15] [23]. However, FIT-based methods perhaps risk being immersed in local saliency (e.g., object boundaries), because they employ local contrast of features in limited regions and ignore the global information. Visual attention models usually provide location information of the potential objects, but miss some object-related information (e.g., object surfaces) that is necessary for further object detection and recognition. Distinguished from FIT, Guided Search Theory (GST) [3] [24] provides a mechanism to search the regions of interest (ROI) or objects with the guidance from scene layout or top-down sources. The recent version of GST claims that the visual system searches objects of interest along two parallel pathways, i.e., the non-selective pathway and the selective pathway [3]. This new visual search strategy allows observers to extract spatial layout (or gist) information rapidly from entire scene via non-selective pathway. Then, this context information of scene acts as top-down modulation to guide the salient object search along the selective pathway. This two-pathway-based search strategy provides a parallel processing of global and local information for rapid visual search. Referring to the GST, we assume that the non-selective pathway provides \"where\" information and prior of multiple objects for visual search, a counterpart to visual selective saliency, and we use certain simple and fast fixation prediction method to provide an initial estimate of where the objects present. At the same time, the bottom-up visual selective pathway extracts fine image features in multiple cue channels, which could be regarded as a counterpart to the \"what\" pathway in visual system for object recognition. When these bottom-up features meet \"where\" information of objects, the visual system …",
"title": ""
},
{
"docid": "b79bb5a3c32b7e37aa82d85ce0f34dd6",
"text": "Traditional word embedding approaches learn semantic information at word level while ignoring the meaningful internal structures of words like morphemes. Furthermore, existing morphology-based models directly incorporate morphemes to train word embeddings, but still neglect the latent meanings of morphemes. In this paper, we explore to employ the latent meanings of morphological compositions of words to train and enhance word embeddings. Based on this purpose, we propose three Latent Meaning Models (LMMs), named LMM-A, LMM-S and LMM-M respectively, which adopt different strategies to incorporate the latent meanings of morphemes during the training process. Experiments on word similarity, syntactic analogy and text classification are conducted to validate the feasibility of our models. The results demonstrate that our models outperform the baselines on five word similarity datasets. On Wordsim-353 and RG-65 datasets, our models nearly achieve 5% and 7% gains over the classic CBOW model, respectively. For the syntactic analogy and text classification tasks, our models also surpass all the baselines including a morphology-based model.",
"title": ""
},
{
"docid": "7f652be9bde8f47d166e7bbeeb3a535b",
"text": "One of the problems often associated with online anonymity is that it hinders social accountability, as substantiated by the high levels of cybercrime. Although identity cues are scarce in cyberspace, individuals often leave behind textual identity traces. In this study we proposed the use of stylometric analysis techniques to help identify individuals based on writing style. We incorporated a rich set of stylistic features, including lexical, syntactic, structural, content-specific, and idiosyncratic attributes. We also developed the Writeprints technique for identification and similarity detection of anonymous identities. Writeprints is a Karhunen-Loeve transforms-based technique that uses a sliding window and pattern disruption algorithm with individual author-level feature sets. The Writeprints technique and extended feature set were evaluated on a testbed encompassing four online datasets spanning different domains: email, instant messaging, feedback comments, and program code. Writeprints outperformed benchmark techniques, including SVM, Ensemble SVM, PCA, and standard Karhunen-Loeve transforms, on the identification and similarity detection tasks with accuracy as high as 94% when differentiating between 100 authors. The extended feature set also significantly outperformed a baseline set of features commonly used in previous research. Furthermore, individual-author-level feature sets generally outperformed use of a single group of attributes.",
"title": ""
},
{
"docid": "b79575908a84a015c8a83d35c63e4f06",
"text": "This study examines the relation between stress and illness among bus drivers in a large American city. Several factors are identified that predict stress-related ill health for this occupational group. Canonical correlation techniques are used to combine daily work stress and recent stressful life events into a single life/work stress variate. Likewise, somatic symptoms and serious illness reports are combined into a single canonical illness variate. This procedure simplifies the analysis of multiple stress and illness indicators and also permits the statistical control of potential contaminating influences on stress and illness measures (eg, neuroticism). Discriminant function analysis identified four variables that differentiate bus drivers who get ill under high stress (N = 137) from those who remain healthy under stress (N = 137). Highly stressed and ill bus drivers use more avoidance coping behaviors, report more illness in their family medical histories, are low in the disposition of \"personality hardiness,\" and are also low in social assets. The derived stepwise discriminant function correctly classified 71% of cases in an independent \"hold-out\" sample. These results suggest fruitful areas of attention for health promotion and stress management programs in the public transit industry.",
"title": ""
},
{
"docid": "f1d581b521e0cb0aac67b83cee620848",
"text": "Now a days, a lot of applications are Internet based and in some cases it is desired that the communication be made secret, digital communication has become an essential part of infrastructure. Information hiding has an important research field to resolve the problems in network security, quality of service control & secure communication through public & private channels. Steganography is the science that involves communicating secret data in an appropriate multimedia carrier, e.g., image, audio, and video file. Steganography has various useful applications. Steganography’s ultimate objectives, which are undetectability, robustness (resistance to various image processing methods and compression) and capacity of the hidden data, are the main factors that separate it from related techniques such as watermarking and cryptography. This paper provides a state-of-the-art review of existing methods of steganography in digital images.",
"title": ""
},
{
"docid": "913709f4fe05ba2783c3176ed00015fe",
"text": "A generalization of the PWM (pulse width modulation) subharmonic method for controlling single-phase or three-phase multilevel voltage source inverters (VSIs) is considered. Three multilevel PWM techniques for VSI inverters are presented. An analytical expression of the spectral components of the output waveforms covering all the operating conditions is derived. The analysis is based on an extension of Bennet's method. The improvements in harmonic spectrum are pointed out, and several examples are presented which prove the validity of the multilevel modulation. Improvements in the harmonic contents were achieved due to the increased number of levels.<<ETX>>",
"title": ""
},
{
"docid": "07bb2749c6a53b2390016c0f992131e8",
"text": "The complete resection of pituitary adenomas (PAs) is unlikely when there is an extensive local dural invasion and given that the molecular mechanisms remain primarily unknown. DNA microarray analysis was performed to identify differentially expressed genes between nonfunctioning invasive and noninvasive PAs. Gene clustering revealed a robust eightfold increase in matrix metalloproteinase (MMP)-9 expression in surgically resected human invasive PAs and in the (nonfunctioning) HP75 human pituitary tumor-derived cell line treated with phorbol-12-myristate-13-acetate; these results were confirmed by real-time polymerase chain reaction, gelatin zymography, reverse transcriptase-polymerase chain reaction, Western blot, immunohistochemistry, and Northern blot analyses. The activation of protein kinase C (PKC) increased both MMP-9 activity and expression, which were blocked by some PKC inhibitors (Gö6976, bisindolylmaleimide, and Rottlerin), PKC-alpha, and PKC-delta small interfering (si)RNAs but not by hispidin (PKC-beta inhibitor). In a transmembrane invasion assay, phorbol-12-myristate-13-acetate (100 nmol/L) increased the number of invaded HP75 cells, a process that was attenuated by PKC inhibitors, MMP-9 antibody, PKC-alpha siRNA, or PKC-delta siRNA. These results demonstrate that MMP-9 and PKC-alpha or PKC-delta may provide putative therapeutic targets for the control of PA dural invasion.",
"title": ""
},
{
"docid": "dfa611e19a3827c66ea863041a3ef1e2",
"text": "We study the problem of malleability of Bitcoin transactions. Our first two contributions can be summarized as follows: (i) we perform practical experiments on Bitcoin that show that it is very easy to maul Bitcoin transactions with high probability, and (ii) we analyze the behavior of the popular Bitcoin wallets in the situation when their transactions are mauled; we conclude that most of them are to some extend not able to handle this situation correctly. The contributions in points (i) and (ii) are experimental. We also address a more theoretical problem of protecting the Bitcoin distributed contracts against the “malleability” attacks. It is well-known that malleability can pose serious problems in some of those contracts. It concerns mostly the protocols which use a “refund” transaction to withdraw a financial deposit in case the other party interrupts the protocol. Our third contribution is as follows: (iii) we show a general method for dealing with the transaction malleability in Bitcoin contracts. In short: this is achieved by creating a malleability-resilient “refund” transaction which does not require any modification of the Bitcoin protocol.",
"title": ""
},
{
"docid": "609110c4bf31885d99618994306ef2cc",
"text": "This study examined the ability of a collagen solution to aid revascularization of necrotic-infected root canals in immature dog teeth. Sixty immature teeth from 6 dogs were infected, disinfected, and randomized into experimental groups: 1: no further treatment; 2: blood in canal; 3: collagen solution in canal, 4: collagen solution + blood, and 5: negative controls (left for natural development). Uncorrected chi-square analysis of radiographic results showed no statistical differences (p >or= 0.05) between experimental groups regarding healing of radiolucencies but a borderline statistical difference (p = 0.058) for group 1 versus group 4 for radicular thickening. Group 2 showed significantly more apical closure than group 1 (p = 0.03) and a borderline statistical difference (p = 0.051) for group 3 versus group 1. Uncorrected chi-square analysis revealed that there were no statistical differences between experimental groups for histological results. However, some roots in each of groups 1 to 4 (previously infected) showed positive histologic outcomes (thickened walls in 43.9%, apical closure in 54.9%, and new luminal tissue in 29.3%). Revascularization of disinfected immature dog root canal systems is possible.",
"title": ""
},
{
"docid": "6e67329e4f678ae9dc04395ae0a5b832",
"text": "This review covers recent developments in the social influence literature, focusing primarily on compliance and conformity research published between 1997 and 2002. The principles and processes underlying a target's susceptibility to outside influences are considered in light of three goals fundamental to rewarding human functioning. Specifically, targets are motivated to form accurate perceptions of reality and react accordingly, to develop and preserve meaningful social relationships, and to maintain a favorable self-concept. Consistent with the current movement in compliance and conformity research, this review emphasizes the ways in which these goals interact with external forces to engender social influence processes that are subtle, indirect, and outside of awareness.",
"title": ""
},
{
"docid": "aa625f9e46914cb288fec3fd00fdcfda",
"text": "Battery modelling is a significant component of advanced Battery Management Systems (BMSs). The full electrochemical model of a battery can represent high precision battery behavior during its operation. However, the high computational requirement to solve the coupled nonlinear partial differential equations (PDEs) that define these models limits their applicability in an online BMS, especially for a battery pack containing hundreds of cells. Therefore, a reduced SPM-Three parameter model is proposed in this paper to efficiently model a lithium ion cell with high accuracy in a specific range of cell operation. The reduced model is implemented in a Simulink block for developing an advanced battery modelling tool that can be applied to a wide variety of battery applications.",
"title": ""
},
{
"docid": "1608c56c79af07858527473b2b0262de",
"text": "The field weakening control strategy of interior permanent magnet synchronous motor for electric vehicles was studied in the paper. A field weakening control method based on gradient descent of voltage limit according to the ellipse and modified current setting were proposed. The field weakening region was determined by the angle between the constant torque direction and the voltage limited ellipse decreasing direction. The direction of voltage limited ellipse decreasing was calculated by using the gradient descent method. The current reference was modified by the field weakening direction and the magnitude of the voltage error according to the field weakening region. A simulink model was also founded by Matlab/Simulink, and the validity of the proposed strategy was proved by the simulation results.",
"title": ""
},
{
"docid": "dd45e122e64667faf1ea15904ecb48f5",
"text": "One major obstacle to enterprise adoption of cloud technologies has been the lack of visibility into migration effort and cost. In this paper, we present a methodology, called Cloud Migration Point (CMP), for estimating the size of cloud migration projects, by recasting a well-known software size estimation model called Function Point (FP) into the context of cloud migration. We empirically evaluate our CMP model by performing a cross-validation on six different small-scale cloud migration projects and show that our size estimation model can be used as a reliable predictor for effort estimation. Furthermore, we prove that our CMP model satisfies the fundamental properties of a software size measure.",
"title": ""
},
{
"docid": "05307b60bd185391919ea7c1bf1ce0ec",
"text": "Trace-level reuse is based on the observation that some traces (dynamic sequences of instructions) are frequently repeated during the execution of a program, and in many cases, the instructions that make up such traces have the same source operand values. The execution of such traces will obviously produce the same outcome and thus, their execution can be skipped if the processor records the outcome of previous executions. This paper presents an analysis of the performance potential of trace-level reuse and discusses a preliminary realistic implementation. Like instruction-level reuse, trace-level reuse can improve performance by decreasing resource contention and the latency of some instructions. However, we show that tracelevel reuse is more effective than instruction-level reuse because the former can avoid fetching the instructions of reused traces. This has two important benefits: it reduces the fetch bandwidth requirements, and it increases the effective instruction window size since these instructions do not occupy window entries. Moreover, trace-level reuse can compute all at once the result of a chain of dependent instructions, which may allow the processor to avoid the serialization caused by data dependences and thus, to potentially exceed the dataflow limit.",
"title": ""
},
{
"docid": "d02af961d8780a06ae0162647603f8bb",
"text": "We contribute an empirically derived noise model for the Kinect sensor. We systematically measure both lateral and axial noise distributions, as a function of both distance and angle of the Kinect to an observed surface. The derived noise model can be used to filter Kinect depth maps for a variety of applications. Our second contribution applies our derived noise model to the KinectFusion system to extend filtering, volumetric fusion, and pose estimation within the pipeline. Qualitative results show our method allows reconstruction of finer details and the ability to reconstruct smaller objects and thinner surfaces. Quantitative results also show our method improves pose estimation accuracy.",
"title": ""
},
{
"docid": "120befb9cfd02d522ef807269ffc4c66",
"text": "Reading text in natural images has focused again the attention of many researchers during the last few years due to the increasingly availability of cheap image-capturing devices in low-cost products like mobile phones. Therefore, as text can be found on any environment, the applicability of text-reading systems is really extensive. For this purpose, we present in this paper a robust method to read text in natural images. It is composed of two main separated stages. Firstly, text is located in the image using a set of simple and fast-tocompute features highly discriminative between character and non-character objects. They are based on geometric and gradient properties. The second part of the system carries out the recognition of the previously detected text. It uses gradient features to recognize single characters and Dynamic Programming (DP) to correct misspelled words. Experimental results obtained with different challenging datasets show that the proposed system exceeds state-of-the-art performance, both in terms of localization and recognition.",
"title": ""
}
] |
scidocsrr
|
f21d3b4c42be068650806ba52098e91d
|
Multitask Parsing Across Semantic Representations
|
[
{
"docid": "208b4cb4dc4cee74b9357a5ebb2f739c",
"text": "We report improved AMR parsing results by adding a new action to a transitionbased AMR parser to infer abstract concepts and by incorporating richer features produced by auxiliary analyzers such as a semantic role labeler and a coreference resolver. We report final AMR parsing results that show an improvement of 7% absolute in F1 score over the best previously reported result. Our parser is available at: https://github.com/ Juicechuan/AMRParsing",
"title": ""
},
{
"docid": "52fc069497d79f97e3470f6a9f322151",
"text": "We show how eye-tracking corpora can be used to improve sentence compression models, presenting a novel multi-task learning algorithm based on multi-layer LSTMs. We obtain performance competitive with or better than state-of-the-art approaches.",
"title": ""
},
{
"docid": "dabfd831ec8eaf37f662db3c75e68a5b",
"text": "Multitask learning has been applied successfully to a range of tasks, mostly morphosyntactic. However, little is known on when MTL works and whether there are data characteristics that help to determine its success. In this paper we evaluate a range of semantic sequence labeling tasks in a MTL setup. We examine different auxiliary tasks, amongst which a novel setup, and correlate their impact to datadependent conditions. Our results show that MTL is not always effective, significant improvements are obtained only for 1 out of 5 tasks. When successful, auxiliary tasks with compact and more uniform label distributions are preferable.",
"title": ""
}
] |
[
{
"docid": "45b5072faafa8a26cfe320bd5faedbcd",
"text": "METIS-II was an EU-FET MT project running from October 2004 to September 2007, which aimed at translating free text input without resorting to parallel corpora. The idea was to use “basic” linguistic tools and representations and to link them with patterns and statistics from the monolingual target-language corpus. The METIS-II project has four partners, translating from their “home” languages Greek, Dutch, German, and Spanish into English. The paper outlines the basic ideas of the project, their implementation, the resources used, and the results obtained. It also gives examples of how METIS-II has continued beyond its lifetime and the original scope of the project. On the basis of the results and experiences obtained, we believe that the approach is promising and offers the potential for development in various directions.",
"title": ""
},
{
"docid": "5143548099c4d4dfd484d732ef210f62",
"text": "We propose a new method to quickly and accurately predict human pose---the 3D positions of body joints---from a single depth image, without depending on information from preceding frames. Our approach is strongly rooted in current object recognition strategies. By designing an intermediate representation in terms of body parts, the difficult pose estimation problem is transformed into a simpler per-pixel classification problem, for which efficient machine learning techniques exist. By using computer graphics to synthesize a very large dataset of training image pairs, one can train a classifier that estimates body part labels from test images invariant to pose, body shape, clothing, and other irrelevances. Finally, we generate confidence-scored 3D proposals of several body joints by reprojecting the classification result and finding local modes.\n The system runs in under 5ms on the Xbox 360. Our evaluation shows high accuracy on both synthetic and real test sets, and investigates the effect of several training parameters. We achieve state-of-the-art accuracy in our comparison with related work and demonstrate improved generalization over exact whole-skeleton nearest neighbor matching.",
"title": ""
},
{
"docid": "259647f0899bebc4ad67fb30a8c6f69b",
"text": "Internet of Things (IoT) communication is vital for the developing of smart communities. The rapid growth of IoT depends on reliable wireless networks. The evolving 5G cellular system addresses this challenge by adopting cloud computing technology in Radio Access Network (RAN); namely Cloud RAN or CRAN. CRAN enables better scalability, flexibility, and performance that allows 5G to provide connectivity for the vast volume of IoT devices envisioned for smart cities. This work investigates the load balance (LB) problem in CRAN, with the goal of reducing latencies experienced by IoT communications. Eight practical LB algorithms are studied and evaluated in CRAN environment, based on real cellular network traffic characteristics provided by Nokia Research. Experiment results on queue-length analysis show that the simple, light-weight queue-based LB is almost as effectively as the much more complex waiting-time-based LB. We believe that this study is significant in enabling 5G networks for providing IoT communication backbone in the emerging smart communities; it also has wide applications in other distributed systems.",
"title": ""
},
{
"docid": "e87e844dac470472d6fafe73bc1f1ea7",
"text": "Liquidity production is a central function of banks. High leverage is optimal for banks in a model that has just enough frictions for banks to have a meaningful role in liquid-claim production. The model has a market premium for (socially valuable) safe/liquid debt, but no taxes or other traditional motives to lever up. Because only safe debt commands a liquidity premium, banks with risky assets use risk management to maximize their capacity to include such debt in the capital structure. The model can explain why banks have higher leverage than most operating firms, why risk management is central to banks' operating policies, why bank leverage increased over the last 150 years or so, and why leverage limits for regulated banks impede their ability to compete with unregulated shadow banks. & 2014 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "1d074c67dec38a9459450ded74c54288",
"text": "The focus of this review is the evolving field of antithrombotic drug therapy for stroke prevention in patients with atrial fibrillation (AF). The current standard of therapy includes warfarin, acenocoumarol and phenprocoumon which have proven efficacy by reducing stroke by 68% against placebo. However, a narrow therapeutic index, wide variation in metabolism, and numerous food and drug interactions have limited their clinical application to only 50% of the indicated population. Newer agents such as direct thrombin inhibitors, factor Xa inhibitors, factor IX inhibitors, tissue factor inhibitors and a novel vitamin K antagonist are being developed to overcome the limitations of current agents. The direct thrombin inhibitor dabigatran is farthest along in development. Further clinical trial testing, and eventual incorporation into clinical practice will depend on safety, efficacy and cost. Development of a novel vitamin K antagonist with better INR control will challenge the newer mechanistic agents in their quest to replace the existing vitamin K antagonists. Till then, the large unfilled gap to replace conventional agents remains open. This review will assess all these agents, and compare their mechanism of action, stage of development and pharmacologic profile.",
"title": ""
},
{
"docid": "9888a7723089d2f1218e6e1a186a5e91",
"text": "This classic text offers you the key to understanding short circuits, open conductors and other problems relating to electric power systems that are subject to unbalanced conditions. Using the method of symmetrical components, acknowledged expert Paul M. Anderson provides comprehensive guidance for both finding solutions for faulted power systems and maintaining protective system applications. You'll learn to solve advanced problems, while gaining a thorough background in elementary configurations. Features you'll put to immediate use: Numerous examples and problems Clear, concise notation Analytical simplifications Matrix methods applicable to digital computer technology Extensive appendices",
"title": ""
},
{
"docid": "df8ceb0f804a8dca7375286541866f5f",
"text": "We propose a new model for unsupervised document embedding. Leading existing approaches either require complex inference or use recurrent neural networks (RNN) that are difficult to parallelize. We take a different route and develop a convolutional neural network (CNN) embedding model. Our CNN architecture is fully parallelizable resulting in over 10x speedup in inference time over RNN models. Parallelizable architecture enables to train deeper models where each successive layer has increasingly larger receptive field and models longer range semantic structure within the document. We additionally propose a fully unsupervised learning algorithm to train this model based on stochastic forward prediction. Empirical results on two public benchmarks show that our approach produces comparable to state-of-the-art accuracy at a fraction of computational cost.",
"title": ""
},
{
"docid": "e1b536458ddc8603b281bac69e6bd2e8",
"text": "We present highly integrated sensor-actuator-controller units (SAC units), addressing the increasing need for easy to use components in the design of modern high-performance robotic systems. Following strict design principles and an electro-mechanical co-design from the beginning on, our development resulted in highly integrated SAC units. Each SAC unit includes a motor, a gear unit, an IMU, sensors for torque, position and temperature as well as all necessary embedded electronics for control and communication over a high-speed EtherCAT bus. Key design considerations were easy to use interfaces and a robust cabling system. Using slip rings to electrically connect the input and output side, the units allow continuous rotation even when chained along a robotic arm. The experimental validation shows the potential of the new SAC units regarding the design of humanoid robots.",
"title": ""
},
{
"docid": "e2fd61cef4ec32c79b059552e7820092",
"text": "This paper describes a general framework for learning Higher-Order Network Embeddings (HONE) from graph data based on network motifs. The HONE framework is highly expressive and flexible with many interchangeable components. The experimental results demonstrate the effectiveness of learning higher-order network representations. In all cases, HONE outperforms recent embedding methods that are unable to capture higher-order structures with a mean relative gain in AUC of 19% (and up to 75% gain) across a wide variety of networks and embedding methods.",
"title": ""
},
{
"docid": "4247314290ffa50098775e2bbc41b002",
"text": "Heterogeneous integration enables the construction of silicon (Si) photonic systems, which are fully integrated with a range of passive and active elements including lasers and detectors. Numerous advancements in recent years have shown that heterogeneous Si platforms can be extended beyond near-infrared telecommunication wavelengths to the mid-infrared (MIR) (2–20 μm) regime. These wavelengths hold potential for an extensive range of sensing applications and the necessary components for fully integrated heterogeneous MIR Si photonic technologies have now been demonstrated. However, due to the broad wavelength range and the diverse assortment of MIR technologies, the optimal platform for each specific application is unclear. Here, we overview Si photonic waveguide platforms and lasers at the MIR, including quantum cascade lasers on Si. We also discuss progress toward building an integrated multispectral source, which can be constructed by wavelength beam combining the outputs from multiple lasers with arrayed waveguide gratings and duplexing adiabatic couplers.",
"title": ""
},
{
"docid": "f0245dca8cc1d3c418c0d915c7982484",
"text": "The injection of a high-frequency signal in the stator via inverter has been shown to be a viable option to estimate the magnet temperature in permanent-magnet synchronous machines (PMSMs). The variation of the magnet resistance with temperature is reflected in the stator high-frequency resistance, which can be measured from the resulting current when a high-frequency voltage is injected. However, this method is sensitive to d- and q-axis inductance (Ld and Lq) variations, as well as to the machine speed. In addition, it is only suitable for surface PMSMs (SPMSMs) and inadequate for interior PMSMs (IPMSMs). In this paper, the use of a pulsating high-frequency current injection in the d-axis of the machine for temperature estimation purposes is proposed. The proposed method will be shown to be insensitive to the speed, Lq, and Ld variations. Furthermore, it can be used with both SPMSMs and IPMSMs.",
"title": ""
},
{
"docid": "1df103aef2a4a5685927615cfebbd1ea",
"text": "While human subjects lift small objects using the precision grip between the tips of the fingers and thumb the ratio between the grip force and the load force (i.e. the vertical lifting force) is adapted to the friction between the object and the skin. The present report provides direct evidence that signals in tactile afferent units are utilized in this adaptation. Tactile afferent units were readily excited by small but distinct slips between the object and the skin revealed as vibrations in the object. Following such afferent slip responses the force ratio was upgraded to a higher, stable value which provided a safety margin to prevent further slips. The latency between the onset of the a slip and the appearance of the ratio change (74 ±9 ms) was about half the minimum latency for intended grip force changes triggered by cutaneous stimulation of the fingers. This indicated that the motor responses were automatically initiated. If the subjects were asked to very slowly separate their thumb and the opposing finger while the object was held in air, grip force reflexes originating from afferent slip responses appeared to counteract the voluntary command, but the maintained upgrading of the force ratio was suppressed. In experiments with weak electrical cutaneous stimulation delivered through the surfaces of the object it was established that tactile input alone could trigger the upgrading of the force ratio. Although, varying in responsiveness, each of the three types of tactile units which exhibit a pronounced dynamic sensitivity (FA I, FA II and SA I units) could reliably signal these slips. Similar but generally weaker afferent responses, sometimes followed by small force ratio changes, also occurred in the FA I and the SA I units in the absence of detectable vibrations events. In contrast to the responses associated with clear vibratory events, the weaker afferent responses were probably caused by localized frictional slips, i.e. slips limited to small fractions of the skin area in contact with the object. Indications were found that the early adjustment to a new frictional condition, which may appear soon (ca. 0.1–0.2 s) after the object is initially gripped, might depend on the vigorous responses in the FA I units during the initial phase of the lifts (see Westling and Johansson 1987). The role of the tactile input in the adaptation of the force coordination to the frictional condition is discussed.",
"title": ""
},
{
"docid": "a208e4f4e6092a731d4ec662c1cea1bc",
"text": "The CDMA channel with randomly and independently chosen spreading sequences accurately models the situation where pseudonoise sequences span many symbol periods. Furthermore, its analysis provides a comparison baseline for CDMA channels with deterministic signature waveforms spanning one symbol period. We analyze the spectral efficiency (total capacity per chip) as a function of the number of users, spreading gain, and signal-to-noise ratio, and we quantify the loss in efficiency relative to an optimally chosen set of signature sequences and relative to multiaccess with no spreading. White Gaussian background noise and equal-power synchronous users are assumed. The following receivers are analyzed: a) optimal joint processing, b) single-user matched filtering, c) decorrelation, and d) MMSE linear processing.",
"title": ""
},
{
"docid": "244be1e978813811e3f5afc1941cd4f5",
"text": "In this paper we introduce a new publicly available dataset for verification against textual sources, FEVER: Fact Extraction and VERification. It consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims are classified as SUPPORTED, REFUTED or NOTENOUGHINFO by annotators achieving 0.6841 in Fleiss κ. For the first two classes, the annotators also recorded the sentence(s) forming the necessary evidence for their judgment. To characterize the challenge of the dataset presented, we develop a pipeline approach and compare it to suitably designed oracles. The best accuracy we achieve on labeling a claim accompanied by the correct evidence is 31.87%, while if we ignore the evidence we achieve 50.91%. Thus we believe that FEVER is a challenging testbed that will help stimulate progress on claim verification against textual sources.",
"title": ""
},
{
"docid": "ba2748bc46a333faf5859e2747534b7c",
"text": "A plethora of words are used to describe the spectrum of human emotions, but how many emotions are there really, and how do they interact? Over the past few decades, several theories of emotion have been proposed, each based around the existence of a set of basic emotions, and each supported by an extensive variety of research including studies in facial expression, ethology, neurology and physiology. Here we present research based on a theory that people transmit their understanding of emotions through the language they use surrounding emotion keywords. Using a labelled corpus of over 21,000 tweets, six of the basic emotion sets proposed in existing literature were analysed using Latent Semantic Clustering (LSC), evaluating the distinctiveness of the semantic meaning attached to the emotional label. We hypothesise that the more distinct the language is used to express a certain emotion, then the more distinct the perception (including proprioception) of that emotion is, and thus more basic. This allows us to select the dimensions best representing the entire spectrum of emotion. We find that Ekman’s set, arguably the most frequently used for classifying emotions, is in fact the most semantically distinct overall. Next, taking all analysed (that is, previously proposed) emotion terms into account, we determine the optimal semantically irreducible basic emotion set using an iterative LSC algorithm. Our newly-derived set (Accepting, Ashamed, Contempt, Interested, Joyful, Pleased, Sleepy, Stressed) generates a 6.1% increase in distinctiveness over Ekman’s set (Angry, Disgusted, Joyful, Sad, Scared). We also demonstrate how using LSC data can help visualise emotions. We introduce the concept of an Emotion Profile and briefly analyse compound emotions both visually and mathematically.",
"title": ""
},
{
"docid": "a2217cd5f5e6b54ad0329a8703204ccb",
"text": "Knowledge bases are useful resources for many natural language processing tasks, however, they are far from complete. In this paper, we define a novel entity representation as a mixture of its neighborhood in the knowledge base and apply this technique on TransE—a well-known embedding model for knowledge base completion. Experimental results show that the neighborhood information significantly helps to improve the results of the TransE, leading to better performance than obtained by other state-of-the-art embedding models on three benchmark datasets for triple classification, entity prediction and relation prediction tasks.",
"title": ""
},
{
"docid": "26d0f9ea9e939cd09d1572965127e030",
"text": "The emergence of “Fake News” and misinformation via online news and social media has spurred an interest in computational tools to combat this phenomenon. In this paper we present a new “Related Fact Checks” service, which can help a reader critically evaluate an article and make a judgment on its veracity by bringing up fact checks that are relevant to the article. We describe the core technical problems that need to be solved in building a “Related Fact Checks” service, and present results from an evaluation of an implementation.",
"title": ""
},
{
"docid": "7e647cac9417bf70acd8c0b4ee0faa9b",
"text": "Global Navigation Satellite Systems (GNSS) are applicable to deliver train locations in real time. This train localization function should comply with railway functional safety standards; thus, the GNSS performance needs to be evaluated in consistent with railway EN 50126 standard [Reliability, Availability, Maintainability, and Safety (RAMS)]. This paper demonstrates the performance of the GNSS receiver for train localization. First, the GNSS performance and railway RAMS properties are compared by definitions. Second, the GNSS receiver measurements are categorized into three states (i.e., up, degraded, and faulty states). The relations between the states are illustrated in a stochastic Petri net model. Finally, the performance properties are evaluated using real data collected on the railway track in High Tatra Mountains in Slovakia. The property evaluation is based on the definitions represented by the modeled states.",
"title": ""
},
{
"docid": "939b2faa63e24c0f303b823481682c4c",
"text": "Visual motion perception is fundamental to many aspects of visual perception. Visual motion perception has long been associated with the dorsal (parietal) pathway and the involvement of the ventral 'form' (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impaired motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and complex structure-from-motion, for a wide range of motion speeds, all centrally displayed, was assessed in five patients with a circumscribed lesion to either the right or left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, for both slow and for fast speeds, and this held true independent of the integrity of areas MT/V5, V3A or parietal regions. In contrast with the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary even for the perception of non-form motion.",
"title": ""
}
] |
scidocsrr
|
af1e91c91e2bf42874f17228fffdcc63
|
A Nested Two Stage Game-Based Optimization Framework in Mobile Cloud Computing System
|
[
{
"docid": "0cbd3587fe466a13847e94e29bb11524",
"text": "The cloud heralds a new era of computing where application services are provided through the Internet. Cloud computing can enhance the computing capability of mobile systems, but is it the ultimate solution for extending such systems' battery lifetimes?",
"title": ""
}
] |
[
{
"docid": "7223f14d3ea2d10661185c8494b81438",
"text": "In 1990 the molecular basis for a hereditary disorder in humans, hyperkalemic periodic paralysis, was first genetically demonstrated to be impaired ion channel function. Since then over a dozen diseases, now termed as channelopathies, have been described. Most of the disorders affect excitable tissue such as muscle and nerve; however, kidney diseases have also been described. Basic research on structure-function relationships and physiology of excitation has benefited tremendously from the discovery of disease-causing mutations pointing to regions of special significance within the channel proteins. This course focuses mainly on the clinical and genetic features of neurological disturbances in humans caused by genetic defects in voltage-gated sodium, calcium, potassium, and chloride channels. Disorders of skeletal muscle are by far the most studied and therefore more detailed in this text than the neuronal channelopathies which have been discovered only very recently. Review literature may be found in the attached reference list [1–12]. Skeletal muscle sodium channelopathies",
"title": ""
},
{
"docid": "0e218dd5654ae9125d40bdd5c0a326d6",
"text": "Dynamic data race detection incurs heavy runtime overheads. Recently, many sampling techniques have been proposed to detect data races. However, some sampling techniques (e.g., Pacer) are based on traditional happens-before relation and incur a large basic overhead. Others utilize hardware to reduce their sampling overhead (e.g., DataCollider) and they, however, detect a race only when the race really occurs by delaying program executions. In this paper, we study the limitations of existing techniques and propose a new data race definition, named as Clock Races, for low overhead sampling purpose. The innovation of clock races is that the detection of them does not rely on concrete locks and also avoids heavy basic overhead from tracking happens-before relation. We further propose CRSampler (Clock Race Sampler) to detect clock races via hardware based sampling without directly delaying program executions, to further reduce runtime overhead. We evaluated CRSampler on Dacapo benchmarks. The results show that CRSampler incurred less than 5% overhead on average at 1% sampling rate. Whereas, Pacer and DataCollider incurred larger than 25% and 96% overhead, respectively. Besides, at the same sampling rate, CRSampler detected significantly more data races than that by Pacer and DataCollider.",
"title": ""
},
{
"docid": "8b3042021e48c86873e00d646f65b052",
"text": "We derive a numerical method for Darcy flow, hence also for Poisson’s equation in first order form, based on discrete exterior calculus (DEC). Exterior calculus is a generalization of vector calculus to smooth manifolds and DEC is its discretization on simplicial complexes such as triangle and tetrahedral meshes. We start by rewriting the governing equations of Darcy flow using the language of exterior calculus. This yields a formulation in terms of flux differential form and pressure. The numerical method is then derived by using the framework provided by DEC for discretizing differential forms and operators that act on forms. We also develop a discretization for spatially dependent Hodge star that varies with the permeability of the medium. This also allows us to address discontinuous permeability. The matrix representation for our discrete non-homogeneous Hodge star is diagonal, with positive diagonal entries. The resulting linear system of equations for flux and pressure are saddle type, with a diagonal matrix as the top left block. Our method requires the use of meshes in which each simplex contains its circumcenter. The performance of the proposed numerical method is illustrated on many standard test problems. These include patch tests in two and three dimensions, comparison with analytically known solution in two dimensions, layered medium with alternating permeability values, and a test with a change in permeability along the flow direction. A short introduction to the relevant parts of smooth and discrete exterior calculus is included in this paper. We also include a discussion of the boundary condition in terms of exterior calculus.",
"title": ""
},
{
"docid": "512cbe93bf292c5e4836d50b8aaac6b7",
"text": "This paper describes a new approach to the problem of generating the class of all geodetic graphs homeomorphic to a given geodetic one. An algorithmic procedure is elaborated to carry out a systematic finding of such a class of graphs. As a result, the enumeration of the class of geodetic graphs homeomorphic to certain Moore graphs has been performed.",
"title": ""
},
{
"docid": "266b9bfde23fdfaedb35d293f7293c93",
"text": "We introduce a neural method for transfer learning between two (source and target) classification tasks or aspects over the same domain. Rather than training on target labels, we use a few keywords pertaining to source and target aspects indicating sentence relevance instead of document class labels. Documents are encoded by learning to embed and softly select relevant sentences in an aspect-dependent manner. A shared classifier is trained on the source encoded documents and labels, and applied to target encoded documents. We ensure transfer through aspect-adversarial training so that encoded documents are, as sets, aspect-invariant. Experimental results demonstrate that our approach outperforms different baselines and model variants on two datasets, yielding an improvement of 27% on a pathology dataset and 5% on a review dataset.",
"title": ""
},
{
"docid": "8d5aa46d05de9da8cc4b65ea5c369edc",
"text": "Peer-to-peer (P2P) networks are very efficient for distributing content. We want to use this potential to allow not only distribution but collaborative editing of this content. Existing collaborative editing systems are centralised or depend on the number of sites. Such systems cannot scale when deployed on P2P networks. In this paper, we propose a new model for building a collaborative editing system. This model is fully decentralised and does not depend on the number of sites.",
"title": ""
},
{
"docid": "d11dba84257eef7979f5977a52ed315c",
"text": "We contrast two opposing approaches to building bots that autonomously learn to rap battle: a symbolic probabilistic approach based on induction of stochastic transduction grammars, versus a neural network approach based on backpropagation through unconventional transduction recursive autoassociative memory (TRAAM) models. Rap battling is modeled as a quasi-translation problem, in which an appropriate output response must be improvised given any input challenge line of lyrics. Both approaches attempt to tackle the difficult problem of compositionality: for any challenge line, constructing a good response requires making salient associations while satisfying contextual preferences at many different, overlapping levels of granularity between the challenge and response lines. The contextual preferences include fluency, partial metrical or syntactic parallelism, and rhyming at various points across the lines. During both the learning and improvisation stages, the symbolic approach attempts to explicitly enumerate as many hypotheses as possible, whereas the neural approach attempts to evolve vector representations that better implicitly generalize over soft regions or neighborhoods of hypotheses. The brute force symbolic approach is more precise, but quickly generates combinatorial numbers of hypotheses when searching for generalizations. The distributed vector based neural approach can more easily confuse hypotheses, but maintains a constant level of complexity while retaining its implicit generalization bias. We contrast both the theoretical formulation and experimental outputs of the two approaches.",
"title": ""
},
{
"docid": "181530396a384e0e8c8ed00bcd195e81",
"text": "Numerous problems encountered in real life cannot be actually formulated as a single objective problem; hence the requirement of Multi-Objective Optimization (MOO) had arisen several years ago. Due to the complexities in such type of problems powerful heuristic techniques were needed, which has been strongly satisfied by Swarm Intelligence (SI) techniques. Particle Swarm Optimization (PSO) has been established in 1995 and became a very mature and most popular domain in SI. MultiObjective PSO (MOPSO) established in 1999, has become an emerging field for solving MOOs with a large number of extensive literature, software, variants, codes and applications. This paper reviews all the applications of MOPSO in miscellaneous areas followed by the study on MOPSO variants in our next publication. An introduction to the key concepts in MOO is followed by the main body of review containing survey of existing work, organized by application area along with their multiple objectives, variants and further categorized variants.",
"title": ""
},
{
"docid": "32b4b275dc355dff2e3e168fe6355772",
"text": "The management of coupon promotions is an important issue for marketing managers since it still is the major promotion medium. However, the distribution of coupons does not go without problems. Although manufacturers and retailers are investing heavily in the attempt to convince as many customers as possible, overall coupon redemption rate is low. This study improves the strategy of retailers and manufacturers concerning their target selection since both parties often end up in a battle for customers. Two separate models are built: one model makes predictions concerning redemption behavior of coupons that are distributed by the retailer while another model does the same for coupons handed out by manufacturers. By means of the feature-selection technique ‘Relief-F’ the dimensionality of the models is reduced, since it searches for the variables that are relevant for predicting the outcome. In this way, redundant variables are not used in the model-building process. The model is evaluated on real-life data provided by a retailer in FMCG. The contributions of this study for retailers as well as manufacturers are threefold. First, the possibility to classify customers concerning their coupon usage is shown. In addition, it is demonstrated that retailers and manufacturers can stay clear of each other in their marketing campaigns. Finally, the feature-selection technique ‘Relief-F’ proves to facilitate and optimize the performance of the models.",
"title": ""
},
{
"docid": "1e972c454587c5a3b24386f2b6ffc8fa",
"text": "Three classic cases and one exceptional case are reported. The unique case of decapitation took place in a traffic accident, while the others were seen after homicide, vehicle-assisted suicide, and after long-jump hanging. Thorough scene examinations were performed, and photographs from the scene were available in all cases. Through the autopsy of each case, the mechanism for the decapitation in each case was revealed. The severance lines were through the neck and the cervical vertebral column, except for in the motor vehicle accident case, where the base of skull was fractured. This case was also unusual as the mechanism was blunt force. In the homicide case, the mechanism was the use of a knife combined with a saw, while in the two last cases, a ligature made the cut through the neck. The different mechanisms in these decapitations are suggested.",
"title": ""
},
{
"docid": "76a44cf05ec89e965baf928d8082bd9b",
"text": "Strict head-final surface order derives from underlying left-headedness in Ijo , a Niger-Congo language of Nigeria. A word order anomaly in Ijo SERIAL VERB CONSTRUCTIONS (SVCs) strongly suggests this, and left-to-right asymmetric c-command among internal arguments of SVCs confirms it. The anomaly is universal among surface right-headed languages with SVCs, indicating that deep left-headedness is universal, as antisymmetry theory predicts (Kayne 1994). Assuming complements are in Specs, and that a light verb v selects every VP (Chomsky 1999), I derive VOVO from OVOV by two instances of V-to-v movement. I argue for a nonuniform approach to SVCs, involving relations of both raising (Campbell 1989) and control (Collins 1997). Other aspects of SVC word order are predictable from a universal thematic hierarchy nontheme theme, and short scrambling (Takano 1998).*",
"title": ""
},
{
"docid": "5f1269a603d68ab4faeadfcf9478fa0e",
"text": "A simple and inexpensive approach for extracting the threedimensional shape of objects is presented. It is based on `weak structured lighting'; it di ers from other conventional structured lighting approaches in that it requires very little hardware besides the camera: a desk-lamp, a pencil and a checkerboard. The camera faces the object, which is illuminated by the desk-lamp. The user moves a pencil in front of the light source casting a moving shadow on the object. The 3D shape of the object is extracted from the spatial and temporal location of the observed shadow. Experimental results are presented on three di erent scenes demonstrating that the error in reconstructing the surface is less than 1%.",
"title": ""
},
{
"docid": "6ca4d0021c11906bae4dbd5db9b47c80",
"text": "Writing code to interact with external devices is inherently difficult, and the added demands of writing device drivers in C for kernel mode compounds the problem. This environment is complex and brittle, leading to increased development costs and, in many cases, unreliable code. Previous solutions to this problem ignore the cost of migrating drivers to a better programming environment and require writing new drivers from scratch or even adopting a new operating system. We present Decaf Drivers, a system for incrementally converting existing Linux kernel drivers to Java programs in user mode. With support from programanalysis tools, Decaf separates out performance-sensitive code and generates a customized kernel interface that allows the remaining code to be moved to Java. With this interface, a programmer can incrementally convert driver code in C to a Java decaf driver. The Decaf Drivers system achieves performance close to native kernel drivers and requires almost no changes to the Linux kernel. Thus, Decaf Drivers enables driver programming to advance into the era of modern programming languages without requiring a complete rewrite of operating systems or drivers. With five drivers converted to Java, we show that Decaf Drivers can (1) move the majority of a driver’s code out of the kernel, (2) reduce the amount of driver code, (3) detect broken error handling at compile time with exceptions, (4) gracefully evolve as driver and kernel code and data structures change, and (5) perform within one percent of native kernel-only drivers.",
"title": ""
},
{
"docid": "c117a5fc0118f3ea6c576bb334759d59",
"text": "While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs. Defenses based on regularization and adversarial training have been proposed, but often followed by new, stronger attacks that defeat these defenses. Can we somehow end this arms race? In this work, we study this problem for neural networks with one hidden layer. We first propose a method based on a semidefinite relaxation that outputs a certificate that for a given network and test input, no attack can force the error to exceed a certain value. Second, as this certificate is differentiable, we jointly optimize it with the network parameters, providing an adaptive regularizer that encourages robustness against all attacks. On MNIST, our approach produces a network and a certificate that no attack that perturbs each pixel by at most = 0.1 can cause more than 35% test error.",
"title": ""
},
{
"docid": "4584a3a2b0e1cb30ba1976bd564d74b9",
"text": "Deep neural networks (DNNs) have achieved great success, but the applications to mobile devices are limited due to their huge model size and low inference speed. Much effort thus has been devoted to pruning DNNs. Layer-wise neuron pruning methods have shown their effectiveness, which minimize the reconstruction error of linear response with a limited number of neurons in each single layer pruning. In this paper, we propose a new layer-wise neuron pruning approach by minimizing the reconstruction error of nonlinear units, which might be more reasonable since the error before and after activation can change significantly. An iterative optimization procedure combining greedy selection with gradient decent is proposed for single layer pruning. Experimental results on benchmark DNN models show the superiority of the proposed approach. Particularly, for VGGNet, the proposed approach can compress its disk space by 13.6× and bring a speedup of 3.7×; for AlexNet, it can achieve a compression rate of 4.1× and a speedup of 2.2×, respectively.",
"title": ""
},
{
"docid": "15b26ceb3a81f4af6233ab8a36f66d3f",
"text": "The number of web images has been explosively growing due to the development of network and storage technology. These images make up a large amount of current multimedia data and are closely related to our daily life. To efficiently browse, retrieve and organize the web images, numerous approaches have been proposed. Since the semantic concepts of the images can be indicated by label information, automatic image annotation becomes one effective technique for image management tasks. Most existing annotation methods use image features that are often noisy and redundant. Hence, feature selection can be exploited for a more precise and compact representation of the images, thus improving the annotation performance. In this paper, we propose a novel feature selection method and apply it to automatic image annotation. There are two appealing properties of our method. First, it can jointly select the most relevant features from all the data points by using a sparsity-based model. Second, it can uncover the shared subspace of original features, which is beneficial for multi-label learning. To solve the objective function of our method, we propose an efficient iterative algorithm. Extensive experiments are performed on large image databases that are collected from the web. The experimental results together with the theoretical analysis have validated the effectiveness of our method for feature selection, thus demonstrating its feasibility of being applied to web image annotation.",
"title": ""
},
{
"docid": "0f891d97853f7bbeeb81c665d49516a3",
"text": "The concern with electric and magnetic fields generated by transmission lines grew after the publication of the report by Wertheimer and Leeper (1979), pointing out a possible association between childhood cancer and these fields. As a precaution, standards were created establishing limits for human exposure to such fields, making them to be calculated and analyzed even in the design phase. In order to facilitate this analysis, this work proposes a software, based on MATLAB®, that calculates the fields generated by lines with multiple circuits. 2D and 3D graphics are provided as results. The validation of the proposal was made through several case studies, in which transmission lines of the Companhia Hidro Eletrica do São Francisco were analyzed by the software and the results were compared with the reports of studies made by the company. One of these case studies is presented.",
"title": ""
},
{
"docid": "65a4ec1b13d740ae38f7b896edb2eaff",
"text": "The problem of evolutionary network analysis has gained increasing attention in recent years, because of an increasing number of networks, which are encountered in temporal settings. For example, social networks, communication networks, and information networks continuously evolve over time, and it is desirable to learn interesting trends about how the network structure evolves over time, and in terms of other interesting trends. One challenging aspect of networks is that they are inherently resistant to parametric modeling, which allows us to truly express the edges in the network as functions of time. This is because, unlike multidimensional data, the edges in the network reflect interactions among nodes, and it is difficult to independently model the edge as a function of time, without taking into account its correlations and interactions with neighboring edges. Fortunately, we show that it is indeed possible to achieve this goal with the use of a matrix factorization, in which the entries are parameterized by time. This approach allows us to represent the edge structure of the network purely as a function of time, and predict the evolution of the network over time. This opens the possibility of using the approach for a wide variety of temporal network analysis problems, such as predicting future trends in structures, predicting links, and node-centric anomaly/event detection. This flexibility is because of the general way in which the approach allows us to express the structure of the network as a function of time. We present a number of experimental results on a number of temporal data sets showing the effectiveness of the approach.",
"title": ""
},
{
"docid": "2ae69330b32aa485876e26ecc78ca66d",
"text": "One of the promising usages of Physically Unclonable Functions (PUFs) is to generate cryptographic keys from PUFs for secure storage of key material. This usage has attractive properties such as physical unclonability and enhanced resistance against hardware attacks. In order to extract a reliable cryptographic key from a noisy PUF response a fuzzy extractor is used to convert non-uniform random PUF responses into nearly uniform randomness. Bösch et al. in 2008 proposed a fuzzy extractor suitable for efficient hardware implementation using two-stage concatenated codes, where the inner stage is a conventional error correcting code and the outer stage is a repetition code. In this paper we show that the combination of PUFs with repetition code approaches is not without risk and must be approached carefully. For example, PUFs with min-entropy lower than 66% may yield zero leftover entropy in the generated key for some repetition code configurations. In addition, we find that many of the fuzzy extractor designs in the literature are too optimistic with respect to entropy estimation. For high security applications, we recommend a conservative estimation of entropy loss based on the theoretical work of fuzzy extractors and present parameters for generating 128-bit keys from memory based PUFs.",
"title": ""
}
] |
scidocsrr
|
e12f1dea29965bfcd5908d69671d7e49
|
Access Control Models for Virtual Object Communication in Cloud-Enabled IoT
|
[
{
"docid": "c2571afd6f2b9e9856c8f8c4eeb60b81",
"text": "In the Internet of Things, services can be provisioned using centralized architectures, where central entities acquire, process, and provide information. Alternatively, distributed architectures, where entities at the edge of the network exchange information and collaborate with each other in a dynamic way, can also be used. In order to understand the applicability and viability of this distributed approach, it is necessary to know its advantages and disadvantages – not only in terms of features but also in terms of security and privacy challenges. The purpose of this paper is to show that the distributed approach has various challenges that need to be solved, but also various interesting properties and strengths.",
"title": ""
},
{
"docid": "a08fe0c015f5fc02b7654f3fd00fb599",
"text": "Recently, there has been considerable interest in attribute based access control (ABAC) to overcome the limitations of the dominant access control models (i.e, discretionary-DAC, mandatory-MAC and role based-RBAC) while unifying their advantages. Although some proposals for ABAC have been published, and even implemented and standardized, there is no consensus on precisely what is meant by ABAC or the required features of ABAC. There is no widely accepted ABAC model as there are for DAC, MAC and RBAC. This paper takes a step towards this end by constructing an ABAC model that has “just sufficient” features to be “easily and naturally” configured to do DAC, MAC and RBAC. For this purpose we understand DAC to mean owner-controlled access control lists, MAC to mean lattice-based access control with tranquility and RBAC to mean flat and hierarchical RBAC. Our central contribution is to take a first cut at establishing formal connections between the three successful classical models and desired ABAC models.",
"title": ""
}
] |
[
{
"docid": "8eb96ae8116a16e24e6a3b60190cc632",
"text": "IT professionals are finding that more of their IT investments are being measured against a knowledge management (KM) metric. Those who want to deploy foundation technologies such as groupware, CRM or decision support tools, but fail to justify them on the basis of their contribution to KM, may find it difficult to get funding unless they can frame them within the KM context. Determining KM's pervasiveness and impact is analogous to measuring the contribution of marketing, employee development, or any other management or organizational competency. This paper addresses the problem of developing measurement models for KM metrics and discusses what current KM metrics are in use, and examine their sustainability and soundness in assessing knowledge utilization and retention of generating revenue. The paper will then discuss the use of a Balanced Scorecard approach to determine a business-oriented relationship between strategic KM usage and IT strategy and implementation.",
"title": ""
},
{
"docid": "8a6e062d17ee175e00288dd875603a9c",
"text": "Code summarization, aiming to generate succinct natural language description of source code, is extremely useful for code search and code comprehension. It has played an important role in software maintenance and evolution. Previous approaches generate summaries by retrieving summaries from similar code snippets. However, these approaches heavily rely on whether similar code snippets can be retrieved, how similar the snippets are, and fail to capture the API knowledge in the source code, which carries vital information about the functionality of the source code. In this paper, we propose a novel approach, named TL-CodeSum, which successfully uses API knowledge learned in a different but related task to code summarization. Experiments on large-scale real-world industry Java projects indicate that our approach is effective and outperforms the state-of-the-art in code summarization.",
"title": ""
},
{
"docid": "398c791338adf824a81a2bfb8f35c6bb",
"text": "Hybrid Reality Environments represent a new kind of visualization spaces that blur the line between virtual environments and high resolution tiled display walls. This paper outlines the design and implementation of the CAVE2 TM Hybrid Reality Environment. CAVE2 is the world’s first near-seamless flat-panel-based, surround-screen immersive system. Unique to CAVE2 is that it will enable users to simultaneously view both 2D and 3D information, providing more flexibility for mixed media applications. CAVE2 is a cylindrical system of 24 feet in diameter and 8 feet tall, and consists of 72 near-seamless, off-axisoptimized passive stereo LCD panels, creating an approximately 320 degree panoramic environment for displaying information at 37 Megapixels (in stereoscopic 3D) or 74 Megapixels in 2D and at a horizontal visual acuity of 20/20. Custom LCD panels with shifted polarizers were built so the images in the top and bottom rows of LCDs are optimized for vertical off-center viewingallowing viewers to come closer to the displays while minimizing ghosting. CAVE2 is designed to support multiple operating modes. In the Fully Immersive mode, the entire room can be dedicated to one virtual simulation. In 2D model, the room can operate like a traditional tiled display wall enabling users to work with large numbers of documents at the same time. In the Hybrid mode, a mixture of both 2D and 3D applications can be simultaneously supported. The ability to treat immersive work spaces in this Hybrid way has never been achieved before, and leverages the special abilities of CAVE2 to enable researchers to seamlessly interact with large collections of 2D and 3D data. To realize this hybrid ability, we merged the Scalable Adaptive Graphics Environment (SAGE) a system for supporting 2D tiled displays, with Omegalib a virtual reality middleware supporting OpenGL, OpenSceneGraph and Vtk applications.",
"title": ""
},
{
"docid": "acddf623a4db29f60351f41eb8d0b113",
"text": "In an age where people are becoming increasing likely to trust information found through online media, journalists have begun employing techniques to lure readers to articles by using catchy headlines, called clickbait. These headlines entice the user into clicking through the article whilst not providing information relevant to the headline itself. Previous methods of detecting clickbait have explored techniques heavily dependent on feature engineering, with little experimentation having been tried with neural network architectures. We introduce a novel model combining recurrent neural networks, attention layers and image embeddings. Our model uses a combination of distributed word embeddings derived from unannotated corpora, character level embeddings calculated through Convolutional Neural Networks. These representations are passed through a bidirectional LSTM with an attention layer. The image embeddings are also learnt from large data using CNNs. Experimental results show that our model achieves an F1 score of 65.37% beating the previous benchmark of 55.21%.",
"title": ""
},
{
"docid": "fc875b50a03dcae5cbde23fa7f9b16bf",
"text": "Although considerable research has shown the importance of social connection for physical health, little is known about the higher-level neurocognitive processes that link experiences of social connection or disconnection with health-relevant physiological responses. Here we review the key physiological systems implicated in the link between social ties and health and the neural mechanisms that may translate social experiences into downstream health-relevant physiological responses. Specifically, we suggest that threats to social connection may tap into the same neural and physiological 'alarm system' that responds to other critical survival threats, such as the threat or experience of physical harm. Similarly, experiences of social connection may tap into basic reward-related mechanisms that have inhibitory relationships with threat-related responding. Indeed, the neurocognitive correlates of social disconnection and connection may be important mediators for understanding the relationships between social ties and health.",
"title": ""
},
{
"docid": "3f5097b33aab695678caca712b649a8f",
"text": "I quantitatively measure the nature of the media’s interactions with the stock market using daily content from a popular Wall Street Journal column. I find that high media pessimism predicts downward pressure on market prices followed by a reversion to fundamentals, and unusually high or low pessimism predicts high market trading volume. These results and others are consistent with theoretical models of noise and liquidity traders. However, the evidence is inconsistent with theories of media content as a proxy for new information about fundamental asset values, as a proxy for market volatility, or as a sideshow with no relationship to asset markets. ∗Tetlock is at the McCombs School of Business, University of Texas at Austin. I am indebted to Robert Stambaugh (the editor), an anonymous associate editor and an anonymous referee for their suggestions. I am grateful to Aydogan Alti, John Campbell, Lorenzo Garlappi, Xavier Gabaix, Matthew Gentzkow, John Griffin, Seema Jayachandran, David Laibson, Terry Murray, Alvin Roth, Laura Starks, Jeremy Stein, Philip Tetlock, Sheridan Titman and Roberto Wessels for their comments. I thank Philip Stone for providing the General Inquirer software and Nathan Tefft for his technical expertise. I appreciate Robert O’Brien’s help in providing information about the Wall Street Journal. I also acknowledge the National Science Foundation, Harvard University and the University of Texas at Austin for their financial support. All mistakes in this article are my own.",
"title": ""
},
{
"docid": "d6602271d7024f7d894b14da52299ccc",
"text": "BACKGROUND\nMost articles on face composite tissue allotransplantation have considered ethical and immunologic aspects. Few have dealt with the technical aspects of graft procurement. The authors report the technical difficulties involved in procuring a lower face graft for allotransplantation.\n\n\nMETHODS\nAfter a preclinical study of 20 fresh cadavers, the authors carried out an allotransplantation of the lower two-thirds of the face on a patient in January of 2007. The graft included all the perioral muscles, the facial nerves (VII, V2, and V3) and, for the first time, the parotid glands.\n\n\nRESULTS\nThe preclinical study and clinical results confirm that complete revascularization of a graft consisting of the lower two-thirds of the face is possible from a single facial pedicle. All dissections were completed within 3 hours. Graft procurement for the clinical study took 4 hours. The authors harvested the soft tissues of the face en bloc to save time and to prevent tissue injury. They restored the donor's face within approximately 4 hours, using a resin mask colored to resemble the donor's skin tone. All nerves were easily reattached. Voluntary activity was detected on clinical examination 5 months postoperatively, and electromyography confirmed nerve regrowth, with activity predominantly on the left side. The patient requested local anesthesia for biopsies performed in month 4.\n\n\nCONCLUSIONS\nPartial facial composite tissue allotransplantation of the lower two-thirds of the face is technically feasible, with a good cosmetic and functional outcome in selected clinical cases. Flaps of this type establish vascular and neurologic connections in a reliable manner and can be procured with a rapid, standardized procedure.",
"title": ""
},
{
"docid": "bba6fad7d1d32683e95e475632c9a9e5",
"text": "A great variety of text tasks such as topic or spam identification, user profiling, and sentiment analysis can be posed as a supervised learning problem and tackle using a text classifier. A text classifier consists of several subprocesses, some of them are general enough to be applied to any supervised learning problem, whereas others are specifically designed to tackle a particular task, using complex and computational expensive processes such as lemmatization, syntactic analysis, etc. Contrary to traditional approaches, we propose a minimalistic and wide system able to tackle text classification tasks independent of domain and language, namely μTC. It is composed by some easy to implement text transformations, text representations, and a supervised learning algorithm. These pieces produce a competitive classifier even in the domain of informally written text. We provide a detailed description of μTC along with an extensive experimental comparison with relevant state-of-the-art methods. μTC was compared on 30 different datasets. Regarding accuracy, μTC obtained the best performance in 20 datasets while achieves competitive results in the remaining 10. The compared datasets include several problems like topic and polarity classification, spam detection, user profiling and authorship attribution. Furthermore, it is important to state that our approach allows the usage of the technology even without knowledge of machine learning and natural language processing. ∗CONACyT Consejo Nacional de Ciencia y Tecnoloǵıa, Dirección de Cátedras, Insurgentes Sur 1582, Crédito Constructor 03940, Ciudad de México, México. †INFOTEC Centro de Investigación e Innovación en Tecnoloǵıas de la Información y Comunicación, Circuito Tecnopolo Sur No 112, Fracc. Tecnopolo Pocitos II, Aguascalientes 20313, México. ‡Centro de Investigación en Geograf́ıa y Geomática “Ing. Jorge L. Tamayo”, A.C. Circuito Tecnopolo Norte No. 117, Col. Tecnopolo Pocitos II, C.P. 20313,. Aguascalientes, Ags, México. 1 ar X iv :1 70 4. 01 97 5v 2 [ cs .C L ] 1 4 Se p 20 17",
"title": ""
},
{
"docid": "07425e53be0f6314d52e3b4de4d1b601",
"text": "Delay discounting was investigated in opioid-dependent and non-drug-using control participants. The latter participants were matched to the former on age, gender, education, and IQ. Participants in both groups chose between hypothetical monetary rewards available either immediately or after a delay. Delayed rewards were $1,000, and the immediate-reward amount was adjusted until choices reflected indifference. This procedure was repeated at each of 7 delays (1 week to 25 years). Opioid-dependent participants were given a second series of choices between immediate and delayed heroin, using the same procedures (i.e., the amount of delayed heroin was that which could be purchased with $1,000). Opioid-dependent participants discounted delayed monetary rewards significantly more than did non-drug-using participants. Furthermore opioid-dependent participants discounted delayed heroin significantly more than delayed money.",
"title": ""
},
{
"docid": "7ca7ec2efe89bc031cc8aa5ce549c7f5",
"text": "Conventional reverse vending machines use complex image processing technology to detect the bottles which make it more expensive. In this paper the design of a Smart Bottle Recycle Machine (SBRM) is presented. It is designed on a Field Programmable Gate Array (FPGA) using an ultrasonic range sensor which is readily available at a low cost. The sensor was used to calculate the number of bottles and distinguish between them. The main objective of this project is to build a SBRM at a cheaper production cost. This project was implemented on Altera DE2-115 board using Verilog HDL. This prototype enables the user to recycle plastic bottles and receive reward points. FPGA was chosen because hardware based implementation on a FPGA is usually much faster than the software based implementation on a microcontroller. The former is also capable of executing concurrent parallel processes at a high speed where the latter can only do a limited amount of parallel execution. So, overall FPGAs are more efficient than the microcontrollers for development of reliable and real time applications. The developed project is environment friendly and cost effective.",
"title": ""
},
{
"docid": "61d506905286fc3297622d1ac39534f0",
"text": "In this paper we present the setup of an extensive Wizard-of-Oz environment used for the data collection and the development of a dialogue system. The envisioned Perception and Interaction Assistant will act as an independent dialogue partner. Passively observing the dialogue between the two human users with respect to a limited domain, the system should take the initiative and get meaningfully involved in the communication process when required by the conversational situation. The data collection described here involves audio and video data. We aim at building a rich multi-media data corpus to be used as a basis for our research which includes, inter alia, speech and gaze direction recognition, dialogue modelling and proactivity of the system. We further aspire to obtain data with emotional content to perfom research on emotion recognition, psychopysiological and usability analysis.",
"title": ""
},
{
"docid": "6c5a5bc775316efc278285d96107ddc6",
"text": "STUDY DESIGN\nRetrospective study of 55 consecutive patients with spinal metastases secondary to breast cancer who underwent surgery.\n\n\nOBJECTIVE\nTo evaluate the predictive value of the Tokuhashi score for life expectancy in patients with breast cancer with spinal metastases.\n\n\nSUMMARY OF BACKGROUND DATA\nThe score, composed of 6 parameters each rated from 0 to 2, has been proposed by Tokuhashi and colleagues for the prognostic assessment of patients with spinal metastases.\n\n\nMETHODS\nA total of 55 patients surgically treated for vertebral metastases secondary to breast cancer were studied. The score was calculated for each patient and, according to Tokuhashi, the patients were divided into 3 groups with different life expectancy according to their total number of scoring points. In a second step, the grouping for prognosis was modified to get a better correlation of the predicted and definitive survival.\n\n\nRESULTS\nApplying the Tokuhashi score for the estimation of life expectancy of patients with breast cancer with vertebral metastases provided very reliable results. However, the original analysis by Tokuhashi showed a limited correlation between predicted and real survival for each prognostic group. Therefore, our patients were divided into modified prognostic groups regarding their total number of scoring points, leading to a higher significance of the predicted prognosis in each group (P < 0.0001), and a better correlation of the predicted and real survival.\n\n\nCONCLUSION\nThe modified Tokuhashi score assists in decision making based on reliable estimators of life expectancy in patients with spinal metastases secondary to breast cancer.",
"title": ""
},
{
"docid": "c61c350d6c7bfe7eaae2cd4b2aa452cf",
"text": "It is a well-established finding that the central executive is fractionated in at least three separable component processes: Updating, Shifting, and Inhibition of information (Miyake et al., 2000). However, the fractionation of the central executive among the elderly has been less well explored, and Miyake's et al. latent structure has not yet been integrated with other models that propose additional components, such as access to long-term information. Here we administered a battery of classic and newer neuropsychological tests of executive functions to 122 healthy individuals aged between 48 and 91 years. The test scores were subjected to a latent variable analysis (LISREL), and yielded four factors. The factor structure obtained was broadly consistent with Miyake et al.'s three-factor model. However, an additional factor, which was labeled 'efficiency of access to long-term memory', and a mediator factor ('speed of processing') were apparent in our structural equation analysis. Furthermore, the best model that described executive functioning in our sample of healthy elderly adults included a two-factor solution, thus indicating a possible mechanism of dedifferentiation, which involves larger correlations and interdependence of latent variables as a consequence of cognitive ageing. These results are discussed in the light of current models of prefrontal cortex functioning.",
"title": ""
},
{
"docid": "2e66317dfe4005c069ceac2d4f9e3877",
"text": "The Semantic Web presents the vision of a distributed, dynamically growing knowledge base founded on formal logic. Common users, however, seem to have problems even with the simplest Boolean expression. As queries from web search engines show, the great majority of users simply do not use Boolean expressions. So how can we help users to query a web of logic that they do not seem to understand? We address this problem by presenting Ginseng, a quasi natural language guided query interface to the Semantic Web. Ginseng relies on a simple question grammar which gets dynamically extended by the structure of an ontology to guide users in formulating queries in a language seemingly akin to English. Based on the grammar Ginseng then translates the queries into a Semantic Web query language (RDQL), which allows their execution. Our evaluation with 20 users shows that Ginseng is extremely simple to use without any training (as opposed to any logic-based querying approach) resulting in very good query performance (precision = 92.8%, recall = 98.4%). We, furthermore, found that even with its simple grammar/approach Ginseng could process over 40% of questions from a query corpus without modification.",
"title": ""
},
{
"docid": "739aaf487d6c5a7b7fe9d0157d530382",
"text": "A blockchain framework is presented for addressing the privacy and security challenges associated with the Big Data in smart mobility. It is composed of individuals, companies, government and universities where all the participants collect, own, and control their data. Each participant shares their encrypted data to the blockchain network and can make information transactions with other participants as long as both party agrees to the transaction rules (smart contract) issued by the owner of the data. Data ownership, transparency, auditability and access control are the core principles of the proposed blockchain for smart mobility Big Data.",
"title": ""
},
{
"docid": "a15c94c0ec40cb8633d7174b82b70a16",
"text": "Koenigs, Young and colleagues [1] recently tested patients with emotion-related damage in the ventromedial prefrontal cortex (VMPFC) usingmoral dilemmas used in previous neuroimaging studies [2,3]. These patients made unusually utilitarian judgments (endorsing harmful actions that promote the greater good). My collaborators and I have proposed a dual-process theory of moral judgment [2,3] that we claim predicts this result. In a Research Focus article published in this issue of Trends in Cognitive Sciences, Moll and de Oliveira-Souza [4] challenge this interpretation. Our theory aims to explain some puzzling patterns in commonsense moral thought. For example, people usually approve of diverting a runaway trolley thatmortally threatens five people onto a side-track, where it will kill only one person. And yet people usually disapprove of pushing someone in front of a runaway trolley, where this will kill the person pushed, but save five others [5]. Our theory, in a nutshell, is this: the thought of pushing someone in front of a trolley elicits a prepotent, negative emotional response (supported in part by the medial prefrontal cortex) that drives moral disapproval [2,3]. People also engage in utilitarian moral reasoning (aggregate cost–benefit analysis), which is likely subserved by the dorsolateral prefrontal cortex (DLPFC) [2,3]. When there is no prepotent emotional response, utilitarian reasoning prevails (as in the first case), but sometimes prepotent emotions and utilitarian reasoning conflict (as in the second case). This conflict is detected by the anterior cingulate cortex, which signals the need for cognitive control, to be implemented in this case by the anterior DLPFC [Brodmann’s Areas (BA) 10/46]. Overriding prepotent emotional responses requires additional cognitive control and, thus, we find increased activity in the anterior DLPFC when people make difficult utilitarian moral judgments [3]. More recent studies support this theory: if negative emotions make people disapprove of pushing the man to his death, then inducing positive emotion might lead to more utilitarian approval, and this is indeed what happens [6]. Likewise, patients with frontotemporal dementia (known for their ‘emotional blunting’) should more readily approve of pushing the man in front of the trolley, and they do [7]. This finding directly foreshadows the hypoemotional VMPFC patients’ utilitarian responses to this and other cases [1]. Finally, we’ve found that cognitive load selectively interferes with utilitarian moral judgment,",
"title": ""
},
{
"docid": "fe25930abd98cba844a6e7a849dae621",
"text": "Research in Autonomous Mobile Manipulation critically depends on the availability of adequate experimental platforms. In this paper, we describe an ongoing effort at the University of Massachusetts Amherst to construct a hardware platform with redundant kinematic degrees of freedom, a comprehensive sensor suite, and significant end-effector capabilities for manipulation. In our research, we pursue an end-effector centric view of autonomous mobile manipulation. In support of this view, we are developing a comprehensive software suite to provide a high level of competency in robot control and perception. This software suite is based on a multi-objective, tasklevel motion control framework. We use this control framework to integrate a variety of motion capabilities, including taskbased force or position control of the end-effector, collision-free global motion for the entire mobile manipulator, and mapping and navigation for the mobile base. We also discuss our efforts in developing perception capabilities targeted to problems in autonomous mobile manipulation. Preliminary experiments on our UMass Mobile Manipulator (UMan) are presented.",
"title": ""
},
{
"docid": "c4332dfb8e8117c3deac7d689b8e259b",
"text": "Learning through experience is time-consuming, inefficient and often bad for your cortisol levels. To address this problem, a number of recently proposed teacherstudent methods have demonstrated the benefits of private tuition, in which a single model learns from an ensemble of more experienced tutors. Unfortunately, the cost of such supervision restricts good representations to a privileged minority. Unsupervised learning can be used to lower tuition fees, but runs the risk of producing networks that require extracurriculum learning to strengthen their CVs and create their own LinkedIn profiles1. Inspired by the logo on a promotional stress ball at a local recruitment fair, we make the following three contributions. First, we propose a novel almost no supervision training algorithm that is effective, yet highly scalable in the number of student networks being supervised, ensuring that education remains affordable. Second, we demonstrate our approach on a typical use case: learning to bake, developing a method that tastily surpasses the current state of the art. Finally, we provide a rigorous quantitive analysis of our method, proving that we have access to a calculator2. Our work calls into question the long-held dogma that life is the best teacher. Give a student a fish and you feed them for a day, teach a student to gatecrash seminars and you feed them until the day they move to Google.",
"title": ""
},
{
"docid": "021bed3f2c2f09db1bad7d11108ee430",
"text": "This is a review of Introduction to Circle Packing: The Theory of Discrete Analytic Functions, by Kenneth Stephenson, Cambridge University Press, Cambridge UK, 2005, pp. i-xii, 1–356, £42, ISBN-13 978-0-521-82356-2. 1. The Context: A Personal Reminiscence Two important stories in the recent history of mathematics are those of the geometrization of topology and the discretization of geometry. Having come of age during the unfolding of these stories as both observer and practitioner, this reviewer does not hold the detachment of the historian and, perhaps, can be forgiven the personal accounting that follows, along with its idiosyncratic telling. The first story begins at a time when the mathematical world is entrapped by abstraction. Bourbaki reigns and generalization is the cry of the day. Coxeter is a curious doddering uncle, at best tolerated, at worst vilified as a practitioner of the unsophisticated mathematics of the nineteenth century. 1.1. The geometrization of topology. It is 1978 and I have just begun my graduate studies in mathematics. There is some excitement in the air over ideas of Bill Thurston that purport to offer a way to resolve the Poincaré conjecture by using nineteenth century mathematics—specifically, the noneuclidean geometry of Lobachevski and Bolyai—to classify all 3-manifolds. These ideas finally appear in a set of notes from Princeton a couple of years later, and the notes are both fascinating and infuriating—theorems are left unstated and often unproved, chapters are missing never to be seen, the particular dominates—but the notes are bulging with beautiful and exciting ideas, often with but sketches of intricate arguments to support the landscape that Thurston sees as he surveys the topology of 3-manifolds. Thurston’s vision is a throwback to the previous century, having much in common with the highly geometric, highly particular landscape that inspired Felix Klein and Max Dehn. These geometers walked around and within Riemann surfaces, one of the hot topics of the day, knew them intimately, and understood them in their particularity, not from the rarified heights that captured the mathematical world in general, and topology in particular, in the period from the 1930’s until the 1970’s. The influence of Thurston’s Princeton notes on the development of topology over the next 30 years would be pervasive, not only in its mathematical content, but AMS SUBJECT CLASSIFICATION: 52C26",
"title": ""
}
] |
scidocsrr
|
2d73314495066bec41d937a3150b57ba
|
Mining the Correlation between Lyrical and Audio Features and the Emergence of Mood
|
[
{
"docid": "1ace2a8a8c6b4274ac0891e711d13190",
"text": "Recent music information retrieval (MIR) research pays increasing attention to music classification based on moods expressed by music pieces. The first Audio Mood Classification (AMC) evaluation task was held in the 2007 running of the Music Information Retrieval Evaluation eXchange (MIREX). This paper describes important issues in setting up the task, including dataset construction and ground-truth labeling, and analyzes human assessments on the audio dataset, as well as system performances from various angles. Interesting findings include system performance differences with regard to mood clusters and the levels of agreement amongst human judgments regarding mood labeling. Based on these analyses, we summarize experiences learned from the first community scale evaluation of the AMC task and propose recommendations for future AMC and similar evaluation tasks.",
"title": ""
}
] |
[
{
"docid": "74a2b36d9ed257e7bdb204186953891e",
"text": "Text Summarization solves climacteric problems in furnishing information to the necessities of user. Due to explosive growth of digital data on internet, information floods are the results to the user queries. This makes user impractical to read entire documents and select the desirables. To this problem summarization is a novel approach which surrogates the original document by not deviating from the theme helps the user to find documents easily. Summarization area was broadly spread over different research fields, Natural Language Processing (NLP), Machine Learning and Semantics etc… Summarization is classified mainly into two techniques Abstract and Extract. This article gives a deep review of Abstract summarization techniques.",
"title": ""
},
{
"docid": "4e67d4c9fb2b95bcb40aa7a2d34cbdf2",
"text": "Currently, multiple data vendors utilize the cloud-computing paradigm for trading raw data, associated analytical services, and analytic results as a commodity good. We observe that these vendors often move the functionality of data warehouses to cloud-based platforms. On such platforms, vendors provide services for integrating and analyzing data from public and commercial data sources. We present insights from interviews with seven established vendors about their key challenges with regard to pricing strategies in different market situations and derive associated research problems for the business intelligence community.",
"title": ""
},
{
"docid": "798ee46a8ac10787eaa154861d0311c6",
"text": "In the last few years, we have seen the transformative impact of deep learning in many applications, particularly in speech recognition and computer vision. Inspired by Google's Inception-ResNet deep convolutional neural network (CNN) for image classification, we have developed\"Chemception\", a deep CNN for the prediction of chemical properties, using just the images of 2D drawings of molecules. We develop Chemception without providing any additional explicit chemistry knowledge, such as basic concepts like periodicity, or advanced features like molecular descriptors and fingerprints. We then show how Chemception can serve as a general-purpose neural network architecture for predicting toxicity, activity, and solvation properties when trained on a modest database of 600 to 40,000 compounds. When compared to multi-layer perceptron (MLP) deep neural networks trained with ECFP fingerprints, Chemception slightly outperforms in activity and solvation prediction and slightly underperforms in toxicity prediction. Having matched the performance of expert-developed QSAR/QSPR deep learning models, our work demonstrates the plausibility of using deep neural networks to assist in computational chemistry research, where the feature engineering process is performed primarily by a deep learning algorithm.",
"title": ""
},
{
"docid": "2d4348b42befdc8c02d29617311c6377",
"text": "Research on Smart Grids has recently focused on the energy monitoring issue, with the objective to maximize the user consumption awareness in building contexts on one hand, and to provide a detailed description of customer habits to the utilities on the other. One of the hottest topic in this field is represented by Non-Intrusive Load Monitoring (NILM): it refers to those techniques aimed at decomposing the consumption aggregated data acquired at a single point of measurement into the diverse consumption profiles of appliances operating in the electrical system under study. The focus here is on unsupervised algorithms, which are the most interesting and of practical use in real case scenarios. Indeed, these methods rely on a sustainable amount of a-priori knowledge related to the applicative context of interest, thus minimizing the user intervention to operate, and are targeted to extract all information to operate directly from the measured aggregate data. This paper reports and describes the most promising unsupervised NILM methods recently proposed in the literature, by dividing them into two main categories: load classification and source separation approaches. An overview of the public available dataset used on purpose and a comparative analysis of the algorithms performance is provided, together with a discussion of challenges and future research directions.",
"title": ""
},
{
"docid": "cf56e58dc8bf7ea6e5eb3b6c0ee9a170",
"text": "Ultra-wideband (UWB) radar plays an important role in search and rescue at disaster relief sites. Identifying vital signs and locating buried survivors are two important research contents in this field. In general, it is hard to identify a human's vital signs (breathing and heartbeat) in complex environments due to the low signal-to-noise ratio of the vital sign in radar signals. In this paper, advanced signal-processing approaches are used to identify and to extract human vital signs in complex environments. First, we apply Curvelet transform to remove the source-receiver direct coupling wave and background clutters. Next, singular value decomposition is used to de-noise in the life signals. Finally, the results are presented based on FFT and Hilbert-Huang transform to separate and to extract human vital sign frequencies, as well as the micro-Doppler shift characteristics. The proposed processing approach is first tested by a set of synthetic data generated by FDTD simulation for UWB radar detection of two trapped victims under debris at an earthquake site of collapsed buildings. Then, it is validated by laboratory experiments data. The results demonstrate that the combination of UWB radar as the hardware and advanced signal-processing algorithms as the software has potential for efficient vital sign detection and location in search and rescue for trapped victims in complex environment.",
"title": ""
},
{
"docid": "e0585e397b659507982ff517470d5744",
"text": "Smartphones are exceedingly popular, with the Android platform being no exception. Also, the surge of applications available for such devices has revolutionized our lives, many of which process a significant amount of personal information. Instant Messaging applications are an excellent example of this. In addition to processing this information, there is a high likelihood that they store traces of it in local storage.\n Increasingly, smartphones are involved in law enforcement investigations. They may be found as evidence at the scene of a crime, and require forensic analysis. It has translated into strong demand for Android digital forensics. A critical stage in such an investigation is data acquisition. An investigator must extract the data (in a forensically sound way) before it can be analyzed. This paper provides a survey and analysis of many acquisition methods. In addition, we conduct our own experiment that showcases an excellent acquisition method in practice, and also shows our data analysis methodology as we analyze the private storage of two popular instant messaging applications.",
"title": ""
},
{
"docid": "b515eb759984047f46f9a0c27b106f47",
"text": "Visual motion estimation is challenging, due to high data rates, fast camera motions, featureless or repetitive environments, uneven lighting, and many other issues. In this work, we propose a twolayer approach for visual odometry with stereo cameras, which runs in real-time and combines feature-based matching with semi-dense direct image alignment. Our method initializes semi-dense depth estimation, which is computationally expensive, from motion that is tracked by a fast but robust feature point-based method. By that, we are not only able to efficiently estimate the pose of the camera with a high frame rate, but also to reconstruct the 3D structure of the environment at image gradients, which is useful, e.g., for mapping and obstacle avoidance. Experiments on datasets captured by a micro aerial vehicle (MAV) show that our approach is faster than state-of-the-art methods without losing accuracy. Moreover, our combined approach achieves promising results on the KITTI dataset, which is very challenging for direct methods, because of the low frame rate in conjunction with fast motion.",
"title": ""
},
{
"docid": "b2f6c6b4e14824dcd78cdc28547503c8",
"text": "This paper describes the design of digital tracking loops for GPS receivers in a high dynamics environment, without external aiding. We adopted the loop structure of a frequency-locked loop (FLL)-assisted phase-locked loop (PLL) and design it to track accelerations steps, as those occurring in launching vehicles. We used a completely digital model of the loop where the FLL and PLL parts are jointly designed, as opposed to the classical discretized analog model with separately designed FLL and PLL. The new approach does not increase the computational burden. We performed simulations and real RF signal experiments of a fixed-point implementation of the loop, showing that reliable tracking of steps up to 40 g can be achieved",
"title": ""
},
{
"docid": "9dceccb7b171927a5cba5a16fd9d76c6",
"text": "This paper involved developing two (Type I and Type II) equal-split Wilkinson power dividers (WPDs). The Type I divider can use two short uniform-impedance transmission lines, one resistor, one capacitor, and two quarter-wavelength (λ/4) transformers in its circuit. Compared with the conventional equal-split WPD, the proposed Type I divider can relax the two λ/4 transformers and the output ports layout restrictions of the conventional WPD. To eliminate the number of impedance transformers, the proposed Type II divider requires only one impedance transformer attaining the optimal matching design and a compact size. A compact four-way equal-split WPD based on the proposed Type I and Type II dividers was also developed, facilitating a simple layout, and reducing the circuit size. Regarding the divider, to obtain favorable selectivity and isolation performance levels, two Butterworth filter transformers were integrated in the proposed Type I divider to perform filter response and power split functions. Finally, a single Butterworth filter transformer was integrated in the proposed Type II divider to demonstrate a compact filtering WPD.",
"title": ""
},
{
"docid": "93df984beae6626b70d954792f6c012e",
"text": "We show that for any ε > 0, a maximum-weight triangle in an undirected graph with <i>n</i> vertices and real weights assigned to vertices can be found in time O(<i>n</i>ω + <i>n</i><sup>2+ε</sup>), where ω is the exponent of fastest matrix multiplication algorithm. By the currently best bound on ω, the running time of our algorithm is O(<i>n</i><sup>2.376</sup>). Our algorithm substantially improves the previous time-bounds for this problem recently established by Vassilevska et al. (STOC 2006, O(<i>n</i><sup>2.688</sup>)) and (ICALP 2006, O(<i>n</i><sup>2.575</sup>)). Its asymptotic time complexity matches that of the fastest known algorithm for finding <i>a</i> triangle (not necessarily a maximum-weight one) in a graph.\n By applying or extending our algorithm, we can also improve the upper bounds on finding a maximum-weight triangle in a sparse graph and on finding a maximum-weight subgraph isomorphic to a fixed graph established in the papers by Vassilevska et al. For example, we can find a maximum-weight triangle in a vertex-weighted graph with <i>m</i> edges in asymptotic time required by the fastest algorithm for finding <i>any</i> triangle in a graph with <i>m</i> edges, i.e., in time O(<i>m</i><sup>1.41</sup>).",
"title": ""
},
{
"docid": "ad11c058a9a7acfb8c50cd31b259653d",
"text": "We predict credit applications with off-the-shelf, interchangeable black-box classifiers and we explain single predictions with counterfactual explanations. Counterfactual explanations expose the minimal changes required on the input data to obtain a different result e.g., approved vs rejected application. Despite their effectiveness, counterfactuals are mainly designed for changing an undesired outcome of a prediction i.e. loan rejected. Counterfactuals, however, can be difficult to interpret, especially when a high number of features are involved in the explanation. Our contribution is two-fold: i) we propose positive counterfactuals, i.e. we adapt counterfactual explanations to also explain accepted loan applications, and ii) we propose two weighting strategies to generate more interpretable counterfactuals. Experiments on the HELOC loan applications dataset show that our contribution outperforms the baseline counterfactual generation strategy, by leading to smaller and hence more interpretable counterfactuals.",
"title": ""
},
{
"docid": "e51fe5b534af40d32d9525f0f80f1d23",
"text": "With the increasing popularity of virtual currencies, it has become more important to have highly secure devices in which to store private-key information. Furthermore, ARM has made available an extension of processors architectures, designated TrustZone, which allows for the separation of trusted and non-trusted environments, while ensuring the integrity of the OS code. In this paper, we propose the exploitation of this technology to implement a flexible and reliable bitcoin wallet that is more resilient to dictionary and side-channel attacks. Making use of the TrustZone comes with the downside that writing and reading operations become slower, due to the encrypted storage, but we show that cryptographic operations can in fact be executed more efficiently as a result of platform-specific optimizations.",
"title": ""
},
{
"docid": "a64a83791259350d5d76dc1ea097a7fb",
"text": "Today the channels for expressing opinions seem to increase daily. When these opinions are relevant to a company, they are important sources of business insight, whether they represent critical intelligence about a customer's defection risk, the impact of an influential reviewer on other people's purchase decisions, or early feedback on product releases, company news or competitors. Capturing and analyzing these opinions is a necessity for proactive product planning, marketing and customer service and it is also critical in maintaining brand integrity. The importance of harnessing opinion is growing as consumers use technologies such as Twitter to express their views directly to other consumers. Tracking the disparate sources of opinion is hard - but even harder is quickly and accurately extracting the meaning so companies can analyze and act. Tweets' Language is complicated and contextual, especially when people are expressing opinions and requires reliable sentiment analysis based on parsing many linguistic shades of gray. This article argues that using the R programming platform for analyzing tweets programmatically simplifies the task of sentiment analysis and opinion mining. An R programming technique has been used for testing different sentiment lexicons as well as different scoring schemes. Experiments on analyzing the tweets of users over six NHL hockey teams reveals the effectively of using the opinion lexicon and the Latent Dirichlet Allocation (LDA) scoring scheme.",
"title": ""
},
{
"docid": "665f109e8263b687764de476befcbab9",
"text": "In this work we analyze the behavior on a company-internal social network site to determine which interaction patterns signal closeness between colleagues. Regression analysis suggests that employee behavior on social network sites (SNSs) reveals information about both professional and personal closeness. While some factors are predictive of general closeness (e.g. content recommendations), other factors signal that employees feel personal closeness towards their colleagues, but not professional closeness (e.g. mutual profile commenting). This analysis contributes to our understanding of how SNS behavior reflects relationship multiplexity: the multiple facets of our relationships with SNS connections.",
"title": ""
},
{
"docid": "2f7944399a1f588d1b11d3cf7846af1c",
"text": "Corrosion can cause section loss or cracks in the steel members which is one of the most important causes of deterioration of steel bridges. For some critical components of a steel bridge, it is fatal and could even cause the collapse of the whole bridge. Nowadays the most common approach to steel bridge inspection is visual inspection by inspectors with inspection trucks. This paper mainly presents a climbing robot with magnetic wheels which can move on the surface of steel bridge. Experiment results shows that the climbing robot can move on the steel bridge freely without disrupting traffic to reduce the risks to the inspectors.",
"title": ""
},
{
"docid": "97382e18c9ca7c42d8b6c908cde761f2",
"text": "In recent years, heatmap regression based models have shown their effectiveness in face alignment and pose estimation. However, Conventional Heatmap Regression (CHR) is not accurate nor stable when dealing with high-resolution facial videos, since it finds the maximum activated location in heatmaps which are generated from rounding coordinates, and thus leads to quantization errors when scaling back to the original high-resolution space. In this paper, we propose a Fractional Heatmap Regression (FHR) for high-resolution video-based face alignment. The proposed FHR can accurately estimate the fractional part according to the 2D Gaussian function by sampling three points in heatmaps. To further stabilize the landmarks among continuous video frames while maintaining the precise at the same time, we propose a novel stabilization loss that contains two terms to address time delay and non-smooth issues, respectively. Experiments on 300W, 300VW and Talking Face datasets clearly demonstrate that the proposed method is more accurate and stable than the state-ofthe-art models. Introduction Face alignment aims to estimate a set of facial landmarks given a face image or video sequence. It is a classic computer vision problem that has attributed to many advanced machine learning algorithms Fan et al. (2018); Bulat and Tzimiropoulos (2017); Trigeorgis et al. (2016); Peng et al. (2015, 2016); Kowalski, Naruniec, and Trzcinski (2017); Chen et al. (2017); Liu et al. (2017); Hu et al. (2018). Nowadays, with the rapid development of consumer hardwares (e.g., mobile phones, digital cameras), High-Resolution (HR) video sequences can be easily collected. Estimating facial landmarks on such highresolution facial data has tremendous applications, e.g., face makeup Chen, Shen, and Jia (2017), editing with special effects Korshunova et al. (2017) in live broadcast videos. However, most existing face alinement methods work on faces with medium image resolutions Chen et al. (2017); Bulat and Tzimiropoulos (2017); Peng et al. (2016); Liu et al. (2017). Therefore, developing face alignment algorithms for high-resolution videos is at the core of this paper. To this end, we propose an accurate and stable algorithm for high-resolution video-based face alignment, named Fractional Heatmap Regression (FHR). It is well known that ∗ indicates equal contributions. Conventional Heatmap Regression (CHR) Loss Fractional Heatmap Regression (FHR) Loss 930 744 411",
"title": ""
},
{
"docid": "dc445d234bafaf115495ce1838163463",
"text": "In this paper, a novel camera tamper detection algorithm is proposed to detect three types of tamper attacks: covered, moved and defocused. The edge disappearance rate is defined in order to measure the amount of edge pixels that disappear in the current frame from the background frame while excluding edges in the foreground. Tamper attacks are detected if the difference between the edge disappearance rate and its temporal average is larger than an adaptive threshold reflecting the environmental conditions of the cameras. The performance of the proposed algorithm is evaluated for short video sequences with three types of tamper attacks and for 24-h video sequences without tamper attacks; the algorithm is shown to achieve acceptable levels of detection and false alarm rates for all types of tamper attacks in real environments.",
"title": ""
},
{
"docid": "39ccd0efd846c2314da557b73a326e85",
"text": "We address the problem of recognizing situations in images. Given an image, the task is to predict the most salient verb (action), and fill its semantic roles such as who is performing the action, what is the source and target of the action, etc. Different verbs have different roles (e.g. attacking has weapon), and each role can take on many possible values (nouns). We propose a model based on Graph Neural Networks that allows us to efficiently capture joint dependencies between roles using neural networks defined on a graph. Experiments with different graph connectivities show that our approach that propagates information between roles significantly outperforms existing work, as well as multiple baselines. We obtain roughly 3-5% improvement over previous work in predicting the full situation. We also provide a thorough qualitative analysis of our model and influence of different roles in the verbs.",
"title": ""
},
{
"docid": "6af03ef289e32106ba737f2a23b11a4a",
"text": "Based on perceptual and computational attention modeling studies, we formulate measures of saliency for an audiovisual stream. Audio saliency is captured by signal modulations and related multi-frequency band features, extracted through nonlinear operators and energy tracking. Visual saliency is measured by means of a spatiotemporal attention model driven by various feature cues (intensity, color, motion). Audio and video curves are integrated in a single attention curve, where events may be enhanced, suppressed or vanished. The presence of salient events is signified on this audiovisual curve by geometrical features such as local extrema, sharp transition points and level sets. An audiovisual saliency-based movie summarization algorithm is proposed and evaluated. The algorithm is shown to perform very well in terms of summary informativeness and enjoyability for movie clips of various genres.",
"title": ""
}
] |
scidocsrr
|
643ddb76194257af654dbf50a3792357
|
OBJECTIVE VIDEO QUALITY ASSESSMENT
|
[
{
"docid": "6d0f0c11710945f49cc319b25aa5e9d2",
"text": "A computational approach for analyzing visible textures is described. Textures are modeled as irradiance patterns containing a limited range of spatial frequencies, where mutually distinct textures differ significantly in their dominant characterizing frequencies. By encoding images into multiple narrow spatial frequency and orientation channels, the slowly-varying channel envelopes (amplitude and phase) are used to segregate textural regions of different spatial frequency, orientation, or phase characteristics. Thus, an interpretation of image texture as a region code, or currier of region information, is",
"title": ""
}
] |
[
{
"docid": "65f4b9a23983e3416014167e52cdf064",
"text": "A soft-switching bidirectional dc-dc converter (BDC) with a coupled-inductor and a voltage doubler cell is proposed for high step-up/step-down voltage conversion applications. A dual-active half-bridge (DAHB) converter is integrated into a conventional buck-boost BDC to extend the voltage gain dramatically and decrease switch voltage stresses effectively. The coupled inductor operates not only as a filter inductor of the buck-boost BDC, but also as a transformer of the DAHB converter. The input voltage of the DAHB converter is shared with the output of the buck-boost BDC. So, PWM control can be adopted to the buck-boost BDC to ensure that the voltage on the two sides of the DAHB converter is always matched. As a result, the circulating current and conduction losses can be lowered to improve efficiency. Phase-shift control is adopted to the DAHB converter to regulate the power flows of the proposed BDC. Moreover, zero-voltage switching (ZVS) is achieved for all the active switches to reduce the switching losses. The operational principles and characteristics of the proposed BDC are presented in detail. The analysis and performance have been fully validated experimentally on a 40-60 V/400 V 1-kW hardware prototype.",
"title": ""
},
{
"docid": "e93f4f5c5828a7e82819964bbd29f8d4",
"text": "BACKGROUND\nAlthough hyaluronic acid (HA) specifications such as molecular weight and particle size are fairly well characterized, little information about HA ultrastructural and morphologic characteristics has been reported in clinical literature.\n\n\nOBJECTIVE\nTo examine uniformity of HA structure, the effects of extrusion, and lidocaine dilution of 3 commercially available HA soft-tissue fillers.\n\n\nMATERIALS AND METHODS\nUsing scanning electron microscopy and energy-dispersive x-ray analysis, investigators examined the soft-tissue fillers at various magnifications for ultrastructural detail and elemental distributions.\n\n\nRESULTS\nAll HAs contained oxygen, carbon, and sodium, but with uneven distributions. Irregular particulate matter was present in RES but BEL and JUV were largely particle free. Spacing was more uniform in BEL than JUV and JUV was more uniform than RES. Lidocaine had no apparent effect on morphology; extrusion through a 30-G needle had no effect on ultrastructure.\n\n\nCONCLUSION\nDescriptions of the ultrastructural compositions and nature of BEL, JUV, and RES are helpful for matching the areas to be treated with the HA soft-tissue filler architecture. Lidocaine and extrusion through a 30-G needle exerted no influence on HA structure. Belotero Balance shows consistency throughout the syringe and across manufactured lots.",
"title": ""
},
{
"docid": "f044e06469bfe2bf362d04b69aa52344",
"text": "5G network is anticipated to meet the challenging requirements of mobile traffic in the 2020's, which are characterized by super high data rate, low latency, high mobility, high energy efficiency, and high traffic density. This paper provides an overview of China Mobile's 5G vision and potential solutions. Targeting a paradigm shift to user-centric network operation from the traditional cell-centric operation, 5G radio access network (RAN) design considerations are presented, including RAN restructure, Turbo charged edge, core network (CN) and RAN function repartition, and network slice as a service. Adaptive multiple connections in the user-centric operation is further investigated, where the decoupled downlink and uplink, decoupled control and data, and adaptive multiple connections provide sufficient means to achieve a 5G network with “no more cells.” Software-defined air interface (SDAI) is presented under a unified framework, in which the frame structure, waveform, multiple access, duplex mode, and antenna configuration can be adaptively configured. New paradigm of 5G network featuring user-centric network (UCN) and SDAI is needed to meet the diverse yet extremely stringent requirements across the broad scope of 5G scenarios.",
"title": ""
},
{
"docid": "fb7f079d104e81db41b01afe67cdf3b0",
"text": "In this paper, we address natural human-robot interaction (HRI) in a smart assisted living (SAIL) system for the elderly and the disabled. Two common HRI problems are studied: hand gesture recognition and daily activity recognition. For hand gesture recognition, we implemented a neural network for gesture spotting and a hierarchical hidden Markov model for context-based recognition. For daily activity recognition, a multisensor fusion scheme is developed to process motion data collected from the foot and the waist of a human subject. Experiments using a prototype wearable sensor system show the effectiveness and accuracy of our algorithms.",
"title": ""
},
{
"docid": "01cf7cb5dd78d5f7754e1c31da9a9eb9",
"text": "Today ́s Electronic Industry is changing at a high pace. The root causes are manifold. So world population is growing up to eight billions and gives new challenges in terms of urbanization, mobility and connectivity. Consequently, there will raise up a lot of new business models for the electronic industry. Connectivity will take a large influence on our lives. Concepts like Industry 4.0, internet of things, M2M communication, smart homes or communication in or to cars are growing up. All these applications are based on the same demanding requirement – a high amount of data and increased data transfer rate. These arguments bring up large challenges to the Printed Circuit Board (PCB) design and manufacturing. This paper investigates the impact of different PCB manufacturing technologies and their relation to their high frequency behavior. In the course of the paper a brief overview of PCB manufacturing capabilities is be presented. Moreover, signal losses in terms of frequency, design, manufacturing processes, and substrate materials are investigated. The aim of this paper is, to develop a concept to use materials in combination with optimized PCB manufacturing processes, which allows a significant reduction of losses and increased signal quality. First analysis demonstrate, that for increased signal frequency, demanded by growing data transfer rate, the capabilities to manufacture high frequency PCBs become a key factor in terms of losses. Base materials with particularly high speed properties like very low dielectric constants are used for efficient design of high speed data link lines. Furthermore, copper foils with very low treatment are to be used to minimize loss caused by the skin effect. In addition to the materials composition, the design of high speed circuits is optimized with the help of comprehensive simulations studies. The work on this paper focuses on requirements and main questions arising during the PCB manufacturing process in order to improve the system in terms of losses. For that matter, there are several approaches that can be used. For example, the optimization of the structuring process, the use of efficient interconnection capabilities, and dedicated surface finishing can be used to reduce losses and preserve signal integrity. In this study, a comparison of different PCB manufacturing processes by using measurement results of demonstrators that imitate real PCB applications will be discussed. Special attention has be drawn to the manufacturing capabilities which are optimized for high frequency requirements and focused to avoid signal loss. Different line structures like microstrip lines, coplanar waveguides, and surface integrated waveguides are used for this assessment. This research was carried out by Austria Technologie & Systemtechnik AG (AT&S AG), in cooperation with Vienna University of Technology, Institute of Electrodynamics, Microwave and Circuit Engineering. Introduction Several commercially available PCB fabrication processes exist for manufacturing PCBs. In this paper two methods, pattern plating and panel plating, were utilized for manufacturing the test samples. The first step in both described manufacturing processes is drilling, which allows connections in between different copper layers. The second step for pattern plating (see figure 1) is the flash copper plating process, wherein only a thin copper skin (flash copper) is plated into the drilled holes and over the entire surface. 
On top of the plated copper a layer of photosensitive etch resist is laminated, which is subsequently imaged by ultraviolet (UV) light with a negative film. Negative film imaging exposes the gaps in between the traces to the UV light. In the developing process the non-exposed dry film is removed with a sodium solution. After that, the whole surrounding space is plated with copper and is eventually covered by tin. The tin layer protects the actual circuit pattern during etching. The pattern plating process typically shows a smaller line width tolerance, compared to panel plating, because of a lower copper thickness before etching. The overall process tolerance for narrow dimensions in the order of several tenths of μm is approximately ± 10%.",
"title": ""
},
{
"docid": "04d9bc52997688b48e70e91a43a145ef",
"text": "Post-weaning social isolation (PSI) has been shown to increase aggressive behavior and alter medial prefrontal cortex (mPFC) function in social species such as rats. Here we developed a novel escapable social interaction test (ESIT) allowing for the quantification of escape and social behaviors in addition to mPFC activation in response to an aggressive or nonaggressive stimulus rat. Male rats were exposed to 3 weeks of PSI (ISO) or group (GRP) housing, and exposed to 3 trials, with either no trial, all trials, or the last trial only with a stimulus rat. Analysis of social behaviors indicated that ISO rats spent less time in the escape chamber and more time engaged in social interaction, aggressive grooming, and boxing than did GRP rats. Interestingly, during the third trial all rats engaged in more of the quantified social behaviors and spent less time escaping in response to aggressive but not nonaggressive stimulus rats. Rats exposed to nonaggressive stimulus rats on the third trial had greater c-fos and ARC immunoreactivity in the mPFC than those exposed to an aggressive stimulus rat. Conversely, a social encounter produced an increase in large PSD-95 punctae in the mPFC independently of trial number, but only in ISO rats exposed to an aggressive stimulus rat. The results presented here demonstrate that PSI increases interaction time and aggressive behaviors during escapable social interaction, and that the aggressiveness of the stimulus rat in a social encounter is an important component of behavioral and neural outcomes for both isolation and group-reared rats.",
"title": ""
},
{
"docid": "4ee641270c1679675a7b563245f41f73",
"text": "MLC STT-MRAM (Multi-level Cell Spin-Transfer Torque Magnetic RAM), an emerging non-volatile memory technology, has become a promising candidate to construct L2 caches for high-end embedded processors. However, the long write latency limits the effectiveness of MLC STT-MRAM based L2 caches. In this paper, we address this limitation with two novel designs: Line Pairing (LP) and Line Swapping (LS). LP forms fast cachelines by re-organizing MLC soft bits which are faster to write. LS dynamically stores frequently written data into these fast cachelines. Our experimental results show that LP and LS improve system performance by 15% and reduce energy consumption by 21%.",
"title": ""
},
{
"docid": "881b8f167ea9d9d943a48a9d3f6c1264",
"text": "This paper presents an application of recurrent networks for phone probability estimation in large vocabulary speech recognition. The need for efficient exploitation of context information is discussed; a role for which the recurrent net appears suitable. An overview of early developments of recurrent nets for phone recognition is given along with the more recent improvements that include their integration with Markov models. Recognition results are presented for the DARPA TIMIT and Resource Management tasks, and it is concluded that recurrent nets are competitive with traditional means for performing phone probability estimation.",
"title": ""
},
{
"docid": "252f7393393a7ef16eda8388d601ef00",
"text": "In computer vision, moving object detection and tracking methods are the most important preliminary steps for higher-level video analysis applications. In this frame, background subtraction (BS) method is a well-known method in video processing and it is based on frame differencing. The basic idea is to subtract the current frame from a background image and to classify each pixel either as foreground or background by comparing the difference with a threshold. Therefore, the moving object is detected and tracked by using frame differencing and by learning an updated background model. In addition, simulated annealing (SA) is an optimization technique for soft computing in the artificial intelligence area. The p-median problem is a basic model of discrete location theory of operational research (OR) area. It is a NP-hard combinatorial optimization problem. The main aim in the p-median problem is to find p number facility locations, minimize the total weighted distance between demand points (nodes) and the closest facilities to demand points. The SA method is used to solve the p-median problem as a probabilistic metaheuristic. In this paper, an SA-based hybrid method called entropy-based SA (EbSA) is developed for performance optimization of BS, which is used to detect and track object(s) in videos. The SA modification to the BS method (SA–BS) is proposed in this study to determine the optimal threshold for the foreground-background (i.e., bi-level) segmentation and to learn background model for object detection. At these segmentation and learning stages, all of the optimization problems considered in this study are taken as p-median problems. Performances of SA–BS and regular BS methods are measured using four videoclips. Therefore, these results are evaluated quantitatively as the overall results of the given method. The obtained performance results and statistical analysis (i.e., Wilcoxon median test) show that our proposed method is more preferable than regular BS method. Meanwhile, the contribution of this",
"title": ""
},
{
"docid": "933312292c64c916e69357c5aec42189",
"text": "Augmented reality annotations and virtual scene navigation add new dimensions to remote collaboration. In this paper, we present a touchscreen interface for creating freehand drawings as world-stabilized annotations and for virtually navigating a scene reconstructed live in 3D, all in the context of live remote collaboration. Two main focuses of this work are (1) automatically inferring depth for 2D drawings in 3D space, for which we evaluate four possible alternatives, and (2) gesture-based virtual navigation designed specifically to incorporate constraints arising from partially modeled remote scenes. We evaluate these elements via qualitative user studies, which in addition provide insights regarding the design of individual visual feedback elements and the need to visualize the direction of drawings.",
"title": ""
},
{
"docid": "e9f05136c60328f8b87cf51621c93a4b",
"text": "Accurate and timely detection of weeds between and within crop rows in the early growth stage is considered one of the main challenges in site-specific weed management (SSWM). In this context, a robust and innovative automatic object-based image analysis (OBIA) algorithm was developed on Unmanned Aerial Vehicle (UAV) images to design early post-emergence prescription maps. This novel algorithm makes the major contribution. The OBIA algorithm combined Digital Surface Models (DSMs), orthomosaics and machine learning techniques (Random Forest, RF). OBIA-based plant heights were accurately estimated and used as a feature in the automatic sample selection by the RF classifier; this was the second research contribution. RF randomly selected a class balanced training set, obtained the optimum features values and classified the image, requiring no manual training, making this procedure time-efficient and more accurate, since it removes errors due to a subjective manual task. The ability to discriminate weeds was significantly affected by the imagery spatial resolution and weed density, making the use of higher spatial resolution images more suitable. Finally, prescription maps for in-season post-emergence SSWM were created based on the weed maps—the third research contribution—which could help farmers in decision-making to optimize crop management by rationalization of the herbicide application. The short time involved in the process (image capture and analysis) would allow timely weed control during critical periods, crucial for preventing yield loss.",
"title": ""
},
{
"docid": "4cd8a9f4dbe713be59b540968b5114f7",
"text": "ConvNets and ImageNet have driven the recent success of deep learning for image classification. However, the marked slowdown in performance improvement combined with the lack of robustness of neural networks to adversarial examples and their tendency to exhibit undesirable biases question the reliability of these methods. This work investigates these questions from the perspective of the end-user by using human subject studies and explanations. The contribution of this study is threefold. We first experimentally demonstrate that the accuracy and robustness of ConvNets measured on ImageNet are vastly underestimated. Next, we show that explanations can mitigate the impact of misclassified adversarial examples from the perspective of the end-user. We finally introduce a novel tool for uncovering the undesirable biases learned by a model. These contributions also show that explanations are a valuable tool both for improving our understanding of ConvNets’ predictions and for designing more reliable models.",
"title": ""
},
{
"docid": "b1599614c7d91462d05d35808d7e2983",
"text": "Hyponatremia and hypernatremia are complex clinical problems that occur frequently in full term newborns and in preterm infants admitted to the Neonatal Intensive Care Unit (NICU) although their real frequency and etiology are incompletely known. Pathogenetic mechanisms and clinical timing of hypo-hypernatremia are well known in adult people whereas in the newborn is less clear how and when hypo-hypernatremia could alter cerebral osmotic equilibrium and after how long time brain cells adapt themselves to the new hypo-hypertonic environment. Aim of this review is to present a practical approach and management of hypo-hypernatremia in newborns, especially in preterms.",
"title": ""
},
{
"docid": "05941fa5fe1d7728d9bce44f524ff17f",
"text": "legend N2D N1D 2LPEG N2D vs. 2LPEG N1D vs. 2LPEG EFFICACY Primary analysis set, n1⁄4 275 Primary analysis set, n1⁄4 275 Primary analysis set, n1⁄4 272 Primary endpoint: Patients with successful overall bowel cleansing efficacy (HCS) [n] 253 (92.0%) 245 (89.1%) 238 (87.5%) -4.00%* [0.055] -6.91%* [0.328] Supportive secondary endpoint: Patients with successful overall bowel cleansing efficacy (BBPS) [n] 249 (90.5%) 243 (88.4%) 232 (85.3%) n.a. n.a. Primary endpoint: Excellent plus Good cleansing rate in colon ascendens (primary analysis set) [n] 87 (31.6%) 93 (33.8%) 41 (15.1%) 8.11%* [50.001] 10.32%* [50.001] Key secondary endpoint: Adenoma detection rate, colon ascendens 11.6% 11.6% 8.1% -4.80%; 12.00%** [0.106] -4.80%; 12.00%** [0.106] Key secondary endpoint: Adenoma detection rate, overall colon 26.6% 27.6% 26.8% -8.47%; 8.02%** [0.569] -7.65%; 9.11%** [0.455] Key secondary endpoint: Polyp detection rate, colon ascendens 23.3% 18.6% 16.2% -1.41%; 15.47%** [0.024] -6.12%; 10.82%** [0.268] Key secondary endpoint: Polyp detection rate, overall colon 44.0% 45.1% 44.5% -8.85%; 8.00%** [0.579] –7.78%; 9.09%** [0.478] Compliance rates (min 75% of both doses taken) [n] 235 (85.5%) 233 (84.7%) 245 (90.1%) n.a. n.a. SAFETY Safety set, n1⁄4 262 Safety set, n1⁄4 269 Safety set, n1⁄4 263 All treatment-emergent adverse events [n] 77 89 53 n.a. n.a. Patients with any related treatment-emergent adverse event [n] 30 (11.5%) 40 (14.9%) 20 (7.6%) n.a. n.a. *1⁄4 97.5% 1-sided CI; **1⁄4 95% 2-sided CI; n.a.1⁄4 not applicable. United European Gastroenterology Journal 4(5S) A219",
"title": ""
},
{
"docid": "dfe82129fd128cc2e42f9ed8b3efc9c7",
"text": "In this paper we present a new lossless image compression algorithm. To achieve the high compression speed we use a linear prediction, modified Golomb–Rice code family, and a very fast prediction error modeling method. We compare the algorithm experimentally with others for medical and natural continuous tone grayscale images of depths of up to 16 bits. Its results are especially good for big images, for natural images of high bit depths, and for noisy images. The average compression speed on Intel Xeon 3.06 GHz CPU is 47 MB/s. For big images the speed is over 60MB/s, i.e., the algorithm needs less than 50 CPU cycles per byte of image.",
"title": ""
},
{
"docid": "d9471b93ddb5cedfeebd514f9ed6f9af",
"text": "Anomaly detection is the process of identifying unexpected items or events in datasets, which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied on unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection, fraud detection as well as in the life science and medical domain. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to be a new well-funded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides the anomaly detection performance, computational effort, the impact of parameter settings as well as the global/local anomaly detection behavior is outlined. As a conclusion, we give an advise on algorithm selection for typical real-world tasks.",
"title": ""
},
{
"docid": "44618874fe7725890fbfe9fecde65853",
"text": "Software development teams in large scale offshore enterprise development programmes are often under intense pressure to deliver high quality software within challenging time contraints. Project failures can attract adverse publicity and damage corporate reputations. Agile methods have been advocated to reduce project risks, improving both productivity and product quality. This article uses practitioner descriptions of agile method tailoring to explore large scale offshore enterprise development programmes with a focus on product owner role tailoring, where the product owner identifies and prioritises customer requirements. In globalised projects, the product owner must reconcile competing business interests, whilst generating and then prioritising large numbers of requirements for numerous development teams. The study comprises eight international companies, based in London, Bangalore and Delhi. Interviews with 46 practitioners were conducted between February 2010 and May 2012. Grounded theory was used to identify that product owners form into teams. The main contribution of this research is to describe the nine product owner team functions identified: groom, prioritiser, release master, technical architect, governor, communicator, traveller, intermediary and risk assessor. These product owner functions arbitrate between conflicting customer requirements, approve release schedules, disseminate architectural design decisions, provide technical governance and propogate information across teams. The functions identified in this research are mapped to a scrum of scrums process, and a taxonomy of the functions shows how focusing on either decision-making or information dissemination in each helps to tailor agile methods to large scale offshore enterprise development programmes.",
"title": ""
},
{
"docid": "422183692a08138189271d4d7af407c7",
"text": "Scene flow describes the motion of 3D objects in real world and potentially could be the basis of a good feature for 3D action recognition. However, its use for action recognition, especially in the context of convolutional neural networks (ConvNets), has not been previously studied. In this paper, we propose the extraction and use of scene flow for action recognition from RGB-D data. Previous works have considered the depth and RGB modalities as separate channels and extract features for later fusion. We take a different approach and consider the modalities as one entity, thus allowing feature extraction for action recognition at the beginning. Two key questions about the use of scene flow for action recognition are addressed: how to organize the scene flow vectors and how to represent the long term dynamics of videos based on scene flow. In order to calculate the scene flow correctly on the available datasets, we propose an effective self-calibration method to align the RGB and depth data spatially without knowledge of the camera parameters. Based on the scene flow vectors, we propose a new representation, namely, Scene Flow to Action Map (SFAM), that describes several long term spatio-temporal dynamics for action recognition. We adopt a channel transform kernel to transform the scene flow vectors to an optimal color space analogous to RGB. This transformation takes better advantage of the trained ConvNets models over ImageNet. Experimental results indicate that this new representation can surpass the performance of state-of-the-art methods on two large public datasets.",
"title": ""
}
] |
scidocsrr
|
288bb9b51e2d6cf4ee6c7fbcffc650e8
|
Research Note - Gamification of Technology-Mediated Training: Not All Competitions Are the Same
|
[
{
"docid": "f4641f1aa8c2553bb41e55973be19811",
"text": "this paper focuses on employees’ e-learning processes during online job training. A new categorization of self-regulated learning strategies, that is, personal versus social learning strategies, is proposed, and measurement scales are developed. the new measures were tested using data collected from employees in a large company. Our approach provides context-relevant insights into online training providers and employees themselves. the results suggest that learners adopt different self-regulated learning strategies resulting in different e-learning outcomes. Furthermore, the use of self-regulated learning strategies is influenced by individual factors such as virtual competence and goal orientation, and job and contextual factors such as intellectual demand and cooperative norms. the findings can (1) help e-learners obtain better learning outcomes through their active use of varied learning strategies, (2) provide useful information for organizations that are currently using or plan to use e-learning 308 WAN, COMPEAu, AND hAggErty for training, and (3) inform software designers to integrate self-regulated learning strategy support in e-learning system design and development. Key WorDs anD phrases: e-learning, job training, learning outcomes, learning processes, self-regulated learning strategies, social cognitive theory. employee training has beCome an effeCtive Way to enhance organizational productivity. It is even more important today given the fast-changing nature of current work practices. research has shown that 50 percent of all employee skills become outdated within three to five years [67]. the cycle is even shorter for information technology (It) professionals because of the high rate of technology innovation. On the one hand, this phenomenon requires organizations to focus more on building internal capabilities by providing different kinds of job preparation and training. On the other hand, it suggests that a growing number of employees are seeking learning opportunities to regularly upgrade their skills and competencies. Consequently, demand is growing for ongoing research to determine optimal training approaches with real performance impact. unlike traditional courses provided by educational institutions that are focused on fundamental and relatively stable knowledge, corporate training programs must be developed within short time frames because their content quickly becomes outdated. Furthermore, for many large organizations, especially multinationals with constantly growing and changing global workforces, the management of training and learning has become increasingly complex. Difficulties arise due to the wide range of courses, the high volume of course materials, the coordination of training among distributed work locations with the potential for duplicated training services, the need to satisfy varied individual learning requests and competency levels, and above all, the need to contain costs while deriving value from training expenditures. the development of information systems (IS) has contributed immensely to solving workplace training problems. E-learning has emerged as a cost-effective way to deliver training at convenient times to a large number of employees in different locations. E-learning, defined as a virtual learning environment in which learners’ interactions with learning materials, peers, and instructors are mediated through Its, has become the fastest-growing form of education [4]. 
the American Society for training and Development found that even with the challenges of the recent economic crisis, u.S. organizations spent $134.07 billion on employee learning and development in 2008 [74], and earlier evidence suggested that close to 40 percent of training was delivered using e-learning technologies [73]. E-learning has been extended from its original application in It skill training to common business skill training, including management, leadership, communication, customer service, quality management, and human resource skills. Despite heavy investments in e-learning technologies, however, recent research suggests that organizations have not received the level of benefit from e-learning that was E-lEArNINg OutCOMES IN OrgANIZAtIONAl SEttINgS 309 originally anticipated [62]. One credible explanation has emerged from educational psychology showing that learners are neither motivated nor well prepared for the new e-learning environment [14]. Early IS research on e-learning focused on the technology design aspects of e-learning but has subsequently broadened to include all aspects of e-learning inputs (participant characteristics, technology design, instructional strategies), processes (psychological processes, learning behaviors), and outcomes (learning outcomes) [4, 55, 76]. however, less IS research has focused on the psychological processes users engage in that improve or limit their e-learning outcomes [76]. In this research, we contribute to the understanding of e-learning processes by bridging two bodies of literature, that is, self-regulated learning (Srl) in educational psychology and e-learning in IS research. More specifically, we focus on two research questions: RQ1: How do learners’ different e‐learning processes (e.g., using different SRL strategies) influence their learning outcomes? RQ2: How is a learner’s use of SRL strategies influenced by individual and con‐ textual factors salient within a business context? to address the first question, we extend prior research on Srl and propose a new conceptualization that distinguishes two types of Srl strategies: personal Srl strategies, such as self‐evaluation and goal setting and planning, for managing personally directed forms of learning; and social Srl strategies, such as seeking peer assistance and social comparison, for managing social-oriented forms of learning. Prior research (e.g., [64, 88]) suggests that the use of Srl strategies in general can improve learning outcomes. We propose to explore, describe, and measure a new type of Srl strategy—social Srl strategy—and to determine if it has an equally important influence on learning outcomes as the more widely studied personal Srl strategy. We theorize that both types of Srl strategies are influential during the learning process and expect they have different effects on e-learning outcomes. to examine the role of Srl strategies in e-learning, we situated the new constructs in a nomological network based on prior research [76]. this led to our second research question, which also deals more specifically with e-learning in business organizations. While research conducted in educational institutions can definitely inform business training practices, differences in the business context such as job requirements and competitive pressures may affect e-learning outcomes. From prior research we selected four antecedent factors that we hypothesize to be important influences on individual use of Srl strategies (both personal and our newly proposed social strategies). 
the first two are individual factors. learners’ goal orientation refers to the individual’s framing of the activity as either a performance or a mastery activity, where the former is associated with flawless performance and the latter is associated with developing capability [28]. Virtual competence, the second factor, reflects the individual’s capability to function in a virtual environment [78]. We also include two contextual factors that are particularly applicable to organizational settings: the intellectual demands of learners’ jobs and the group norms perceived by learners about cooperation among work group members. 310 WAN, COMPEAu, AND hAggErty In summary, this study contributes to e-learning research by focusing on adult learners’ Srl processes in job training contexts. It expands the nomological network of e-learning by identifying and elaborating social Srl strategy as an additional form of Srl strategy that is distinct from personal Srl strategy. We further test how different types of Srl strategies applied by learners during the e-learning process affect three types of e-learning outcomes. Our results suggest that learners using different Srl strategies achieve different learning outcomes and learners’ attributes and contextual factors do matter. theoretical background Social Cognitive theory and Self-regulation learning is the proCess of aCquiring, enhanCing, or moDifying an individual’s knowledge, skills, and values [39]. In this study, we apply social cognitive theory to investigate e-learning processes in organizational settings. Self-regulation is a distinctive feature of social cognitive theory and plays a central role in the theory’s application [56]. It refers to a set of principles and practices by which people monitor their own behaviors and consciously adjust those behaviors in pursuit of personal goals [8]. Srl is thus a proactive way of learning in which people manage their own learning processes. research has shown that self-regulated learners (i.e., individuals who intentionally manage their learning processes) can learn better than non-selfregulated learners in traditional academic and organizational training settings because they view learning as a systematic and controllable process and are willing to take greater responsibility for their learning [30, 64, 88, 92, 93]. the definition of Srl as the degree to which individuals are metacognitively, motivationally, and behaviorally active participants in their own learning process is an integration of previous research on learning strategies, metacognitive monitoring, self-concept perceptions, volitional strategies, and self-control [86, 89]. According to this conceptualization, Srl is a combination of three subprocesses: metacognitive processes, which include planning and organizing during learning; motivational processes, which include self-evaluation and self-consequences at various stages; and behavioral processes, which include sele",
"title": ""
}
] |
[
{
"docid": "eea49870d2ddd24a42b8b245edbb1fc0",
"text": "In this paper, we propose a novel encoder-decoder neural network model referred to as DeepBinaryMask for video compressive sensing. In video compressive sensing one frame is acquired using a set of coded masks (sensing matrix) from which a number of video frames, equal to the number of coded masks, is reconstructed. The proposed framework is an endto-end model where the sensing matrix is trained along with the video reconstruction. The encoder maps a video block to compressive measurements by learning the binary elements of the sensing matrix. The decoder is trained to map the measurements from a video patch back to a video block via several hidden layers of a Multi-Layer Perceptron network. The predicted video blocks are stacked together to recover the unknown video sequence. The reconstruction performance is found to improve when using the trained sensing mask from the network as compared to other mask designs such as random, across a wide variety of compressive sensing reconstruction algorithms. Finally, our analysis and discussion offers insights into understanding the characteristics of the trained mask designs that lead to the improved reconstruction quality.",
"title": ""
},
{
"docid": "e769f52b6e10ea1cf218deb8c95f4803",
"text": "To facilitate the task of reading and searching information, it became necessary to find a way to reduce the size of documents without affecting the content. The solution is in Automatic text summarization system, it allows, from an input text to produce another smaller and more condensed without losing relevant data and meaning conveyed by the original text. The research works carried out on this area have experienced lately strong progress especially in English language. However, researches in Arabic text summarization are very few and are still in their beginning. In this paper we expose a literature review of recent techniques and works on automatic text summarization field research, and then we focus our discussion on some works concerning automatic text summarization in some languages. We will discuss also some of the main problems that affect the quality of automatic text summarization systems. © 2015 AESS Publications. All Rights Reserved.",
"title": ""
},
{
"docid": "22a5c41441519d259d3be70a9413f1f5",
"text": "In this paper, a 3-degrees-of-freedom parallel manipulator developed by Tsai and Stamper known as the Maryland manipulator is considered. In order to provide dynamic analysis, three different sequential trajectories are taken into account. Two different control approaches such as the classical proportional-integral-derivative (PID) and fractional-order PID control are used to improve the tracking performance of the examined manipulator. Parameters of the controllers are determined by using pattern search algorithm and mathematical methods for the classical PID and fractional-order PID controllers, respectively. Design procedures for both controllers are given in detail. Finally, the corresponding results are compared. Performance analysis for both of the proposed controllers is confirmed by simulation results. It is observed that not only transient but also steady-state error values have been reduced with the aid of the PIλDμ controller for tracking control purpose. According to the obtained results, the fractional-order PIλDμ controller is more powerful than the optimally tuned PID for the Maryland manipulator tracking control. The main contribution of this paper is to determine the control action with the aid of the fractional-order PI λDμ controller different from previously defined controller structures. The determination of correct and accurate control action has great importance when high speed, high acceleration, and high accuracy needed for the trajectory tracking control of parallel mechanisms present unique challenges.",
"title": ""
},
{
"docid": "1c058d6a648b2190500340f762eeff78",
"text": "An ever-increasing number of computer vision and image/video processing challenges are being approached using deep convolutional neural networks, obtaining state-of-the-art results in object recognition and detection, semantic segmentation, action recognition, optical flow, and super resolution. Hardware acceleration of these algorithms is essential to adopt these improvements in embedded and mobile computer vision systems. We present a new architecture, design, and implementation, as well as the first reported silicon measurements of such an accelerator, outperforming previous work in terms of power, area, and I/O efficiency. The manufactured device provides up to 196 GOp/s on 3.09 $\\text {mm}^{2}$ of silicon in UMC 65-nm technology and can achieve a power efficiency of 803 GOp/s/W. The massively reduced bandwidth requirements make it the first architecture scalable to TOp/s performance.",
"title": ""
},
{
"docid": "9a842e6c42c1fdd6af3885370d50005f",
"text": "Text classification is a fundamental problem in natural language processing. As a popular deep learning model, convolutional neural network(CNN) has demonstrated great success in this task. However, most existing CNN models apply convolution filters of fixed window size, thereby unable to learn variable n-gram features flexibly. In this paper, we present a densely connected CNN with multi-scale feature attention for text classification. The dense connections build short-cut paths between upstream and downstream convolutional blocks, which enable the model to compose features of larger scale from those of smaller scale, and thus produce variable n-gram features. Furthermore, a multi-scale feature attention is developed to adaptively select multi-scale features for classification. Extensive experiments demonstrate that our model obtains competitive performance against state-of-the-art baselines on six benchmark datasets. Attention visualization further reveals the model’s ability to select proper n-gram features for text classification. Our code is available at: https://github.com/wangshy31/DenselyConnected-CNN-with-Multiscale-FeatureAttention.git.",
"title": ""
},
{
"docid": "db9887ea5f96cd4439ca95ad3419407c",
"text": "Light-field cameras have now become available in both consumer and industrial applications, and recent papers have demonstrated practical algorithms for depth recovery from a passive single-shot capture. However, current light-field depth estimation methods are designed for Lambertian objects and fail or degrade for glossy or specular surfaces. The standard Lambertian photo-consistency measure considers the variance of different views, effectively enforcing point-consistency, i.e., that all views map to the same point in RGB space. This variance or point-consistency condition is a poor metric for glossy surfaces. In this paper, we present a novel theory of the relationship between light-field data and reflectance from the dichromatic model. We present a physically-based and practical method to estimate the light source color and separate specularity. We present a new photo consistency metric, line-consistency, which represents how viewpoint changes affect specular points. We then show how the new metric can be used in combination with the standard Lambertian variance or point-consistency measure to give us results that are robust against scenes with glossy surfaces. With our analysis, we can also robustly estimate multiple light source colors and remove the specular component from glossy objects. We show that our method outperforms current state-of-the-art specular removal and depth estimation algorithms in multiple real world scenarios using the consumer Lytro and Lytro Illum light field cameras.",
"title": ""
},
{
"docid": "26c259c7b6964483d13a85938a11cf53",
"text": "In Natural Language Processing (NLP), research results from software engineering and software technology have often been neglected. This paper describes some factors that add complexity to the task of engineering reusable NLP systems (beyond conventional software systems). Current work in the area of design patterns and composition languages is described and claimed relevant for natural language processing. The benefits of NLP componentware and barriers to reuse are outlined, and the dichotomies “system versus experiment” and “toolkit versus framework” are discussed. It is argued that in order to live up to its name language engineering must not neglect component quality and architectural evaluation when reporting new NLP research.",
"title": ""
},
{
"docid": "ef1f5eaa9c6f38bbe791e512a7d89dab",
"text": "Lexical-semantic verb classifications have proved useful in supporting various natural language processing (NLP) tasks. The largest and the most widely deployed classification in English is Levin’s (1993) taxonomy of verbs and their classes. While this resource is attractive in being extensive enough for some NLP use, it is not comprehensive. In this paper, we present a substantial extension to Levin’s taxonomy which incorporates 57 novel classes for verbs not covered (comprehensively) by Levin. We also introduce 106 novel diathesis alternations, created as a side product of constructing the new classes. We demonstrate the utility of our novel classes by using them to support automatic subcategorization acquisition and show that the resulting extended classification has extensive coverage over the English verb lexicon.",
"title": ""
},
{
"docid": "7cff04976bf78c5d8a1b4338b2107482",
"text": "Classifiers trained on given databases perform poorly when tested on data acquired in different settings. This is explained in domain adaptation through a shift among distributions of the source and target domains. Attempts to align them have traditionally resulted in works reducing the domain shift by introducing appropriate loss terms, measuring the discrepancies between source and target distributions, in the objective function. Here we take a different route, proposing to align the learned representations by embedding in any given network specific Domain Alignment Layers, designed to match the source and target feature distributions to a reference one. Opposite to previous works which define a priori in which layers adaptation should be performed, our method is able to automatically learn the degree of feature alignment required at different levels of the deep network. Thorough experiments on different public benchmarks, in the unsupervised setting, confirm the power of our approach.",
"title": ""
},
{
"docid": "13db8fe917d303f942fcfb544440ec24",
"text": "In many types of information systems, users face an implicit tradeoff between disclosing personal information and receiving benefits, such as discounts by an electronic commerce service that requires users to divulge some personal information. While these benefits are relatively measurable, the value of privacy involved in disclosing the information is much less tangible, making it hard to design and evaluate information systems that manage personal information. Meanwhile, existing methods to assess and measure the value of privacy, such as self-reported questionnaires, are notoriously unrelated of real eworld behavior. To overcome this obstacle, we propose a methodology called VOPE (Value of Privacy Estimator), which relies on behavioral economics' Prospect Theory (Kahneman & Tversky, 1979) and valuates people's privacy preferences in information disclosure scenarios. VOPE is based on an iterative and responsive methodology in which users take or leave a transaction that includes a component of information disclosure. To evaluate the method, we conduct an empirical experiment (n 1⁄4 195), estimating people's privacy valuations in electronic commerce transactions. We report on the convergence of estimations and validate our results by comparing the values to theoretical projections of existing results (Tsai, Egelman, Cranor, & Acquisti, 2011), and to another independent experiment that required participants to rank the sensitivity of information disclosure transactions. Finally, we discuss how information systems designers and regulators can use VOPE to create and to oversee systems that balance privacy and utility. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "92008a84a80924ec8c0ad1538da2e893",
"text": "Large-scale deep learning requires huge computational resources to train a multi-layer neural network. Recent systems propose using 100s to 1000s of machines to train networks with tens of layers and billions of connections. While the computation involved can be done more efficiently on GPUs than on more traditional CPU cores, training such networks on a single GPU is too slow and training on distributed GPUs can be inefficient, due to data movement overheads, GPU stalls, and limited GPU memory. This paper describes a new parameter server, called GeePS, that supports scalable deep learning across GPUs distributed among multiple machines, overcoming these obstacles. We show that GeePS enables a state-of-the-art single-node GPU implementation to scale well, such as to 13 times the number of training images processed per second on 16 machines (relative to the original optimized single-node code). Moreover, GeePS achieves a higher training throughput with just four GPU machines than that a state-of-the-art CPU-only system achieves with 108 machines.",
"title": ""
},
{
"docid": "4f81901c2269cd4561dd04f59a04a473",
"text": "The advent of powerful acid-suppressive drugs, such as proton pump inhibitors (PPIs), has revolutionized the management of acid-related diseases and has minimized the role of surgery. The major and universally recognized indications for their use are represented by treatment of gastro-esophageal reflux disease, eradication of Helicobacter pylori infection in combination with antibiotics, therapy of H. pylori-negative peptic ulcers, healing and prophylaxis of non-steroidal anti-inflammatory drug-associated gastric ulcers and control of several acid hypersecretory conditions. However, in the last decade, we have witnessed an almost continuous growth of their use and this phenomenon cannot be only explained by the simple substitution of the previous H2-receptor antagonists, but also by an inappropriate prescription of these drugs. This endless increase of PPI utilization has created an important problem for many regulatory authorities in terms of increased costs and greater potential risk of adverse events. The main reasons for this overuse of PPIs are the prevention of gastro-duodenal ulcers in low-risk patients or the stress ulcer prophylaxis in non-intensive care units, steroid therapy alone, anticoagulant treatment without risk factors for gastro-duodenal injury, the overtreatment of functional dyspepsia and a wrong diagnosis of acid-related disorder. The cost for this inappropriate use of PPIs has become alarming and requires to be controlled. We believe that gastroenterologists together with the scientific societies and the regulatory authorities should plan educational initiatives to guide both primary care physicians and specialists to the correct use of PPIs in their daily clinical practice, according to the worldwide published guidelines.",
"title": ""
},
{
"docid": "f5f56d680fbecb94a08d9b8e5925228f",
"text": "Semantic word embeddings represent the meaning of a word via a vector, and are created by diverse methods. Many use nonlinear operations on co-occurrence statistics, and have hand-tuned hyperparameters and reweighting methods. This paper proposes a new generative model, a dynamic version of the log-linear topic model of Mnih and Hinton (2007). The methodological novelty is to use the prior to compute closed form expressions for word statistics. This provides a theoretical justification for nonlinear models like PMI, word2vec, and GloVe, as well as some hyperparameter choices. It also helps explain why low-dimensional semantic embeddings contain linear algebraic structure that allows solution of word analogies, as shown by Mikolov et al. (2013a) and many subsequent papers. Experimental support is provided for the generative model assumptions, the most important of which is that latent word vectors are fairly uniformly dispersed in space.",
"title": ""
},
{
"docid": "497d6e0bf6f582924745c7aa192579e7",
"text": "The versatility of humanoid robots in locomotion, full-body motion, interaction with unmodified human environments, and intuitive human-robot interaction led to increased research interest. Multiple smaller platforms are available for research, but these require a miniaturized environment to interact with–and often the small scale of the robot diminishes the influence of factors which would have affected larger robots. Unfortunately, many research platforms in the larger size range are less affordable, more difficult to operate, maintain and modify, and very often closed-source. In this work, we introduce NimbRo-OP2, an affordable, fully open-source platform in terms of both hardware and software. Being almost 135 cm tall and only 18 kg in weight, the robot is not only capable of interacting in an environment meant for humans, but also easy and safe to operate and does not require a gantry when doing so. The exoskeleton of the robot is 3D printed, which produces a lightweight and visually appealing design. We present all mechanical and electrical aspects of the robot, as well as some of the software features of our well-established open-source ROS software. The NimbRo-OP2 performed at RoboCup 2017 in Nagoya, Japan, where it won the Humanoid League AdultSize Soccer competition and Technical Challenge.",
"title": ""
},
{
"docid": "54af3c39dba9aafd5b638d284fd04345",
"text": "In this paper, Principal Component Analysis (PCA), Most Discriminant Features (MDF), and Regularized-Direct Linear Discriminant Analysis (RD-LDA) - based feature extraction approaches are tested and compared in an experimental personal recognition system. The system is multimodal and bases on features extracted from nine regions of an image of the palmar surface of the hand. For testing purposes 10 gray-scale images of right hand of 184 people were acquired. The experiments have shown that the best results are obtained with the RD-LDA - based features extraction approach (100% correctness for 920 identification tests and EER = 0.01% for 64170 verification tests).",
"title": ""
},
{
"docid": "318a4af201ed3563443dcbe89c90b6b4",
"text": "Clouds are distributed Internet-based platforms that provide highly resilient and scalable environments to be used by enterprises in a multitude of ways. Cloud computing offers enterprises technology innovation that business leaders and IT infrastructure managers can choose to apply based on how and to what extent it helps them fulfil their business requirements. It is crucial that all technical consultants have a rigorous understanding of the ramifications of cloud computing as its influence is likely to spread the complete IT landscape. Security is one of the major concerns that is of practical interest to decision makers when they are making critical strategic operational decisions. Distributed Denial of Service (DDoS) attacks are becoming more frequent and effective over the past few years, since the widely publicised DDoS attacks on the financial services industry that came to light in September and October 2012 and resurfaced in the past two years. In this paper, we introduce advanced cloud security technologies and practices as a series of concepts and technology architectures, from an industry-centric point of view. This is followed by classification of intrusion detection and prevention mechanisms that can be part of an overall strategy to help understand identify and mitigate potential DDoS attacks on business networks. The paper establishes solid coverage of security issues related to DDoS and virtualisation with a focus on structure, clarity, and well-defined blocks for mainstream cloud computing security solutions and platforms. In doing so, we aim to provide industry technologists, who may not be necessarily cloud or security experts, with an effective tool to help them understand the security implications associated with cloud adoption in their transition towards more knowledge-based systems. Keywords—Cloud Computing Security; Distributed Denial of Service; Intrusion Detection; Intrusion Prevention; Virtualisation",
"title": ""
},
{
"docid": "f1cfb30b328725121ed232381d43ac3a",
"text": "High-performance object detection relies on expensive convolutional networks to compute features, often leading to significant challenges in applications, e.g. those that require detecting objects from video streams in real time. The key to this problem is to trade accuracy for efficiency in an effective way, i.e. reducing the computing cost while maintaining competitive performance. To seek a good balance, previous efforts usually focus on optimizing the model architectures. This paper explores an alternative approach, that is, to reallocate the computation over a scale-time space. The basic idea is to perform expensive detection sparsely and propagate the results across both scales and time with substantially cheaper networks, by exploiting the strong correlations among them. Specifically, we present a unified framework that integrates detection, temporal propagation, and across-scale refinement on a Scale-Time Lattice. On this framework, one can explore various strategies to balance performance and cost. Taking advantage of this flexibility, we further develop an adaptive scheme with the detector invoked on demand and thus obtain improved tradeoff. On ImageNet VID dataset, the proposed method can achieve a competitive mAP 79.6% at 20 fps, or 79.0% at 62 fps as a performance/speed tradeoff.1",
"title": ""
},
{
"docid": "47faebfa7d65ebf277e57436cf7c2ca4",
"text": "Steganography is a method which can put data into a media without a tangible impact on the cover media. In addition, the hidden data can be extracted with minimal differences. In this paper, twodimensional discrete wavelet transform is used for steganography in 24-bit color images. This steganography is of blind type that has no need for original images to extract the secret image. In this algorithm, by the help of a structural similarity and a two-dimensional correlation coefficient, it is tried to select part of sub-band cover image instead of embedding location. These sub-bands are obtained by 3levels of applying the DWT. Also to increase the steganography resistance against cropping or insert visible watermark, two channels of color image is used simultaneously. In order to raise the security, an encryption algorithm based on Arnold transform was also added to the steganography operation. Because diversity of chaos scenarios is limited in Arnold transform, it could be improved by its mirror in order to increase the diversity of key. Additionally, an ability is added to encryption algorithm that can still maintain its efficiency against image crop. Transparency of steganography image is measured by the peak signalto-noise ratio that indicates the adequate transparency of steganography process. Extracted image similarity is also measured by two-dimensional correlation coefficient with more than 99% similarity. Moreover, steganography resistance against increasing and decreasing brightness and contrast, lossy compression, cropping image, changing scale and adding noise is acceptable",
"title": ""
},
{
"docid": "0edc89fbf770bbab2fb4d882a589c161",
"text": "A calculus is developed in this paper (Part I) and the sequel (Part 11) for obtaining bounds on delay and buffering requirements in a communication network operating in a packet switched mode under a fixed routing strategy. The theory we develop is different from traditional approaches to analyzing delay because the model we use to describe the entry of data into the network is nonprobabilistic: We suppose that the data stream entered intq the network by any given user satisfies “burstiness constraints.” A data stream is said to satisfy a burstiness constraint if the quantity of data from the stream contained in any interval of time is less than a value that depends on the length of the interval. Several network elements are defined that can be used as building blocks to model a wide variety of communication networks. Each type of network element is analyzed by assuming that the traffic entering it satisfies burstiness constraints. Under this assumption bounds are obtained on delay and buffering requirements for the network element, burstiness constraints satisfied by the traffic that exits the element are derived. Index Terms -Queueing networks, burstiness, flow control, packet switching, high speed networks.",
"title": ""
},
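The burstiness constraints referred to above are usually written as A(t) - A(s) <= sigma + rho * (t - s) for every interval [s, t] of the cumulative arrival process A. Below is a minimal sketch that checks a packet trace against such a (sigma, rho) pair; the trace and the parameter values are invented for illustration.

# Check a cumulative arrival trace against a (sigma, rho) burstiness constraint:
# the data arriving in any interval [s, t] must not exceed sigma + rho * (t - s).
def satisfies_burstiness(arrivals, sigma, rho):
    """arrivals: list of (time, bits) tuples, assumed sorted by time."""
    for i, (s, _) in enumerate(arrivals):
        data = 0.0
        for t, bits in arrivals[i:]:
            data += bits
            if data > sigma + rho * (t - s) + 1e-9:
                return False
    return True

trace = [(0.0, 400), (0.1, 400), (0.2, 400), (1.0, 400)]
print(satisfies_burstiness(trace, sigma=1200, rho=1000))   # True for this trace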
{
"docid": "8d7a7bc2b186d819b36a0a8a8ba70e39",
"text": "Recent stereo algorithms have achieved impressive results by modelling the disparity image as a Markov Random Field (MRF). An important component of an MRF-based approach is the inference algorithm used to find the most likely setting of each node in the MRF. Algorithms have been proposed which use Graph Cuts or Belief Propagation for inference. These stereo algorithms differ in both the inference algorithm used and the formulation of the MRF. It is unknown whether to attribute the responsibility for differences in performance to the MRF or the inference algorithm. We address this through controlled experiments by comparing the Belief Propagation algorithm and the Graph Cuts algorithm on the same MRF’s, which have been created for calculating stereo disparities. We find that the labellings produced by the two algorithms are comparable. The solutions produced by Graph Cuts have a lower energy than those produced with Belief Propagation, but this does not necessarily lead to increased performance relative to the ground-truth.",
"title": ""
}
] |
scidocsrr
|
b13e26646a7575a805f7e39f7e066122
|
Beyond Bitcoin: The Rise of Blockchain World
|
[
{
"docid": "ce9487df62f75872d7111a26972feca7",
"text": "In this chapter we provide an overview of the concept of blockchain technology and its potential to disrupt the world of banking through facilitating global money remittance, smart contracts, automated banking ledgers and digital assets. In this regard, we first provide a brief overview of the core aspects of this technology, as well as the second-generation contract-based developments. From there we discuss key issues that must be considered in developing such ledger based technologies in a banking context.",
"title": ""
}
] |
[
{
"docid": "a13a302e7e2fd5e09a054f1bf23f1702",
"text": "A number of machine learning (ML) techniques have recently been proposed to solve color constancy problem in computer vision. Neural networks (NNs) and support vector regression (SVR) in particular, have been shown to outperform many traditional color constancy algorithms. However, neither neural networks nor SVR were compared to simpler regression tools in those studies. In this article, we present results obtained with a linear technique known as ridge regression (RR) and show that it performs better than NNs, SVR, and gray world (GW) algorithm on the same dataset. We also perform uncertainty analysis for NNs, SVR, and RR using bootstrapping and show that ridge regression and SVR are more consistent than neural networks. The shorter training time and single parameter optimization of the proposed approach provides a potential scope for real time video tracking application.",
"title": ""
},
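Ridge regression as used above has the standard closed form w = (X^T X + lambda I)^(-1) X^T Y. A toy sketch with random stand-in data, rather than the chromaticity features and measured illuminants of the study:

import numpy as np

# Toy ridge regression for illuminant estimation; the random matrices stand in
# for per-image features (X) and measured illuminant chromaticities (Y).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                       # 200 images, 50 features
true_w = rng.normal(size=(50, 2))
Y = X @ true_w + 0.1 * rng.normal(size=(200, 2))     # 2-D chromaticity targets

lam = 1.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

estimate = X[:1] @ w_ridge                           # estimate for one image
print(estimate.shape)                                # (1, 2)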
{
"docid": "a60aa2af59270f76d5f3da719186d769",
"text": "There has been much recent discussion on what distribution systems can and should look like in the future. Terms related to this discussion include smart grid, distribution system of the future, and others. Functionally, a smart grid should be able to provide new abilities such as self-healing, high reliability, energy management, and real-time pricing. From a design perspective, a smart grid will likely incorporate new technologies such as advanced metering, automation, communication, distributed generation, and distributed storage. This paper discussed the potential impact that issues related to smart grid will have on distribution system design.",
"title": ""
},
{
"docid": "aa23ee34f7117f6d5f83374b8623f4dc",
"text": "PURPOSE OF REVIEW\nThe notion that play may facilitate learning has long been touted. Here, we review how video game play may be leveraged for enhancing attentional control, allowing greater cognitive flexibility and learning and in turn new routes to better address developmental disorders.\n\n\nRECENT FINDINGS\nVideo games, initially developed for entertainment, appear to enhance the behavior in domains as varied as perception, attention, task switching, or mental rotation. This surprisingly wide transfer may be mediated by enhanced attentional control, allowing increased signal-to-noise ratio and thus more informed decisions.\n\n\nSUMMARY\nThe possibility of enhancing attentional control through targeted interventions, be it computerized training or self-regulation techniques, is now well established. Embedding such training in video game play is appealing, given the astounding amount of time spent by children and adults worldwide with this media. It holds the promise of increasing compliance in patients and motivation in school children, and of enhancing the use of positive impact games. Yet for all the promises, existing research indicates that not all games are created equal: a better understanding of the game play elements that foster attention and learning as well as of the strategies developed by the players is needed. Computational models from machine learning or developmental robotics provide a rich theoretical framework to develop this work further and address its impact on developmental disorders.",
"title": ""
},
{
"docid": "ef345b834b801a36b88d3f462f7c2a0e",
"text": "At the global level of the Big Five, Extraversion and Neuroticism are the strongest predictors of life satisfaction. However, Extraversion and Neuroticism are multifaceted constructs that combine more specific traits. This article examined the contribution of facets of Extraversion and Neuroticism to life satisfaction in four studies. The depression facet of Neuroticism and the positive emotions/cheerfulness facet of Extraversion were the strongest and most consistent predictors of life satisfaction. These two facets often accounted for more variance in life satisfaction than Neuroticism and Extraversion. The findings suggest that measures of depression and positive emotions/cheerfulness are necessary and sufficient to predict life satisfaction from personality traits. The results also lead to a more refined understanding of the specific personality traits that influence life satisfaction: Depression is more important than anxiety or anger and a cheerful temperament is more important than being active or sociable.",
"title": ""
},
{
"docid": "159fcd866264df1d4c100f4da32d93b6",
"text": "Understanding the correlation between two different scores for the same set of items is a common problem in graph analysis and information retrieval. The most commonly used statistics that quantifies this correlation is Kendall's tau; however, the standard definition fails to capture that discordances between items with high rank are more important than those between items with low rank. Recently, a new measure of correlation based on average precision has been proposed to solve this problem, but like many alternative proposals in the literature it assumes that there are no ties in the scores. This is a major deficiency in a number of contexts, and in particular when comparing centrality scores on large graphs, as the obvious baseline, indegree, has a very large number of ties in social networks and web graphs. We propose to extend Kendall's definition in a natural way to take into account weights in the presence of ties. We prove a number of interesting mathematical properties of our generalization and describe an O(n\\log n) algorithm for its computation. We also validate the usefulness of our weighted measure of correlation using experimental data on social networks and web graphs.",
"title": ""
},
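For readers who want to experiment with a weighted rank correlation of this kind, scipy exposes scipy.stats.weightedtau, which (to my understanding) follows this line of work and applies a hyperbolic rank weight by default, alongside the classical tie-aware kendalltau. A small usage sketch with invented indegree and PageRank scores:

# Compare an unweighted, tie-aware Kendall tau with a top-weighted variant.
# The toy scores below mimic the heavy ties typical of indegree on web graphs.
from scipy.stats import kendalltau, weightedtau

indegree = [50, 50, 30, 10, 10, 2, 0]
pagerank = [9.1, 7.3, 8.0, 1.2, 1.1, 0.4, 0.2]

plain_tau, _ = kendalltau(indegree, pagerank)
w_tau, _ = weightedtau(indegree, pagerank)   # hyperbolic weighting by default
print(plain_tau, w_tau)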
{
"docid": "ede8a7a2ba75200dce83e17609ec4b5b",
"text": "We present a complimentary objective for training recurrent neural networks (RNN) with gating units that helps with regularization and interpretability of the trained model. Attention-based RNN models have shown success in many difficult sequence to sequence classification problems with long and short term dependencies, however these models are prone to overfitting. In this paper, we describe how to regularize these models through an L1 penalty on the activation of the gating units, and show that this technique reduces overfitting on a variety of tasks while also providing to us a human-interpretable visualization of the inputs used by the network. These tasks include sentiment analysis, paraphrase recognition, and question answering.",
"title": ""
},
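The regularizer described above amounts to adding an L1 term on the gate or attention activations to the task loss. A minimal PyTorch-style sketch, in which the gate tensor and the coefficient are placeholders rather than anything from the paper:

import torch
import torch.nn.functional as F

# Add an L1 sparsity penalty on gating/attention activations to the task loss.
def regularized_loss(task_loss, gate_activations, l1_coef=1e-3):
    return task_loss + l1_coef * gate_activations.abs().mean()

# usage sketch with dummy tensors
logits = torch.randn(8, 2, requires_grad=True)
labels = torch.randint(0, 2, (8,))
gates = torch.rand(8, 40, requires_grad=True)    # e.g. attention over 40 timesteps
loss = regularized_loss(F.cross_entropy(logits, labels), gates)
loss.backward()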
{
"docid": "846931a1e4c594626da26931110c02d6",
"text": "A large volume of research has been conducted in the cognitive radio (CR) area the last decade. However, the deployment of a commercial CR network is yet to emerge. A large portion of the existing literature does not build on real world scenarios, hence, neglecting various important aspects of commercial telecommunication networks. For instance, a lot of attention has been paid to spectrum sensing as the front line functionality that needs to be completed in an efficient and accurate manner to enable an opportunistic CR network architecture. While on the one hand it is necessary to detect the existence of spectrum holes, on the other hand, simply sensing (cooperatively or not) the energy emitted from a primary transmitter cannot enable correct dynamic spectrum access. For example, the presence of a primary transmitter's signal does not mean that CR network users cannot access the spectrum since there might not be any primary receiver in the vicinity. Despite the existing solutions to the DSA problem no robust, implementable scheme has emerged. The set of assumptions that these schemes are built upon do not always hold in realistic, wireless environments. Specific settings are assumed, which differ significantly from how existing telecommunication networks work. In this paper, we challenge the basic premises of the proposed schemes. We further argue that addressing the technical challenges we face in deploying robust CR networks can only be achieved if we radically change the way we design their basic functionalities. In support of our argument, we present a set of real-world scenarios, inspired by realistic settings in commercial telecommunications networks, namely TV and cellular, focusing on spectrum sensing as a basic and critical functionality in the deployment of CRs. We use these scenarios to show why existing DSA paradigms are not amenable to realistic deployment in complex wireless environments. The proposed study extends beyond cognitive radio networks, and further highlights the often existing gap between research and commercialization, paving the way to new thinking about how to accelerate commercialization and adoption of new networking technologies and services.",
"title": ""
},
{
"docid": "54bdabea83e86d21213801c990c60f4d",
"text": "A method of depicting crew climate using a group diagram based on behavioral ratings is described. Behavioral ratings were made of twelve three-person professional airline cockpit crews in full-mission simulations. These crews had been part of an earlier study in which captains had been had been grouped into three personality types, based on pencil and paper pre-tests. We found that low error rates were related to group climate variables as well as positive captain behaviors.",
"title": ""
},
{
"docid": "cd1a9d210cc1c7b544fe81d4a3a31250",
"text": "Data mining is vast field for mining knowledge in various fields of life. Crime mining is one of the applications focused here. Credit card and web based crime are increasingly as more technologies are rising high. To deal and overcome fraud clustering and classification techniques are implemented. Framework and process models are designed that provide user results, graphs and trees that help user to find criminals without any complex computation.",
"title": ""
},
{
"docid": "47cf10951d13e1da241a5551217aa2d5",
"text": "Despite the widespread adoption of building information modelling (BIM) for the design and lifecycle management of new buildings, very little research has been undertaken to explore the value of BIM in the management of heritage buildings and cultural landscapes. To that end, we are investigating the construction of BIMs that incorporate both quantitative assets (intelligent objects, performance data) and qualitative assets (historic photographs, oral histories, music). Further, our models leverage the capabilities of BIM software to provide a navigable timeline that chronicles tangible and intangible changes in the past and projections into the future. In this paper, we discuss three projects undertaken by the authors that explore an expanded role for BIM in the documentation and conservation of architectural heritage. The projects range in scale and complexity and include: a cluster of three, 19th century heritage buildings in the urban core of Toronto, Canada; a 600 hectare village in rural, south-eastern Ontario with significant modern heritage value, and a proposed web-centered BIM database for materials and methods of construction specific to heritage conservation.",
"title": ""
},
{
"docid": "450401c2092f881e26210e27d01d6195",
"text": "This article describes what should typically be included in the introduction, method, results, and discussion sections of a meta-analytic review. Method sections include information on literature searches, criteria for inclusion of studies, and a listing of the characteristics recorded for each study. Results sections include information describing the distribution of obtained effect sizes, central tendencies, variability, tests of significance, confidence intervals, tests for heterogeneity, and contrasts (univariate or multivariate). The interpretation of meta-analytic results is often facilitated by the inclusion of the binomial effect size display procedure, the coefficient of robustness, file drawer analysis, and, where overall results are not significant, the counternull value of the obtained effect size and power analysis.",
"title": ""
},
{
"docid": "ad9f3510ffaf7d0bdcf811a839401b83",
"text": "The stator permanent magnet (PM) machines have simple and robust rotor structure as well as high torque density. The hybrid excitation topology can realize flux regulation and wide constant power operating capability of the stator PM machines when used in dc power systems. This paper compares and analyzes the electromagnetic performance of different hybrid excitation stator PM machines according to different combination modes of PMs, excitation winding, and iron flux bridge. Then, the control strategies for voltage regulation of dc power systems are discussed based on different critical control variables including the excitation current, the armature current, and the electromagnetic torque. Furthermore, an improved direct torque control (DTC) strategy is investigated to improve system performance. A parallel hybrid excitation flux-switching generator employing the improved DTC which shows excellent dynamic and steady-state performance has been achieved experimentally.",
"title": ""
},
{
"docid": "9c5535f218f6228ba6b2a8e5fdf93371",
"text": "Recent analyses of organizational change suggest a growing concern with the tempo of change, understood as the characteristic rate, rhythm, or pattern of work or activity. Episodic change is contrasted with continuous change on the basis of implied metaphors of organizing, analytic frameworks, ideal organizations, intervention theories, and roles for change agents. Episodic change follows the sequence unfreeze-transition-refreeze, whereas continuous change follows the sequence freeze-rebalance-unfreeze. Conceptualizations of inertia are seen to underlie the choice to view change as episodic or continuous.",
"title": ""
},
{
"docid": "f07d714a45f26e6b80c6a315e50cdf92",
"text": "X-ray images are the essential aiding means all along in clinical diagnosis of fracture. So the processing and analysis of X-ray fracture images is particularly important. Extracting the features of X-ray images is a very important process in classifying fracture images according to the principle of AO classification of fractures. A proposed algorithm is used in this paper. First, use marker-controlled watershed transform based on gradient and homotopy modification to segment X-ray fracture images. Then the features consisted of region number, region area, region centroid and protuberant polygon of fracture image are extracted by marker processing and regionprops function. Next we use Hough transform to detect and extract lines in the protuberant polygon of X-ray fracture image. The lines are consisted of fracture line and parallel lines of centerline. Through the parallel lines of centerline, we obtain centerline over centroid and perpendicular line of centerline over centroid. Finally compute the angle between fracture line and perpendicular line of centerline. This angle can be used to classify femur backbone fracture.",
"title": ""
},
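A rough sketch of that pipeline (marker-controlled watershed, region features, Hough lines) using scikit-image on a synthetic image; the marker placement, thresholds and the crop around the simulated fracture are placeholders rather than the paper's settings.

import numpy as np
from skimage import filters, measure, segmentation, transform

img = np.zeros((128, 128))
img[40:90, 20:100] = 1.0                     # stand-in for the bone region
img[62:66, 20:100] = 0.0                     # stand-in for the fracture gap

# marker-controlled watershed on the gradient image
markers = np.zeros_like(img, dtype=int)
markers[5, 5] = 1                            # background marker
markers[50, 60] = 2                          # marker inside the bone
labels = segmentation.watershed(filters.sobel(img), markers)

# region features (area, centroid), loosely mirroring the regionprops step
for region in measure.regionprops(measure.label(labels == 2)):
    print(region.area, region.centroid)

# Hough transform on a crop around the gap to recover the fracture line
h, theta, d = transform.hough_line(img[45:85, 25:95] < 0.5)
_, angles, _ = transform.hough_line_peaks(h, theta, d)
print(np.degrees(angles))                    # normal angles of the detected lines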
{
"docid": "3e357c91292ba1e1055fc3a493aba4eb",
"text": "The study of online social networks has attracted increasing interest. However, concerns are raised for the privacy risks of user data since they have been frequently shared among researchers, advertisers, and application developers. To solve this problem, a number of anonymization algorithms have been recently developed for protecting the privacy of social graphs. In this article, we proposed a graph node similarity measurement in consideration with both graph structure and descriptive information, and a deanonymization algorithm based on the measurement. Using the proposed algorithm, we evaluated the privacy risks of several typical anonymization algorithms on social graphs with thousands of nodes from Microsoft Academic Search, LiveJournal, and the Enron email dataset, and a social graph with millions of nodes from Tencent Weibo. Our results showed that the proposed algorithm was efficient and effective to deanonymize social graphs without any initial seed mappings. Based on the experiments, we also pointed out suggestions on how to better maintain the data utility while preserving privacy.",
"title": ""
},
{
"docid": "3bc34f3ef98147015e2ad94a6c615348",
"text": "Objective methods for assessing perceptual image quality traditionally attempt to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MatLab implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/. Keywords— Image quality assessment, perceptual quality, human visual system, error sensitivity, structural similarity, structural information, image coding, JPEG, JPEG2000",
"title": ""
},
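The index referenced above is commonly computed per local window as SSIM(x, y) = ((2*mu_x*mu_y + C1)(2*sigma_xy + C2)) / ((mu_x^2 + mu_y^2 + C1)(sigma_x^2 + sigma_y^2 + C2)). A simplified single-window sketch follows; the published index averages this quantity over sliding windows, so treat this as an approximation.

import numpy as np

# Global (single-window) SSIM; constants follow the usual K1=0.01, K2=0.03 choice.
def ssim_global(x, y, data_range=255.0):
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(1)
ref = rng.uniform(0, 255, (64, 64))
noisy = np.clip(ref + rng.normal(0, 20, ref.shape), 0, 255)
print(ssim_global(ref, ref))     # 1.0 for identical images
print(ssim_global(ref, noisy))   # < 1.0 under distortion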
{
"docid": "982ee984dda5930b025ac93749c3cf3f",
"text": "We present an application for the simulation of errors in storage systems. The software is completely parameterizable in order to simulate different types of disk errors and disk array configurations. It can be used to verify and optimize error correction schemes for storage. Realistic simulation of disk errors is a complex task as many test rounds need to be performed in order to characterize the performance of an algorithm based on highly sporadic errors under a large variety of parameters. The software allows different levels of abstraction to perform quick tests for rough estimations as well as detailed configurations for more realistic but complex simulation runs. We believe that this simulation software is the first one that is able to cover a complete range of disk error types in many commonly used disk array configurations.",
"title": ""
},
{
"docid": "fbd00a26883954ba0ef290efdc777e9e",
"text": "A century of revolutionary growth in aviation has made global travel a reality of daily life. Aircraft and air transport overcame a number of formidable challenges and hostilities in the physical world. Success in this arduous pursuit was not without leveraging advances of the “cyber” layer, i.e., digital computing, data storage and networking, and software, in hardware, infrastructures, humans, and processes, within the airframe, in space, and on the ground. The physical world, however, is evolving continuously in the 21st century, contributing traffic growth and diversity, fossil fuel and ozone layer depletion, demographics and economy dynamics, as some major factors in aviation performance equations. In the next 100 years, apart from breakthrough physical advances, such as aircraft structural and electrical designs, we envision aviation's progress will depend on conquering cyberspace challenges and adversities, while safely and securely transitioning cyber benefits to the physical world. A tight integration of cyberspace with the physical world streamlines this vision. This paper proposes a novel cyber-physical system (CPS) framework to understand the cyber layer and cyber-physical interactions in aviation, study their impacts, and identify valuable research directions. This paper presents CPS challenges and solutions for aircraft, aviation users, airports, and air traffic management.",
"title": ""
},
{
"docid": "76a7f7688238fb4c0d2dd2f817194302",
"text": "Automatic political preference prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US. This study examines users’ political ideology using a sevenpoint scale which enables us to identify politically moderate and neutral users – groups which are of particular interest to political scientists and pollsters. Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the political groups of users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users. Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords. Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.",
"title": ""
},
{
"docid": "bb75aa9bbe07e635493b123eaaadf74d",
"text": "Right ventricular (RV) pacing increases the incidence of atrial fibrillation (AF) and hospitalization rate for heart failure. Many patients with sinus node dysfunction (SND) are implanted with a DDDR pacemaker to ensure the treatment of slowly conducted atrial fibrillation and atrioventricular (AV) block. Many pacemakers are never reprogrammed after implantation. This study aims to evaluate the effectiveness of programming DDIR with a long AV delay in patients with SND and preserved AV conduction as a possible strategy to reduce RV pacing in comparison with a nominal DDDR setting including an AV search hysteresis. In 61 patients (70 ± 10 years, 34 male, PR < 200 ms, AV-Wenckebach rate at ≥130 bpm) with symptomatic SND a DDDR pacemaker was implanted. The cumulative prevalence of right ventricular pacing was assessed according to the pacemaker counter in the nominal DDDR-Mode (AV delay 150/120 ms after atrial pacing/sensing, AV search hysteresis active) during the first postoperative days and in DDIR with an individually programmed long fixed AV delay after 100 days (median). With the nominal DDDR mode the median incidence of right ventricular pacing amounted to 25.2%, whereas with DDIR and long AV delay the median prevalence of RV pacing was significantly reduced to 1.1% (P < 0.001). In 30 patients (49%) right ventricular pacing was almost completely (<1%) eliminated, n = 22 (36%) had >1% <20% and n = 4 (7%) had >40% right ventricular pacing. The median PR interval was 161 ms. The median AV interval with DDIR was 280 ms. The incidence of right ventricular pacing in patients with SND and preserved AV conduction, who are treated with a dual chamber pacemaker, can significantly be reduced by programming DDIR with a long, individually adapted AV delay when compared with a nominal DDDR setting, but nonetheless in some patients this strategy produces a high proportion of disadvantageous RV pacing. The DDIR mode with long AV delay provides an effective strategy to reduce unnecessary right ventricular pacing but the effect has to be verified in every single patient.",
"title": ""
}
] |
scidocsrr
|
6445df4e797c314577660898b19e0b73
|
Convolution by Evolution: Differentiable Pattern Producing Networks
|
[
{
"docid": "28c03f6fb14ed3b7d023d0983cb1e12b",
"text": "The focus of this paper is speeding up the application of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition [15], showing a possible 2.5⇥ speedup with no loss in accuracy, and 4.5⇥ speedup with less than 1% drop in accuracy, still achieving state-of-the-art on standard benchmarks.",
"title": ""
}
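The spatial rank-1 idea above can be illustrated with a plain SVD: keeping the leading singular term of a k x k filter lets the 2-D convolution factor into a column pass followed by a row pass. A numpy/scipy sketch, independent of any particular CNN framework:

import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
filt = rng.normal(size=(5, 5))
U, S, Vt = np.linalg.svd(filt)
col = U[:, :1] * np.sqrt(S[0])           # 5x1 vertical filter
row = Vt[:1, :] * np.sqrt(S[0])          # 1x5 horizontal filter
rank1 = col @ row                        # best rank-1 approximation of filt

img = rng.normal(size=(32, 32))
direct = convolve2d(img, rank1, mode='valid')
separable = convolve2d(convolve2d(img, col, mode='valid'), row, mode='valid')
print(np.allclose(direct, separable))    # True: ~2k multiplies per pixel instead of k^2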
] |
[
{
"docid": "28a9a0d096fa469ed00934336edd3331",
"text": "The new generation of field programmable gate array (FPGA) technologies enables an embedded processor intellectual property (IP) and an application IP to be integrated into a system-on-a-programmable-chip (SoPC) developing environment. Therefore, this study presents a speed control integrated circuit (IC) for permanent magnet synchronous motor (PMSM) drive under this SoPC environment. First, the mathematic model of PMSM is defined and the vector control used in the current loop of PMSM drive is explained. Then, an adaptive fuzzy controller adopted to cope with the dynamic uncertainty and external load effect in the speed loop of PMSM drive is proposed. After that, an FPGA-based speed control IC is designed to realize the controllers. The proposed speed control IC has two IPs, a Nios II embedded processor IP and an application IP. The Nios II processor is used to develop the adaptive fuzzy controller in software due to the complicated control algorithm and low sampling frequency control (speed control: 2 kHz). The designed application IP is utilized to implement the current vector controller in hardware owing to the requirement for high sampling frequency control (current loop: 16 kHz, pulsewidth modulation circuit: 4-8 MHz) but simple computation. Finally, an experimental system is set up and some experimental results are demonstrated.",
"title": ""
},
{
"docid": "dd956cadc4158b6529cca0966c446845",
"text": "One characteristic that sets humans apart from modern learning-based computer vision algorithms is the ability to acquire knowledge about the world and use that knowledge to reason about the visual world. Humans can learn about the characteristics of objects and the relationships that occur between them to learn a large variety of visual concepts, often with few examples. This paper investigates the use of structured prior knowledge in the form of knowledge graphs and shows that using this knowledge improves performance on image classification. We build on recent work on end-to-end learning on graphs, introducing the Graph Search Neural Network as a way of efficiently incorporating large knowledge graphs into a vision classification pipeline. We show in a number of experiments that our method outperforms standard neural network baselines for multi-label classification.",
"title": ""
},
{
"docid": "ca0bad00d17cab8301820745a1377f29",
"text": "With the evolution of conventional VANETs (Vehicle Ad-hoc Networks) into the IoV (Internet of Vehicles), vehicle-based spatial crowdsourcing has become a potential solution for crowdsourcing applications. In vehicular networks, a spatial-temporal task/question can be outsourced (i.e., task/question relating to a particular location and in a specific time period) to some suitable smart vehicles (also known as workers) and then these workers can help solve the task/question. However, an inevitable barrier to the widespread deployment of spatial crowdsourcing applications in vehicular networks is the concern of privacy. Hence, We propose a novel privacy-friendly spatial crowdsourcing scheme. Unlike the existing schemes, the proposed scheme considers the privacy issue from a new perspective according that the spatial-temporal tasks can be linked and analyzed to break the location privacy of workers. Specifically, to address the challenge, three privacy requirements (i.e. anonymity, untraceability, and unlinkability) are defined and the proposed scheme combines an efficient anonymous technique with a new composite privacy metric to protect against attackers. Detailed privacy analyses show that the proposed scheme is privacy-friendly. In addition, performance evaluations via extensive simulations are also conducted, and the results demonstrate the efficiency and effectiveness of the proposed scheme.",
"title": ""
},
{
"docid": "87fa3f2317b53520839bc3cb90cf291b",
"text": "In an experimental study of language switching and selection, bilinguals named numerals in either their first or second language unpredictably. Response latencies (RTs) on switch trials (where the response language changed from the previous trial) were slower than on nonswitch trials. As predicted, the language-switching cost was consistently larger when switching to the dominant L 1 from the weaker L2 than vice versa such that, on switch trials, L 1 responses were slower than in L 2. This “paradoxical” asymmetry in the cost of switching languages is explained in terms of differences in relative strength of the bilingual’s two languages and the involuntary persistence of the previous language set across an intended switch of language. Naming in the weaker language, L 2, requires active inhibition or suppression of the stronger competitor language, L 1; the inhibition persists into the following (switch) trial in the form of “negative priming” of the L 1 lexicon as a whole. © 1999 Academic Press",
"title": ""
},
{
"docid": "4f84d3a504cf7b004a414346bb19fa94",
"text": "Abstract—The electric power supplied by a photovoltaic power generation systems depends on the solar irradiation and temperature. The PV system can supply the maximum power to the load at a particular operating point which is generally called as maximum power point (MPP), at which the entire PV system operates with maximum efficiency and produces its maximum power. Hence, a Maximum power point tracking (MPPT) methods are used to maximize the PV array output power by tracking continuously the maximum power point. The proposed MPPT controller is designed for 10kW solar PV system installed at Cape Institute of Technology. This paper presents the fuzzy logic based MPPT algorithm. However, instead of one type of membership function, different structures of fuzzy membership functions are used in the FLC design. The proposed controller is combined with the system and the results are obtained for each membership functions in Matlab/Simulink environment. Simulation results are decided that which membership function is more suitable for this system.",
"title": ""
},
{
"docid": "d86eb65183f059a4ca7cb0ad9190a0ca",
"text": "Different short circuits, load growth, generation shortage, and other faults which disturb the voltage and frequency stability are serious threats to the system security. The frequency and voltage instability causes dispersal of a power system into sub-systems, and leads to blackout as well as heavy damages of the system equipment. This paper presents a fast and optimal adaptive load shedding method, for isolated power system using Artificial Neural Networks (ANN). The proposed method is able to determine the necessary load shedding in all steps simultaneously and is much faster than conventional methods. This method has been tested on the New-England power system. The simulation results show that the proposed algorithm is fast, robust and optimal values of load shedding in different loading scenarios are obtained in comparison with conventional method.",
"title": ""
},
{
"docid": "be8b65d39ee74dbee0835052092040da",
"text": "We examine the problem of question answering over knowledge graphs, focusing on simple questions that can be answered by the lookup of a single fact. Adopting a straightforward decomposition of the problem into entity detection, entity linking, relation prediction, and evidence combination, we explore simple yet strong baselines. On the popular SIMPLEQUESTIONS dataset, we find that basic LSTMs and GRUs plus a few heuristics yield accuracies that approach the state of the art, and techniques that do not use neural networks also perform reasonably well. These results show that gains from sophisticated deep learning techniques proposed in the literature are quite modest and that some previous models exhibit unnecessary complexity.",
"title": ""
},
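As a flavour of how small such baselines can be, here is a toy sketch of just the relation-prediction component (a GRU encoder plus a linear scorer). The vocabulary size, the relation inventory and the surrounding entity detection, linking and evidence-combination steps are placeholders, not the paper's setup.

import torch
import torch.nn as nn

class RelationPredictor(nn.Module):
    def __init__(self, vocab=5000, emb=100, hidden=128, n_relations=1000):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.gru = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_relations)

    def forward(self, token_ids):
        _, h = self.gru(self.embed(token_ids))
        return self.out(h[-1])                    # one score per candidate relation

model = RelationPredictor()
questions = torch.randint(0, 5000, (2, 12))       # batch of 2 questions, 12 tokens each
print(model(questions).shape)                     # torch.Size([2, 1000])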
{
"docid": "406fab96a8fd49f4d898a9735ee1512f",
"text": "An otolaryngology phenol applicator kit can be successfully and safely used in the performance of chemical matricectomy. The applicator kit provides a convenient way to apply phenol to the nail matrix precisely and efficiently, whereas minimizing both the risk of application to nonmatrix surrounding soft tissue and postoperative recovery time.Given the smaller size of the foam-tipped applicator, we feel that this is a more precise tool than traditional cotton-tipped applicators for chemical matricectomy. Particularly with regard to lower extremity nail ablation and matricectomy, minimizing soft tissue inflammation could in turn reduce the risk of postoperative infections, decrease recovery time, as well and make for a more positive overall patient experience.",
"title": ""
},
{
"docid": "c9fc426722df72b247093779ad6e2c0e",
"text": "Biped robots have better mobility than conventional wheeled robots, but they tend to tip over easily. To be able to walk stably in various environments, such as on rough terrain, up and down slopes, or in regions containing obstacles, it is necessary for the robot to adapt to the ground conditions with a foot motion, and maintain its stability with a torso motion. When the ground conditions and stability constraint are satisfied, it is desirable to select a walking pattern that requires small torque and velocity of the joint actuators. In this paper, we first formulate the constraints of the foot motion parameters. By varying the values of the constraint parameters, we can produce different types of foot motion to adapt to ground conditions. We then propose a method for formulating the problem of the smooth hip motion with the largest stability margin using only two parameters, and derive the hip trajectory by iterative computation. Finally, the correlation between the actuator specifications and the walking patterns is described through simulation studies, and the effectiveness of the proposed methods is confirmed by simulation examples and experimental results.",
"title": ""
},
{
"docid": "7c12f0dfccc4d4c22e180b4a515612bd",
"text": "VIJAYA COLLEGE Page 1 Database Management Systems",
"title": ""
},
{
"docid": "f62ea522062fb48860c98140d746ab23",
"text": "Feature selection is widely used in preparing high-dimensional data for effective data mining. The explosive popularity of social media produces massive and high-dimensional data at an unprecedented rate, presenting new challenges to feature selection. Social media data consists of (1) traditional high-dimensional, attribute-value data such as posts, tweets, comments, and images, and (2) linked data that provides social context for posts and describes the relationships between social media users as well as who generates the posts, and so on. The nature of social media also determines that its data is massive, noisy, and incomplete, which exacerbates the already challenging problem of feature selection. In this article, we study a novel feature selection problem of selecting features for social media data with its social context. In detail, we illustrate the differences between attribute-value data and social media data, investigate if linked data can be exploited in a new feature selection framework by taking advantage of social science theories. We design and conduct experiments on datasets from real-world social media Web sites, and the empirical results demonstrate that the proposed framework can significantly improve the performance of feature selection. Further experiments are conducted to evaluate the effects of user--user and user--post relationships manifested in linked data on feature selection, and research issues for future work will be discussed.",
"title": ""
},
{
"docid": "13b887760a87bc1db53b16eb4fba2a01",
"text": "Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.",
"title": ""
},
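One common ingredient in time-aware collaborative filtering of this kind is a user bias that drifts with time, for example b_u(t) = b_u + alpha_u * sign(t - t_u) * |t - t_u|^beta, where t_u is the user's mean rating date. A minimal sketch with illustrative, unfitted parameters:

import math

# Time-drifting user bias; beta=0.4 is a commonly quoted choice, used here
# purely for illustration.
def time_aware_user_bias(b_u, alpha_u, t, t_mean_u, beta=0.4):
    dev = math.copysign(abs(t - t_mean_u) ** beta, t - t_mean_u)
    return b_u + alpha_u * dev

# a user whose ratings drift upward over time (t measured in days)
print(time_aware_user_bias(b_u=0.2, alpha_u=0.01, t=300, t_mean_u=120))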
{
"docid": "bc807811f3aefdd15ff338bf80c10225",
"text": "HOGENBOOM. 1986. Pollen selection in breeding glasshouse tomatoes for low energy conditions, pp. 125-130. In D. Mulcahy, G. Bergamini Mulcahy, and E. Ottaviano (eds.), Biotechnology and Ecology of Pollen. Springer-Verlag, N.Y. SARI-GORLA, M. C. FROVA, AND R. REDAELLI. 1986. Extent ofgene expression at the gametophytic phase in maize, pp. 27-32. In D. Mulcahy, G. Bergamini Mulcahy, and E. Ottaviano (eds.), Biotechnology and Ecology of Pollen. Springer-Verlag, N.Y. SEARCY, K., AND D. MULCAHY. 1986. Gametophytic expression of heavy metal tolerance, pp. 159-164. In D. Mulcahy, G. Bergamini Mulcahy, and E. Ottaviano (eds.), Biotechnology and Ecology of Pollen. Springer-Verlag, N.Y. SHfYANNA, K. R., AND J. HESLOP-HARRISON. 1981. Membrane state and pollen viability. J. Ann. Bot. 47:759-766. SIMON, J., AND J. C. SANFORD. 1986. Induction of gametic selection in situ by stylar application of selective agents, pp. 107-112. In D. Mulcahy, G. Bergamini Mulcahy, and E. Ottaviano (eds.), Biotechnology and Ecology ofPollen. Springer-Verlag, N.Y. SNOW, A. A. 1986. Pollination dynamics in Epilobium canum (Onagraceae): Consequences for gametophytic selection. Amer. J. Bot. 73:139-151.",
"title": ""
},
{
"docid": "98c72706e0da844c80090c1ed5f3abeb",
"text": "Autoencoders provide a powerful framework for learning compressed representations by encoding all of the information needed to reconstruct a data point in a latent code. In some cases, autoencoders can “interpolate”: By decoding the convex combination of the latent codes for two datapoints, the autoencoder can produce an output which semantically mixes characteristics from the datapoints. In this paper, we propose a regularization procedure which encourages interpolated outputs to appear more realistic by fooling a critic network which has been trained to recover the mixing coefficient from interpolated data. We then develop a simple benchmark task where we can quantitatively measure the extent to which various autoencoders can interpolate and show that our regularizer dramatically improves interpolation in this setting. We also demonstrate empirically that our regularizer produces latent codes which are more effective on downstream tasks, suggesting a possible link between interpolation abilities and learning useful representations.",
"title": ""
},
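A simplified sketch of this style of interpolation regularizer: a critic is trained to recover the mixing coefficient from decoded interpolants, while the autoencoder receives an extra term pushing the critic's prediction toward zero. The encoder, decoder and critic below are linear placeholders, not the architectures evaluated in the paper.

import torch
import torch.nn.functional as F

def interpolation_losses(encoder, decoder, critic, x1, x2, lam=0.5):
    alpha = 0.5 * torch.rand(x1.size(0), 1)                   # mixing coefficient in [0, 0.5]
    z_mix = alpha * encoder(x1) + (1 - alpha) * encoder(x2)
    x_mix = decoder(z_mix)
    critic_loss = F.mse_loss(critic(x_mix.detach()), alpha)   # critic tries to recover alpha
    ae_reg = lam * critic(x_mix).pow(2).mean()                # autoencoder tries to fool the critic
    return critic_loss, ae_reg

enc, dec = torch.nn.Linear(784, 32), torch.nn.Linear(32, 784)
cri = torch.nn.Linear(784, 1)
x1, x2 = torch.rand(8, 784), torch.rand(8, 784)
c_loss, reg = interpolation_losses(enc, dec, cri, x1, x2)
print(c_loss.item(), reg.item())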
{
"docid": "d5cc92aad3e7f1024a514ff4e6379c86",
"text": "This chapter describes the convergence of two of the most influential technologies in the last decade, namely business intelligence (BI) and the Semantic Web (SW). Business intelligence is used by almost any enterprise to derive important business-critical knowledge from both internal and (increasingly) external data. When using external data, most often found on the Web, the most important issue is knowing the precise semantics of the data. Without this, the results cannot be trusted. Here, Semantic Web technologies come to the rescue, as they allow semantics ranging from very simple to very complex to be specified for any web-available resource. SW technologies do not only support capturing the “passive” semantics, but also support active inference and reasoning on the data. The chapter first presents a motivating running example, followed by an introduction to the relevant SW foundation concepts. The chapter then goes on to survey the use of SW technologies for data integration, including semantic DOI: 10.4018/978-1-61350-038-5.ch014",
"title": ""
},
{
"docid": "6c64e7ca2e41a6eb70fe39747b706bf8",
"text": "Network Functions Virtualization (NFV) has enabled operators to dynamically place and allocate resources for network services to match workload requirements. However, unbounded end-to-end (e2e) latency of Service Function Chains (SFCs) resulting from distributed Virtualized Network Function (VNF) deployments can severely degrade performance. In particular, SFC instantiations with inter-data center links can incur high e2e latencies and Service Level Agreement (SLA) violations. These latencies can trigger timeouts and protocol errors with latency-sensitive operations.\n Traditional solutions to reduce e2e latency involve physical deployment of service elements in close proximity. These solutions are, however, no longer viable in the NFV era. In this paper, we present our solution that bounds the e2e latency in SFCs and inter-VNF control message exchanges by creating micro-service aggregates based on the affinity between VNFs. Our system, Contain-ed, dynamically creates and manages affinity aggregates using light-weight virtualization technologies like containers, allowing them to be placed in close proximity and hence bounding the e2e latency. We have applied Contain-ed to the Clearwater IP Multimedia System and built a proof-of-concept. Our results demonstrate that, by utilizing application and protocol specific knowledge, affinity aggregates can effectively bound SFC delays and significantly reduce protocol errors and service disruptions.",
"title": ""
},
{
"docid": "6eca055c09966b85aca19012d9967ee0",
"text": "The Penn Treebank, in its eight years of operation (1989-1996), produced approximately 7 million words of part-of-speech tagged text, 3 million words of skeletally parsed text, over 2 million words of text parsed for predicateargument structure, and 1.6 million words of transcribed spoken text annotated for speech disfluencies. This paper describes the design of the three annotation schemes used by the Treebank: POS tagging, syntactic bracketing, and disfluency annotation and the methodology employed in production. All available Penn Treebank materials are distributed by the Linguistic Data Consortium http://www.ldc.upenn.edu.",
"title": ""
},
{
"docid": "bcb70572c1e4a7ddf89e72a7b998f479",
"text": "An (m, n, k, λa, λc) optical orthogonal signature pattern code (OOSPC) is a family C of m × n (0, 1)-matrices of Hamming weight k satisfying two correlation properties. OOSPCs find application in transmitting two-dimensional image through multicore fiber in CDMA networks. Let Θ(m, n, k, λa, λc) denote the largest possible number of codewords among all (m, n, k, λa, λc)-OOSPCs. An (m, n, k, λa, λc)-OOSPCwithΘ(m, n, k, λa, λc) codewords is said to be maximum. For the case λa = λc = λ, the notations (m, n, k, λa, λc)-OOSPC and Θ(m, n, k, λa, λc) can be briefly written as (m, n, k, λ)-OOSPC and Θ(m, n, k, λ). In this paper, some direct constructions for (3, n, 4, 1)-OOSPCs, which are based on skew starters and an application of the Theorem of Weil on multiplicative character sums, are given for some positive integer n. Several recursive constructions for (m, n, k, 1)-OOSPCs are presented by means of incomplete different matrices and group divisible designs. By utilizing those constructions, the number of the codewords of a maximum (m, n, 4, 1)OOSPC is determined for any positive integers m, n such that gcd(m, 18) = 3 and n ≡ 0 (mod 12). It is established that Θ(m, n, 4, 1) = (mn − 12)/12 for any positive integers m, n such that gcd(m, 18) = 3 and n ≡ 0 (mod 12). © 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5455dea4ef1b6864740eedf0587e6bfd",
"text": "The ‘will, skill, tool’ model is a well-established theoretical framework that elucidates the conditions under which teachers are most likely to employ information and communication technologies (ICT) in the classroom. Past studies have shown that these three factors explain a very high degree of variance in the frequency of classroom ICT use. The present study replicates past findings using a different set of measures and hones in on possible subfactors. Furthermore, the study examines teacher affiliation for constructivist-style teaching, which is often considered to facilitate the pedagogical use of digital media. The study’s survey of 357 Swiss secondary school teachers reveals significant positive correlations between will, skill, and tool variables and the combined frequency and diversity of technology use in teaching. A multiple linear regression model was used to identify relevant subfactors. Five factors account for a total of 60% of the explained variance in the intensity of classroom ICT use. Computer and Internet applications are more often used by teachers in the classroom when: (1) teachers consider themselves to be more competent in using ICT for teaching; (2) more computers are readily available; (3) the teacher is a form teacher and responsible for the class; (4) the teacher is more convinced that computers improve student learning; and (5) the teacher more often employs constructivist forms of teaching and learning. The impact of constructivist teaching was small, however. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "52c9d8a1bf6fabbe0771eef75a64c1d8",
"text": "This paper presents a convolutional neural network based approach for estimating the relative pose between two cameras. The proposed network takes RGB images from both cameras as input and directly produces the relative rotation and translation as output. The system is trained in an end-to-end manner utilising transfer learning from a large scale classification dataset. The introduced approach is compared with widely used local feature based methods (SURF, ORB) and the results indicate a clear improvement over the baseline. In addition, a variant of the proposed architecture containing a spatial pyramid pooling (SPP) layer is evaluated and shown to further improve the performance.",
"title": ""
}
] |
scidocsrr
|
8fe3e1cc772d1b40d7f05384341d7b98
|
Independent motion detection with event-driven cameras
|
[
{
"docid": "609cc8dd7323e817ddfc5314070a68bf",
"text": "We present EVO, an event-based visual odometry algorithm. Our algorithm successfully leverages the outstanding properties of event cameras to track fast camera motions while recovering a semidense three-dimensional (3-D) map of the environment. The implementation runs in real time on a standard CPU and outputs up to several hundred pose estimates per second. Due to the nature of event cameras, our algorithm is unaffected by motion blur and operates very well in challenging, high dynamic range conditions with strong illumination changes. To achieve this, we combine a novel, event-based tracking approach based on image-to-model alignment with a recent event-based 3-D reconstruction algorithm in a parallel fashion. Additionally, we show that the output of our pipeline can be used to reconstruct intensity images from the binary event stream, though our algorithm does not require such intensity information. We believe that this work makes significant progress in simultaneous localization and mapping by unlocking the potential of event cameras. This allows us to tackle challenging scenarios that are currently inaccessible to standard cameras.",
"title": ""
}
] |
[
{
"docid": "cbae4d5eb347a8136f34fb370d28f46b",
"text": "Available online 18 November 2013",
"title": ""
},
{
"docid": "3f98deae1ccf36f9758958ee785bb294",
"text": "The Thrombolysis In Myocardial Infarction (TIMI) risk score predicts adverse clinical outcomes in patients with non-ST-elevation acute coronary syndromes (NSTEACS). Whether this score correlates with the coronary anatomy is unknown. We sought to determine whether the TIMI risk score correlates with the angiographic extent and severity of coronary artery disease (CAD) in patients with NSTEACS undergoing cardiac catheterization. We conducted a retrospective review of 688 consecutive medical records of patients who underwent coronary angiography secondary to NSTEACS. Patients were classified into 3 categories according to TIMI risk score: TIMI scores 0 to 2 (n = 284), 3 to 4 (n = 301), and 5 to 7 (n = 103). One-vessel disease was found in patients with TIMI score 3 to 4 as often as in patients with TIMI score 0 to 2 (odds ratio [OR] 1.08, 95% confidence interval [CI] 0.74 to 1.56; p = 0.66). However, 1-vessel disease was found more often in patients with TIMI score 3 to 4 than in patients with TIMI score 5 to 7 (OR 2.16, 95% CI 1.18 to 3.95; p = 0.01), and in patients with TIMI score 0 to 2 than in those with TIMI score 5 to 7 (OR 1.99, 95% CI 1.08 to 3.66; p = 0.02). Two-vessel disease was more likely found in patients with TIMI score 3 to 4 than in those with TIMI scores 0 to 2 (OR 3.96, 95% CI 2.41 to 6.53; p <0.001) and 5 to 7 (OR 2.05, 95% CI 1.12 to 3.75; p = 0.004). Three-vessel or left main disease was more likely found in patients with TIMI score 3 to 4 than in patients with TIMI score 0 to 2 (OR 3.19, 95% CI 2.00 to 5.10; p <0.001), and in patients with TIMI score 5 to 7 than in patients with TIMI score 3 to 4 (OR 6.34, 95% CI 3.88 to 10.36; p <0.001). In patients with NSTEACS undergoing cardiac catheterization, the TIMI risk score correlated with the extent and severity of CAD.",
"title": ""
},
{
"docid": "37f14a10e08cbb4d4034d19a7d3bf24e",
"text": "Development of Mobile handset applications, new standard for cellular networks have been defined. In this Paper, author intend to propose a Novel mobile Antenna that can cover more of LTE (Long Term Evolution) Bands (4G cellular networks). The proposed antenna uses structure of planar monopole antenna. Bandwidth of antenna is 0.87-0.99 GHz, 1.65-3.14 GHz and has high efficiency unlike the previous structures. The dimension of the antenna is 18mm×21mm and has FR4 substrate by 1.5mm thickness that is very compact antenna respect to the other expressed antenna.",
"title": ""
},
{
"docid": "b947bfe4a4cd38b880ae96ad607479c1",
"text": "In order to solve the emergency decision management problem with uncertainty, an Emergency Bayesian decision network (EBDN) model is used in this paper. By computing the probability of each node, the EBDN can solve the uncertainty of different response measures. Using Gray system theory to determine the weight of all kinds of emergency losses. And then use genetic algorithm to search the best combination measure by comparing the value of output loss. For illustration, a typhoon example is utilized to show the feasibility of EBDN model. Empirical results show that the EBDN model can combine expert's knowledge and historic data to predict expected effects under different combinations of response measures, and then choose the best one. The proposed EBDN model can combine the decision process into a diagrammatic form, and thus the uncertainty of emergency events in solving emergency dynamic decision making is solved.",
"title": ""
},
{
"docid": "c1694750a148296c8b907eb6d1a86074",
"text": "A field experiment was carried out to implement a remote sensing energy balance (RSEB) algorithm for estimating the incoming solar radiation (Rsi), net radiation (Rn), sensible heat flux (H), soil heat flux (G) and latent heat flux (LE) over a drip-irrigated olive (cv. Arbequina) orchard located in the Pencahue Valley, Maule Region, Chile (35 ̋251S; 71 ̋441W; 90 m above sea level). For this study, a helicopter-based unmanned aerial vehicle (UAV) was equipped with multispectral and infrared thermal cameras to obtain simultaneously the normalized difference vegetation index (NDVI) and surface temperature (Tsurface) at very high resolution (6 cm ˆ 6 cm). Meteorological variables and surface energy balance components were measured at the time of the UAV overpass (near solar noon). The performance of the RSEB algorithm was evaluated using measurements of H and LE obtained from an eddy correlation system. In addition, estimated values of Rsi and Rn were compared with ground-truth measurements from a four-way net radiometer while those of G were compared with soil heat flux based on flux plates. Results indicated that RSEB algorithm estimated LE and H with errors of 7% and 5%, respectively. Values of the root mean squared error (RMSE) and mean absolute error (MAE) for LE were 50 and 43 W m ́2 while those for H were 56 and 46 W m ́2, respectively. Finally, the RSEB algorithm computed Rsi, Rn and G with error less than 5% and with values of RMSE and MAE less than 38 W m ́2. Results demonstrated that multispectral and thermal cameras placed on an UAV could provide an excellent tool to evaluate the intra-orchard spatial variability of Rn, G, H, LE, NDVI and Tsurface over the tree canopy and soil surface between rows.",
"title": ""
},
{
"docid": "55a37995369fe4f8ddb446d83ac0cecf",
"text": "With the continued proliferation of smart mobile devices, Quick Response (QR) code has become one of the most-used types of two-dimensional code in the world. Aiming at beautifying the visual-unpleasant appearance of QR codes, existing works have developed a series of techniques. However, these works still leave much to be desired, such as personalization, artistry, and robustness. To address these issues, in this paper, we propose a novel type of aesthetic QR codes, SEE (Stylize aEsthEtic) QR code, and a three-stage approach to automatically produce such robust style-oriented codes. Specifically, in the first stage, we propose a method to generate an optimized baseline aesthetic QR code, which reduces the visual contrast between the noise-like black/white modules and the blended image. In the second stage, to obtain art style QR code, we tailor an appropriate neural style transformation network to endow the baseline aesthetic QR code with artistic elements. In the third stage, we design an error-correction mechanism by balancing two competing terms, visual quality and readability, to ensure the performance robust. Extensive experiments demonstrate that SEE QR code has high quality in terms of both visual appearance and robustness, and also offers a greater variety of personalized choices to users.",
"title": ""
},
{
"docid": "770e08dc6a56019d3420a82d9f0e4ea8",
"text": "This paper studies how close random graphs are typically to their expectations. We interpret this question through the concentration of the adjacency and Laplacian matrices in the spectral norm. We study inhomogeneous Erdös-Rényi random graphs on n vertices, where edges form independently and possibly with different probabilities pij . Sparse random graphs whose expected degrees are o(logn) fail to concentrate; the obstruction is caused by vertices with abnormally high and low degrees. We show that concentration can be restored if we regularize the degrees of such vertices, and one can do this in various ways. As an example, let us reweight or remove enough edges to make all degrees bounded above by O(d) where d = maxnpij . Then we show that the resulting adjacency matrix A ′ concentrates with the optimal rate: ‖A′ − EA‖ = O( √ d). Similarly, if we make all degrees bounded below by d by adding weight d/n to all edges, then the resulting Laplacian concentrates with the optimal rate: ‖L(A′)−L(EA′)‖ = O(1/ √ d). Our approach is based on Grothendieck-Pietsch factorization, using which we construct a new decomposition of random graphs. These results improve and considerably simplify the recent work of E. Levina and the authors. We illustrate the concentration results with an application to the community detection problem in the analysis of networks.",
"title": ""
},
{
"docid": "4c2f9f9681a1d3bc6d9a27a59c2a01d6",
"text": "BACKGROUND\nStatin therapy reduces low-density lipoprotein (LDL) cholesterol levels and the risk of cardiovascular events, but whether the addition of ezetimibe, a nonstatin drug that reduces intestinal cholesterol absorption, can reduce the rate of cardiovascular events further is not known.\n\n\nMETHODS\nWe conducted a double-blind, randomized trial involving 18,144 patients who had been hospitalized for an acute coronary syndrome within the preceding 10 days and had LDL cholesterol levels of 50 to 100 mg per deciliter (1.3 to 2.6 mmol per liter) if they were receiving lipid-lowering therapy or 50 to 125 mg per deciliter (1.3 to 3.2 mmol per liter) if they were not receiving lipid-lowering therapy. The combination of simvastatin (40 mg) and ezetimibe (10 mg) (simvastatin-ezetimibe) was compared with simvastatin (40 mg) and placebo (simvastatin monotherapy). The primary end point was a composite of cardiovascular death, nonfatal myocardial infarction, unstable angina requiring rehospitalization, coronary revascularization (≥30 days after randomization), or nonfatal stroke. The median follow-up was 6 years.\n\n\nRESULTS\nThe median time-weighted average LDL cholesterol level during the study was 53.7 mg per deciliter (1.4 mmol per liter) in the simvastatin-ezetimibe group, as compared with 69.5 mg per deciliter (1.8 mmol per liter) in the simvastatin-monotherapy group (P<0.001). The Kaplan-Meier event rate for the primary end point at 7 years was 32.7% in the simvastatin-ezetimibe group, as compared with 34.7% in the simvastatin-monotherapy group (absolute risk difference, 2.0 percentage points; hazard ratio, 0.936; 95% confidence interval, 0.89 to 0.99; P=0.016). Rates of prespecified muscle, gallbladder, and hepatic adverse effects and cancer were similar in the two groups.\n\n\nCONCLUSIONS\nWhen added to statin therapy, ezetimibe resulted in incremental lowering of LDL cholesterol levels and improved cardiovascular outcomes. Moreover, lowering LDL cholesterol to levels below previous targets provided additional benefit. (Funded by Merck; IMPROVE-IT ClinicalTrials.gov number, NCT00202878.).",
"title": ""
},
{
"docid": "ecfb05d557ebe524e3821fcf6ce0f985",
"text": "This paper presents a novel active-source-pump (ASP) circuit technique to significantly lower the ESD sensitivity of ultrathin gate inputs in advanced sub-90nm CMOS technologies. As demonstrated by detailed experimental analysis, an ESD design window expansion of more than 100% can be achieved. This revives conventional ESD solutions for ultrasensitive input protection also enabling low-capacitance RF protection schemes with a high ESD design flexibility at IC-level. ASP IC application examples, and the impact of ASP on normal RF operation performance, are discussed.",
"title": ""
},
{
"docid": "ed0d2151f5f20a233ed8f1051bc2b56c",
"text": "This paper discloses development and evaluation of die attach material using base metals (Cu and Sn) by three different type of composite. Mixing them into paste or sheet shape for die attach, we have confirmed that one of Sn-Cu components having IMC network near its surface has major role to provide robust interconnect especially for high temperature applications beyond 200°C after sintering.",
"title": ""
},
{
"docid": "c4e94803ae52dbbf4ac58831ff381467",
"text": "Dynamic Adaptive Streaming over HTTP (DASH) is broadly deployed on the Internet for live and on-demand video streaming services. Recently, a new version of HTTP was proposed, named HTTP/2. One of the objectives of HTTP/2 is to improve the end-user perceived latency compared to HTTP/1.1. HTTP/2 introduces the possibility for the server to push resources to the client. This paper focuses on using the HTTP/2 protocol and the server push feature to reduce the start-up delay in a DASH streaming session. In addition, the paper proposes a new approach for video adaptation, which consists in estimating the bandwidth, using WebSocket (WS) over HTTP/2, and in making partial adaptation on the server side. Obtained results show that, using the server push feature and WebSocket layered over HTTP/2 allow faster loading time and faster convergence to the nominal state. Proposed solution is studied in the context of a direct client-server HTTP/2 connection. Intermediate caches are not considered in this study.",
"title": ""
},
{
"docid": "df896e48cb4b5a364006b3a8e60a96ac",
"text": "This paper describes a monocular vision based parking-slot-markings recognition algorithm, which is used to automate the target position selection of automatic parking assist system. Peak-pair detection and clustering in Hough space recognize marking lines. Specially, one-dimensional filter in Hough space is designed to utilize a priori knowledge about the characteristics of marking lines in bird's eye view edge image. Modified distance between point and line-segment is used to distinguish guideline from recognized marking line-segments. Once the guideline is successfully recognized, T-shape template matching easily recognizes dividing marking line-segments. Experiments show that proposed algorithm successfully recognizes parking slots even when adjacent vehicles occlude parking-slot-markings severely",
"title": ""
},
{
"docid": "bc4a72d96daf03f861b187fa73f57ff6",
"text": "BACKGROUND\nShort-term preoperative radiotherapy and total mesorectal excision have each been shown to improve local control of disease in patients with resectable rectal cancer. We conducted a multicenter, randomized trial to determine whether the addition of preoperative radiotherapy increases the benefit of total mesorectal excision.\n\n\nMETHODS\nWe randomly assigned 1861 patients with resectable rectal cancer either to preoperative radiotherapy (5 Gy on each of five days) followed by total mesorectal excision (924 patients) or to total mesorectal excision alone (937 patients). The trial was conducted with the use of standardization and quality-control measures to ensure the consistency of the radiotherapy, surgery, and pathological techniques.\n\n\nRESULTS\nOf the 1861 patients randomly assigned to one of the two treatment groups, 1805 were eligible to participate. The overall rate of survival at two years among the eligible patients was 82.0 percent in the group assigned to both radiotherapy and surgery and 81.8 percent in the group assigned to surgery alone (P=0.84). Among the 1748 patients who underwent a macroscopically complete local resection, the rate of local recurrence at two years was 5.3 percent. The rate of local recurrence at two years was 2.4 percent in the radiotherapy-plus-surgery group and 8.2 percent in the surgery-only group (P<0.001).\n\n\nCONCLUSIONS\nShort-term preoperative radiotherapy reduces the risk of local recurrence in patients with rectal cancer who undergo a standardized total mesorectal excision.",
"title": ""
},
{
"docid": "74acfe91e216c8494b7304cff03a8c66",
"text": "Diagnostic accuracy of the talar tilt test is not well established in a chronic ankle instability (CAI) population. Our purpose was to determine the diagnostic accuracy of instrumented and manual talar tilt tests in a group with varied ankle injury history compared with a reference standard of self-report questionnaire. Ninety-three individuals participated, with analysis occurring on 88 (39 CAI, 17 ankle sprain copers, and 32 healthy controls). Participants completed the Cumberland Ankle Instability Tool, arthrometer inversion talar tilt tests (LTT), and manual medial talar tilt stress tests (MTT). The ability to determine CAI status using the LTT and MTT compared with a reference standard was performed. The sensitivity (95% confidence intervals) of LTT and MTT was low [LTT = 0.36 (0.23-0.52), MTT = 0.49 (0.34-0.64)]. Specificity was good to excellent (LTT: 0.72-0.94; MTT: 0.78-0.88). Positive likelihood ratio (+ LR) values for LTT were 1.26-6.10 and for MTT were 2.23-4.14. Negative LR for LTT were 0.68-0.89 and for MTT were 0.58-0.66. Diagnostic odds ratios ranged from 1.43 to 8.96. Both clinical and arthrometer laxity testing appear to have poor overall diagnostic value for evaluating CAI as stand-alone measures. Laxity testing to assess CAI may only be useful to rule in the condition.",
"title": ""
},
{
"docid": "626408161aa06de1cb50253094d4d8f8",
"text": "In this communication, a corporate stacked microstrip and substrate integrated waveguide (SIW) feeding structure is reported to be used to broaden the impedance bandwidth of a <inline-formula> <tex-math notation=\"LaTeX\">$4 \\times 4$ </tex-math></inline-formula> patch array antenna. The proposed array antenna is based on a multilayer printed circuit board structure containing two dielectric substrates and four copper cladding layers. The radiating elements, which consist of slim rectangular patches with surrounding U-shaped parasitic patches, are located on the top layer. Every four radiation elements are grouped together as a <inline-formula> <tex-math notation=\"LaTeX\">$2 \\times 2$ </tex-math></inline-formula> subarray and fed by a microstrip power divider on the next copper layer through metalized blind vias. Four such subarrays are corporate-fed by an SIW feeding network underneath. The design process and analysis of the array antenna are discussed. A prototype of the proposed array antenna is fabricated and measured, showing a good agreement between the simulation and measurement, thus validating the correctness of the design. The measured results indicate that the proposed array antenna exhibits a wide <inline-formula> <tex-math notation=\"LaTeX\">$\\vert \\text {S}_{11}\\vert < -10$ </tex-math></inline-formula> dB bandwidth of 17.7%, i.e., 25.3–30.2 GHz, a peak gain of 16.4 dBi, a high radiation efficiency above 80%, and a good orthogonal polarization discrimination of higher than 30 dB. In addition, the use of low-profile substrate in the SIW feeding network makes this array antenna easier to be integrated directly with millimeter-wave front-end integrated circuits. The demonstrated array antenna can be a good candidate for various <italic>Ka</italic>-band wireless applications, such as 5G, satellite communications and so on.",
"title": ""
},
{
"docid": "e3a766bad255bc3f4ad095cece45c637",
"text": "We introduce a new task called Multimodal Named Entity Recognition (MNER) for noisy user-generated data such as tweets or Snapchat captions, which comprise short text with accompanying images. These social media posts often come in inconsistent or incomplete syntax and lexical notations with very limited surrounding textual contexts, bringing significant challenges for NER. To this end, we create a new dataset for MNER called SnapCaptions (Snapchat image-caption pairs submitted to public and crowd-sourced stories with fully annotated named entities). We then build upon the state-of-the-art Bi-LSTM word/character based NER models with 1) a deep image network which incorporates relevant visual context to augment textual information, and 2) a generic modality-attention module which learns to attenuate irrelevant modalities while amplifying the most informative ones to extract contexts from, adaptive to each sample and token. The proposed MNER model with modality attention significantly outperforms the state-of-the-art text-only NER models by successfully leveraging provided visual contexts, opening up potential applications of MNER on myriads of social media platforms.",
"title": ""
},
{
"docid": "ee81c38d65c6ff2988c5519c77ffb13e",
"text": ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i",
"title": ""
},
{
"docid": "13ac4474f01136b2603f2b7ee9eedf19",
"text": "Teamwork is best achieved when members of the team understand one another. Human-robot collaboration poses a particular challenge to this goal due to the differences between individual team members, both mentally/computationally and physically. One way in which this challenge can be addressed is by developing explicit models of human teammates. Here, we discuss, compare and contrast the many techniques available for modeling human cognition and behavior, and evaluate their benefits and drawbacks in the context of human-robot collaboration.",
"title": ""
},
{
"docid": "832a208d5f0e0c9d965bf6037d002bb3",
"text": "Littering constitutes a major societal problem, and any simple intervention that reduces its prevalence would be widely beneficial. In previous research, we have found that displaying images of watching eyes in the environment makes people less likely to litter. Here, we investigate whether the watching eyes images can be transferred onto the potential items of litter themselves. In two field experiments on a university campus, we created an opportunity to litter by attaching leaflets that either did or did not feature an image of watching eyes to parked bicycles. In both experiments, the watching eyes leaflets were substantially less likely to be littered than control leaflets (odds ratios 0.22-0.32). We also found that people were less likely to litter when there other people in the immediate vicinity than when there were not (odds ratios 0.04-0.25) and, in one experiment but not the other, that eye leaflets only reduced littering when there no other people in the immediate vicinity. We suggest that designing cues of observation into packaging could be a simple but fruitful strategy for reducing littering.",
"title": ""
},
{
"docid": "d1f24e3461ae9bcf9bece544f1ed3bd2",
"text": "The goal of this study was to examine the mediating role of negative emotions in the link between academic stress and Internet addiction among Korean adolescents. We attempted to extend the general strain theory to Internet addiction by exploring psychological pathways from academic stress to Internet addiction using a national and longitudinal panel study. A total of 512 adolescents completed self-reported scales for academic stress, negative emotions, and Internet addiction. We found that academic stress was positively associated with negative emotions and Internet addiction, and negative emotions were positively associated with Internet addiction. Further, the results of structural equation modeling revealed that adolescents’ academic stress had indirectly influenced Internet addiction through negative emotions. The results of this study suggest that adolescents who experience academic stress might be at risk for Internet addiction, particularly when accompanied with negative emotions. These findings provided significant implications for counselors and policymakers to prevent adolescents’ Internet addiction, and extended the general strain theory to Internet addiction which is typically applicable to deviant behavior. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
bba5691f82803fc083a5294303957d67
|
Image-Based Pointing and Tracking for Inertially Stabilized Airborne Camera Platform
|
[
{
"docid": "feee488a72016554ebf982762d51426e",
"text": "Optical imaging sensors, such as television or infrared cameras, collect information about targets or target regions. It is thus necessary to control the sensor's line-of-sight (LOS) to achieve accurate pointing. Maintaining sensor orientation toward a target is particularly challenging when the imaging sensor is carried on a mobile vehicle or when the target is highly dynamic. Controlling an optical sensor LOS with an inertially stabilized platform (ISP) can meet these challenges.A target tracker is a process, typically involving image processing techniques, for detecting targets in optical imagery. This article describes the use and design of ISPs and target trackers for imaging optical sensors.",
"title": ""
}
] |
[
{
"docid": "caa30379a2d0b8be2e1b4ddf6e6602c2",
"text": "Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular in embedded systems. Due to their complexity and huge design space to explore for such systems, CAD tools and frameworks to customize MPSoCs are mandatory. Some academic and industrial frameworks are available to support bus-based MPSoCs, but few works target NoCs as underlying communication architecture. A framework targeting MPSoC customization must provide abstract models to enable fast design space exploration, flexible application mapping strategies, all coupled to features to evaluate the performance of running applications. This paper proposes a framework to customize NoC-based MPSoCs with support to static and dynamic task mapping and C/SystemC simulation models for processors and memories. A simple, specifically designed microkernel executes in each processor, enabling multitasking at the processor level. Graphical tools enable debug and system verification, individualizing data for each task. Practical results highlight the benefit of using dynamic mapping strategies (total execution time reduction) and abstract models (total simulation time reduction without losing accuracy).",
"title": ""
},
{
"docid": "ffbe5d7219abcb5f7cef4be54302e3a0",
"text": "Modern medical care is influenced by two paradigms: 'evidence-based medicine' and 'patient-centered medicine'. In the last decade, both paradigms rapidly gained in popularity and are now both supposed to affect the process of clinical decision making during the daily practice of physicians. However, careful analysis shows that they focus on different aspects of medical care and have, in fact, little in common. Evidence-based medicine is a rather young concept that entered the scientific literature in the early 1990s. It has basically a positivistic, biomedical perspective. Its focus is on offering clinicians the best available evidence about the most adequate treatment for their patients, considering medicine merely as a cognitive-rational enterprise. In this approach the uniqueness of patients, their individual needs and preferences, and their emotional status are easily neglected as relevant factors in decision-making. Patient-centered medicine, although not a new phenomenon, has recently attracted renewed attention. It has basically a humanistic, biopsychosocial perspective, combining ethical values on 'the ideal physician', with psychotherapeutic theories on facilitating patients' disclosure of real worries, and negotiation theories on decision making. It puts a strong focus on patient participation in clinical decision making by taking into account the patients' perspective, and tuning medical care to the patients' needs and preferences. However, in this approach the ideological base is better developed than its evidence base. In modern medicine both paradigms are highly relevant, but yet seem to belong to different worlds. The challenge for the near future is to bring these separate worlds together. The aim of this paper is to give an impulse to this integration. Developments within both paradigms can benefit from interchanging ideas and principles from which eventually medical care will benefit. In this process a key role is foreseen for communication and communication research.",
"title": ""
},
{
"docid": "a9621ae83268a372b2220030c4022a9e",
"text": "A 15-50-GHz two-port quasi-optical scalar network analyzer consisting of a transmitter and receiver built in a planar technology is presented. The network analyzer is based on a Schottky-diode multiplier and mixer integrated inside a planar antenna and fed differentially by a coplanar waveguide transmission line. The antenna is placed on an extended hemispherical high-resistivity silicon substrate lens. The local oscillator signal is swept from 3 to 5 GHz and high-order harmonic mixing in both the up- and down-conversion mode is used to realize the RF bandwidth. The network analyzer has a dynamic range of >;50 dB in a 1-kHz bandwidth, and was successfully used to measure frequency-selective surfaces with f0=20, 30, and 40 GHz and a second-order bandpass response. Furthermore, the system was built with circuits and components for easy scaling to millimeter-wave frequencies, which is the primary motivation for this work.",
"title": ""
},
{
"docid": "5fa0bc1f4a7f9573e90790d751bbfc6d",
"text": "The online shopping is increasingly being accepted Internet users, which reflects the online shopping convenient, fast, efficient and economic advantage. Online shopping, personal information security is a major problem in the Internet. This article summarizes the characteristics of online shopping and the current development of the main safety problems, and make online shopping related security measures and transactions.",
"title": ""
},
{
"docid": "c929a8b6ff4d654a488b5e189b2b61dc",
"text": "Human neural progenitors derived from pluripotent stem cells develop into electrophysiologically active neurons at heterogeneous rates, which can confound disease-relevant discoveries in neurology and psychiatry. By combining patch clamping, morphological and transcriptome analysis on single-human neurons in vitro, we defined a continuum of poor to highly functional electrophysiological states of differentiated neurons. The strong correlations between action potentials, synaptic activity, dendritic complexity and gene expression highlight the importance of methods for isolating functionally comparable neurons for in vitro investigations of brain disorders. Although whole-cell electrophysiology is the gold standard for functional evaluation, it often lacks the scalability required for disease modeling studies. Here, we demonstrate a multimodal machine-learning strategy to identify new molecular features that predict the physiological states of single neurons, independently of the time spent in vitro. As further proof of concept, we selected one of the potential neurophysiological biomarkers identified in this study—GDAP1L1—to isolate highly functional live human neurons in vitro.",
"title": ""
},
{
"docid": "c635f2ad65cd74c137910661aeb0ab3d",
"text": "Scholarly research on the topic of leadership has witnessed a dramatic increase over the last decade, resulting in the development of diverse leadership theories. To take stock of established and developing theories since the beginning of the new millennium, we conducted an extensive qualitative review of leadership theory across 10 top-tier academic publishing outlets that included The Leadership Quarterly, Administrative Science Quarterly, American Psychologist, Journal of Management, Academy of Management Journal, Academy of Management Review, Journal of Applied Psychology, Organizational Behavior and Human Decision Processes, Organizational Science, and Personnel Psychology. We then combined two existing frameworks (Gardner, Lowe, Moss, Mahoney, & Cogliser, 2010; Lord & Dinh, 2012) to provide a processoriented framework that emphasizes both forms of emergence and levels of analysis as a means to integrate diverse leadership theories. We then describe the implications of the findings for future leadership research and theory.",
"title": ""
},
{
"docid": "d80a58ef393c1f311a829190d7981853",
"text": "With the increasing numbers of Cloud Service Providers and the migration of the Grids to the Cloud paradigm, it is necessary to be able to leverage these new resources. Moreover, a large class of High Performance Computing (hpc) applications can run these resources without (or with minor) modifications. But using these resources come with the cost of being able to interact with these new resource providers. In this paper we introduce the design of a hpc middleware that is able to use resources coming from an environment that compose of multiple Clouds as well as classical hpc resources. Using the Diet middleware, we are able to deploy a large-scale, distributed hpc platform that spans across a large pool of resources aggregated from different providers. Furthermore, we hide to the end users the difficulty and complexity of selecting and using these new resources even when new Cloud Service Providers are added to the pool. Finally, we validate the architecture concept through cosmological simulation ramses. Thus we give a comparison of 2 well-known Cloud Computing Software: OpenStack and OpenNebula. Key-words: Cloud, IaaS, OpenNebula, Multi-Clouds, DIET, OpenStack, RAMSES, cosmology ∗ ENS de Lyon, France, Email: FirstName.LastName@ens-lyon.fr † ENSI de Bourges, France, Email: FirstName.LastName@ensi-bourges.fr ‡ INRIA, France, Email: FirstName.LastName@inria.fr Comparison on OpenStack and OpenNebula performance to improve multi-Cloud architecture on cosmological simulation use case Résumé : Avec l’augmentation du nombre de fournisseurs de service Cloud et la migration des applications depuis les grilles de calcul vers le Cloud, il est ncessaire de pouvoir tirer parti de ces nouvelles ressources. De plus, une large classe des applications de calcul haute performance peuvent s’excuter sur ces ressources sans modifications (ou avec des modifications mineures). Mais utiliser ces ressources vient avec le cot d’tre capable d’intragir avec des nouveaux fournisseurs de ressources. Dans ce papier, nous introduisons la conception d’un nouveau intergiciel hpc qui permet d’utiliser les ressources qui proviennent d’un environement compos de plusieurs Clouds comme des ressources classiques. En utilisant l’intergiciel Diet, nous sommes capable de dployer une plateforme hpc distribue et large chelle qui s’tend sur un large ensemble de ressources aggrges entre plusieurs fournisseurs Cloud. De plus, nous cachons l’utilisateur final la difficult et la complexit de slectionner et d’utiliser ces nouvelles ressources quand un nouveau fournisseur de service Cloud est ajout dans l’ensemble. Finalement, nous validons notre concept d’architecture via une application de simulation cosmologique ramses. Et nous fournissons une comparaison entre 2 intergiciels de Cloud: OpenStack et OpenNebula. Mots-clés : Cloud, IaaS, OpenNebula, Multi-Clouds, DIET, OpenStack, RAMSES, cosmologie Comparaison de performance entre OpenStack et OpenNebula et les architectures multi-Cloud: Application la cosmologie.3",
"title": ""
},
{
"docid": "205fbeb5b52bbc85c2b434931f49d6fe",
"text": "The Psychological Inventory of Criminal Thinking Styles (PICTS) is an 80-item self-report measure designed to assess crime-supporting cognitive patterns. Data from men (N = 450) and women (N = 227) offenders indicate that the PICTS thinking, validity, and content scales possess moderate to moderately high internal consistency and test-retest stability. Meta-analyses of studies in which the PICTS has been administered reveal that besides correlating with measures of past criminality, several of the PICTS thinking and content scales are capable of predicting future adjustment/release outcome at a low but statistically significant level, and two scales (En, CUR) are sensitive to program-assisted change beyond what control subjects achieve spontaneously. The factor structure of the PICTS is then examined with the aid of exploratory and confirmatory factor analysis, the results of which denote the presence of two major and two minor factors.",
"title": ""
},
{
"docid": "4261755b137a5cde3d9f33c82bc53cd7",
"text": "We study the problem of automatically extracting information networks formed by recognizable entities as well as relations among them from social media sites. Our approach consists of using state-of-the-art natural language processing tools to identify entities and extract sentences that relate such entities, followed by using text-clustering algorithms to identify the relations within the information network. We propose a new term-weighting scheme that significantly improves on the state-of-the-art in the task of relation extraction, both when used in conjunction with the standard tf ċ idf scheme and also when used as a pruning filter. We describe an effective method for identifying benchmarks for open information extraction that relies on a curated online database that is comparable to the hand-crafted evaluation datasets in the literature. From this benchmark, we derive a much larger dataset which mimics realistic conditions for the task of open information extraction. We report on extensive experiments on both datasets, which not only shed light on the accuracy levels achieved by state-of-the-art open information extraction tools, but also on how to tune such tools for better results.",
"title": ""
},
{
"docid": "9c510d7ddeb964c5d762d63d9e284f44",
"text": "This paper explains the rationale for the development of reconfigurable manufacturing systems, which possess the advantages both of dedicated lines and of flexible systems. The paper defines the core characteristics and design principles of reconfigurable manufacturing systems (RMS) and describes the structure recommended for practical RMS with RMS core characteristics. After that, a rigorous mathematical method is introduced for designing RMS with this recommended structure. An example is provided to demonstrate how this RMS design method is used. The paper concludes with a discussion of reconfigurable assembly systems. © 2011 The Society of Manufacturing Engineers. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d4acd79e2fdbc9b87b2dbc6ebfa2dd43",
"text": "Airbnb, an online marketplace for accommodations, has experienced a staggering growth accompanied by intense debates and scattered regulations around the world. Current discourses, however, are largely focused on opinions rather than empirical evidences. Here, we aim to bridge this gap by presenting the first large-scale measurement study on Airbnb, using a crawled data set containing 2.3 million listings, 1.3 million hosts, and 19.3 million reviews. We measure several key characteristics at the heart of the ongoing debate and the sharing economy. Among others, we find that Airbnb has reached a global yet heterogeneous coverage. The majority of its listings across many countries are entire homes, suggesting that Airbnb is actually more like a rental marketplace rather than a spare-room sharing platform. Analysis on star-ratings reveals that there is a bias toward positive ratings, amplified by a bias toward using positive words in reviews. The extent of such bias is greater than Yelp reviews, which were already shown to exhibit a positive bias. We investigate a key issue - commercial hosts who own multiple listings on Airbnb - repeatedly discussed in the current debate. We find that their existence is prevalent, they are early movers towards joining Airbnb, and their listings are disproportionately entire homes and located in the US. Our work advances the current understanding of how Airbnb is being used and may serve as an independent and empirical reference to inform the debate.",
"title": ""
},
{
"docid": "de22f2f15cc427b50d4018e8c44df7e4",
"text": "In this paper we examine challenges identified with participatory design research in the developing world and develop the postcolonial notion of cultural hybridity as a sensitizing concept. While participatory design intentionally addresses power relationships, its methodology does not to the same degree cover cultural power relationships, which extend beyond structural power and voice. The notion of cultural hybridity challenges the static cultural binary opposition between the self and the other, Western and non-Western, or the designer and the user---offering a more nuanced approach to understanding the malleable nature of culture. Drawing from our analysis of published literature in the participatory design community, we explore the complex relationship of participatory design to international development projects and introduce postcolonial cultural hybridity via postcolonial theory and its application within technology design thus far. Then, we examine how participatory approaches and cultural hybridity may interact in practice and conclude with a set of sensitizing insights and topics for further discussion in the participatory design community.",
"title": ""
},
{
"docid": "7e08ddffc3a04c6dac886e14b7e93907",
"text": "The paper introduces a penalized matrix estimation procedure aiming at solutions which are sparse and low-rank at the same time. Such structures arise in the context of social networks or protein interactions where underlying graphs have adjacency matrices which are block-diagonal in the appropriate basis. We introduce a convex mixed penalty which involves `1-norm and trace norm simultaneously. We obtain an oracle inequality which indicates how the two effects interact according to the nature of the target matrix. We bound generalization error in the link prediction problem. We also develop proximal descent strategies to solve the optimization problem efficiently and evaluate performance on synthetic and real data sets.",
"title": ""
},
{
"docid": "10bd4900b81375e0d89b202cb5a01e4b",
"text": "We introduce a network that directly predicts the 3D layout of lanes in a road scene from a single image. This work marks a first attempt to address this task with onboard sensing instead of relying on pre-mapped environments. Our network architecture, 3D-LaneNet, applies two new concepts: intra-network inverse-perspective mapping (IPM) and anchor-based lane representation. The intranetwork IPM projection facilitates a dual-representation information flow in both regular image-view and top-view. An anchor-per-column output representation enables our endto-end approach replacing common heuristics such as clustering and outlier rejection. In addition, our approach explicitly handles complex situations such as lane merges and splits. Promising results are shown on a new 3D lane synthetic dataset. For comparison with existing methods, we verify our approach on the image-only tuSimple lane detection benchmark and reach competitive performance.",
"title": ""
},
{
"docid": "8e64738b0d21db1ec5ef0220507f3130",
"text": "Automatic clothes search in consumer photos is not a trivial problem as photos are usually taken under completely uncontrolled realistic imaging conditions. In this paper, a novel framework is presented to tackle this issue by leveraging low-level features (e.g., color) and high-level features (attributes) of clothes. First, a content-based image retrieval(CBIR) approach based on the bag-of-visual-words (BOW) model is developed as our baseline system, in which a codebook is constructed from extracted dominant color patches. A reranking approach is then proposed to improve search quality by exploiting clothes attributes, including the type of clothes, sleeves, patterns, etc. The experiments on photo collections show that our approach is robust to large variations of images taken in unconstrained environment, and the reranking algorithm based on attribute learning significantly improves retrieval performance in combination with the proposed baseline.",
"title": ""
},
{
"docid": "ad04cbfece29e39b28a6f83d1072c19e",
"text": "Correlation and regression are different, but not mutually exclusive, techniques. Roughly, regression is used for prediction (which does not extrapolate beyond the data used in the analysis) whereas correlation is used to determine the degree of association. There situations in which the x variable is not fixed or readily chosen by the experimenter, but instead is a random covariate to the y variable. This paper shows the relationships between the coefficient of determination, the multiple correlation coefficient, the covariance, the correlation coefficient and the coefficient of alienation, for the case of two related variables x and y. It discusses the uses of the correlation coefficient r , either as a way to infer correlation, or to test linearity. A number of graphical examples are provided as well as examples of actual chemical applications. The paper recommends the use of z Fisher transformation instead of r values because r is not normally distributed but z is (at least in approximation). For either correlation or for regression models, the same expressions are valid, although they differ significantly in meaning.",
"title": ""
},
{
"docid": "24b5c8aee05ac9be61d9217a49e3d3b0",
"text": "People have different intents in using online platforms. They may be trying to accomplish specific, short-term goals, or less well-defined, longer-term goals. While understanding user intent is fundamental to the design and personalization of online platforms, little is known about how intent varies across individuals, or how it relates to their behavior. Here, we develop a framework for understanding intent in terms of goal specificity and temporal range. Our methodology combines survey-based methodology with an observational analysis of user activity. Applying this framework to Pinterest, we surveyed nearly 6000 users to quantify their intent, and then studied their subsequent behavior on the web site. We find that goal specificity is bimodal – users tend to be either strongly goal-specific or goalnonspecific. Goal-specific users search more and consume less content in greater detail than goal-nonspecific users: they spend more time using Pinterest, but are less likely to return in the near future. Users with short-term goals are also more focused and more likely to refer to past saved content than users with long-term goals, but less likely to save content for the future. Further, intent can vary by demographic, and with the topic of interest. Last, we show that user’s intent and activity are intimately related by building a model that can predict a user’s intent for using Pinterest after observing their activity for only two minutes. Altogether, this work shows how intent can be predicted from user behavior.",
"title": ""
},
{
"docid": "07ff0274408e9ba5d6cd2b1a2cb7cbf8",
"text": "Though tremendous strides have been made in object recognition, one of the remaining open challenges is detecting small objects. We explore three aspects of the problem in the context of finding small faces: the role of scale invariance, image resolution, and contextual reasoning. While most recognition approaches aim to be scale-invariant, the cues for recognizing a 3px tall face are fundamentally different than those for recognizing a 300px tall face. We take a different approach and train separate detectors for different scales. To maintain efficiency, detectors are trained in a multi-task fashion: they make use of features extracted from multiple layers of single (deep) feature hierarchy. While training detectors for large objects is straightforward, the crucial challenge remains training detectors for small objects. We show that context is crucial, and define templates that make use of massively-large receptive fields (where 99% of the template extends beyond the object of interest). Finally, we explore the role of scale in pre-trained deep networks, providing ways to extrapolate networks tuned for limited scales to rather extreme ranges. We demonstrate state-of-the-art results on massively-benchmarked face datasets (FDDB and WIDER FACE). In particular, when compared to prior art on WIDER FACE, our results reduce error by a factor of 2 (our models produce an AP of 82% while prior art ranges from 29-64%).",
"title": ""
},
{
"docid": "3a3c0c21d94c2469bd95a103a9984354",
"text": "Recently it was shown that the problem of Maximum Inner Product Search (MIPS) is efficient and it admits provably sub-linear hashing algorithms. Asymmetric transformations before hashing were the key in solving MIPS which was otherwise hard. In [18], the authors use asymmetric transformations which convert the problem of approximate MIPS into the problem of approximate near neighbor search which can be efficiently solved using hashing. In this work, we provide a different transformation which converts the problem of approximate MIPS into the problem of approximate cosine similarity search which can be efficiently solved using signed random projections. Theoretical analysis show that the new scheme is significantly better than the original scheme for MIPS. Experimental evaluations strongly support the theoretical findings.",
"title": ""
},
{
"docid": "7850b3ca05b092a3f8476c5f45a712ab",
"text": "The paper presents the application of the classifier fusion to identify electrical appliances present in the household. The analysis is based on the processing of features extracted from the current signal, recorded by the dedicated data acquisition hardware, installed in the vicinity of the energy meter. The selected features are based on the medium frequency measurements. The proposed identification module uses three classifiers of the same type: decision trees, rules and random forest. Experimental results prove the effectiveness of combining various classifiers to the same task and show their advantages and drawbacks. Keywords—NIALM; artificial intelligence; classification; medium frequency measurements",
"title": ""
}
] |
scidocsrr
|
44caae6ddbfd3b3d011bb14a7e66591b
|
ConceptNet at SemEval-2017 Task 2: Extending Word Embeddings with Multilingual Relational Knowledge
|
[
{
"docid": "992fe771f3fd40cfe4399d7f8aa7822d",
"text": "Machine learning about language can be improved by supplying it with specific knowledge and sources of external information. We present here a new version of the linked open data resource ConceptNet that is particularly well suited to be used with modern NLP techniques such as word embeddings. ConceptNet is a knowledge graph that connects words and phrases of natural language with labeled edges. Its knowledge is collected from many sources that include expertcreated resources, crowd-sourcing, and games with a purpose. It is designed to represent the general knowledge involved in understanding language, improving natural language applications by allowing the application to better understand the meanings behind the words people use. When ConceptNet is combined with word embeddings acquired from distributional semantics (such as word2vec), it provides applications with understanding that they would not acquire from distributional semantics alone, nor from narrower resources such as WordNet or DBPedia. We demonstrate this with state-of-the-art results on intrinsic evaluations of word relatedness that translate into improvements on applications of word vectors, including solving SAT-style analogies.",
"title": ""
},
{
"docid": "fb1d84d15fd4a531a3a81c254ad3cab0",
"text": "Word embeddings have recently gained considerable popularity for modeling words in different Natural Language Processing (NLP) tasks including semantic similarity measurement. However, notwithstanding their success, word embeddings are by their very nature unable to capture polysemy, as different meanings of a word are conflated into a single representation. In addition, their learning process usually relies on massive corpora only, preventing them from taking advantage of structured knowledge. We address both issues by proposing a multifaceted approach that transforms word embeddings to the sense level and leverages knowledge from a large semantic network for effective semantic similarity measurement. We evaluate our approach on word similarity and relational similarity frameworks, reporting state-of-the-art performance on multiple datasets.",
"title": ""
}
] |
[
{
"docid": "2fbfe1fa8cda571a931b700cbb18f46e",
"text": "A low-noise front-end and its controller are proposed for capacitive touch screen panels. The proposed front-end circuit based on a ΔΣ ADC uses differential sensing and integration scheme to maximize the input dynamic range. In addition, supply and internal reference voltage noise are effectively removed in the sensed touch signal. Furthermore, the demodulation process in front of the ΔΣ ADC provides the maximized oversampling ratio (OSR) so that the scan rate can be increased at the targeted resolution. The proposed IC is implemented in a mixed-mode 0.18-μm CMOS process. The measurement is performed on a bar-patterned 4.3-inch touch screen panel with 12 driving lines and 8 sensing channels. The report rate is 100 Hz, and SNR and spatial jitter are 54 dB and 0.11 mm, respectively. The chip area is 3 × 3 mm2 and total power consumption is 2.9 mW with 1.8-V and 3.3-V supply.",
"title": ""
},
{
"docid": "fef45863bc531960dbf2a7783995bfdb",
"text": "The main goal of facial attribute recognition is to determine various attributes of human faces, e.g. facial expressions, shapes of mouth and nose, headwears, age and race, by extracting features from the images of human faces. Facial attribute recognition has a wide range of potential application, including security surveillance and social networking. The available approaches, however, fail to consider the correlations and heterogeneities between different attributes. This paper proposes that by utilizing these correlations properly, an improvement can be achieved on the recognition of different attributes. Therefore, we propose a facial attribute recognition approach based on the grouping of different facial attribute tasks and a multi-task CNN structure. Our approach can fully utilize the correlations between attributes, and achieve a satisfactory recognition result on a large number of attributes with limited amount of parameters. Several modifications to the traditional architecture have been tested in the paper, and experiments have been conducted to examine the effectiveness of our approach.",
"title": ""
},
{
"docid": "9ec6d61511a4533a1622d8b3234fe59d",
"text": "With the development of Web 2.0, many studies have tried to analyze tourist behavior utilizing user-generated contents. The primary purpose of this study is to propose a topic-based sentiment analysis approach, including a polarity classification and an emotion classification. We use the Latent Dirichlet Allocation model to extract topics from online travel review data and analyze the sentiments and emotions for each topic with our proposed approach. The top frequent words are extracted for each topic from online reviews on Ctrip.com. By comparing the relative importance of each topic, we conclude that many tourists prefer to provide “suggestion” reviews. In particular, we propose a new approach to classify the emotions of online reviews at the topic level utilizing an emotion lexicon, focusing on specific emotions to analyze customer complaints. The results reveal that attraction “management” obtains most complaints. These findings may provide useful insights for the development of attractions and the measurement of online destination image. Our proposed method can be used to analyze reviews from many online platforms and domains.",
"title": ""
},
{
"docid": "7e683f15580e77b1e207731bb73b8107",
"text": "The skeleton is essential for general shape representation. The commonly required properties of a skeletonization algorithm are that the extracted skeleton should be accurate; robust to noise, position and rotation; able to reconstruct the original object; and able to produce a connected skeleton in order to preserve its topological and hierarchical properties. However, the use of a discrete image presents a lot of problems that may in9uence the extraction of the skeleton. Moreover, most of the methods are memory-intensive and computationally intensive, and require a complex data structure. In this paper, we propose a fast, e;cient and accurate skeletonization method for the extraction of a well-connected Euclidean skeleton based on a signed sequential Euclidean distance map. A connectivity criterion is proposed, which can be used to determine whether a given pixel is a skeleton point independently. The criterion is based on a set of point pairs along the object boundary, which are the nearest contour points to the pixel under consideration and its 8 neighbors. Our proposed method generates a connected Euclidean skeleton with a single pixel width without requiring a linking algorithm or iteration process. Experiments show that the runtime of our algorithm is faster than the distance transformation and is linearly proportional to the number of pixels of an image. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "39a36a96f354977d137ff736486c37a3",
"text": "In a class of games known as Stackelberg games, one agent (the leader) must commit to a strategy that can be observed by the other agent (the follower or adversary) before the adversary chooses its own strategy. We consider Bayesian Stackelberg games, in which the leader is uncertain about the types of adversary it may face. Such games are important in security domains, where, for example, a security agent (leader) must commit to a strategy of patrolling certain areas, and a robber (follower) has a chance to observe this strategy over time before choosing its own strategy of where to attack. This paper presents an efficient exact algorithm for finding the optimal strategy for the leader to commit to in these games. This algorithm, DOBSS, is based on a novel and compact mixed-integer linear programming formulation. Compared to the most efficient algorithm known previously for this problem, DOBSS is not only faster, but also leads to higher quality solutions, and does not suffer from problems of infeasibility that were faced by this previous algorithm. Note that DOBSS is at the heart of the ARMOR system that is currently being tested for security scheduling at the Los Angeles International Airport.",
"title": ""
},
{
"docid": "a1aefb622a2beae48f4a1cf5ef96daa0",
"text": "Clarke's classification of situational crime prevention techniques is designed to provide a conceptual analysis of situational strategies, and to offer practical guidance on their use in reducing criminal opportunities. It has developed in parallel with a long program of empirical research, conducted by many researchers, on the situational determinants and the prevention of a wide variety of crimes. For this reason the classification has been subject to constant revision and updating, of which Clarke's (1997) version, which lists 16 such techniques, is the latest. Recently, Wortley (2001) has suggested the need to augment the existing classification, which deals with the analysis of situational opportunities, with a complementary analysis of situational precipitators. These are factors within the crime setting itself that may prompt, provoke, pressure, or permit an individual to offend. The present chapter examines the assumptions underlying the development of situational crime prevention, and offers some views about the theoretical and practical significance of Wortley's suggested additions and revisions. It concludes by proposing a revised classificaCrime Prevention Studies, vol. 16 (2003), pp.41-96. Derek B. Cornish and Ronald V. Clarke tion of 25 techniques to take immediate practical account of some of the concerns raised above. A NEW CRITIQUE OF SITUATIONAL CRIME PREVENTION Until recently, Clarke's (1992, 1997; Clarke and Homel, 1997) classification of situational techniques has been seen as providing a systematic and comprehensive review of methods of environmental crime prevention, and has served to guide practical efforts to reduce offending. Such criticism as has been made of situational techniques has tended to concentrate on their alleged failure to tackle the root causes of crime — that is, to address issues of criminal motivation, and to support programs of social and individual crime prevention — or on the putative threats they pose to civil liberties. These issues have been exhaustively explored elsewhere (von Hirsch et al., 2000) and will not be further examined in this chapter. Recently, however, in a series of carefully argued papers (1996, 1997, 1998, 2001) and in his book, Situational Prison Control (2002), Wortley has offered a challenge from within the field of environmental psychology itself to the theory and practice of situational crime prevention. Wortley s critique centers on what he views as the undue and potentially damaging preoccupation with opportunity variables when discussing offender decision making and situational prevention. He contrasts this with the relative neglect of other situational forces (termed \"precipitators\") within the crime setting that serve to motivate offenders. He identifies four types of precipitator — prompts; pressures; permissions; and provocations — each of which may provide situationally-generated motivation to the hitherto unmotivated. He goes on to offer a two-stage model of situational crime prevention that views offending as the outcome of two sets of situational forces: precipitating factors and regulating factors. Temporal priority is given to the influence of precipitators in motivating the offender, these being followed by the influence of opportunities in regulating whether or not offending actually occurs. 
He concludes that controlling precipitators is just as important as regulating opportunities, and provides an additional and complementary set of situational crime prevention techniques to control precipitators, claiming that these supply the missing half of a new and more comprehensive situational approach to crime prevention practice. Lastly, he suggests that the development of such a situational framework might better explain and minimize the iatrogenic effects of situational measures in some circumstances.",
"title": ""
},
{
"docid": "56dda298f1033dc3bd381d525678b904",
"text": "This study was undertaken to characterize functions of the outer membrane protein OmpW, which potentially contributes to the development of colistin- and imipenem-resistance in Acinetobacter baumannii. Reconstitution of OmpW in artificial lipid bilayers showed that it forms small channels (23 pS in 1 m KCl) and markedly interacts with iron and colistin, but not with imipenem. In vivo, (55) Fe uptake assays comparing the behaviours of ΔompW mutant and wild-type strains confirmed a role for OmpW in A. baumannii iron homeostasis. However, the loss of OmpW expression did not have an impact on A. baumannii susceptibilities to colistin or imipenem.",
"title": ""
},
{
"docid": "213387a29384a2974b09bfef3085e63e",
"text": "The ease of creating image forgery using image-splicing techniques will soon make our naive trust on image authenticity a tiling of the past. In prior work, we observed the capability of the bicoherence magnitude and phase features for image splicing detection. To bridge the gap between empirical observations and theoretical justifications, in this paper, an image-splicing model based on the idea of bipolar signal perturbation is proposed and studied. A theoretical analysis of the model leads to propositions and predictions consistent with the empirical observations.",
"title": ""
},
{
"docid": "d63543712b2bebfbd0ded148225bb289",
"text": "This paper surveys recent literature in the area of Neural Network, Data Mining, Hidden Markov Model and Neuro-Fuzzy system used to predict the stock market fluctuation. Neural Networks and Neuro-Fuzzy systems are identified to be the leading machine learning techniques in stock market index prediction area. The Traditional techniques are not cover all the possible relation of the stock price fluctuations. There are new approaches to known in-depth of an analysis of stock price variations. NN and Markov Model can be used exclusively in the finance markets and forecasting of stock price. In this paper, we propose a forecasting method to provide better an accuracy rather traditional method. Forecasting stock return is an important financial subject that has attracted researchers’ attention for many years. It involves an assumption that fundamental information publicly available in the past has some predictive relationships to the future stock returns.",
"title": ""
},
{
"docid": "0409255c0804b7ec8332c24b8a8e5806",
"text": "The major aim of this research study was to explore the relationship between test anxiety and academic achievement of students at the post graduate level. A sample of 414 students was randomly selected from seven different science departments in a public sector university in Lahore, Pakistan. Data were collected by using the Test Anxiety Inventory (TAI) developed by Spielberger. Pearson correlation, multivariate statistics and regression analyses were run for data analysis. It was found that a significant negative relationship exists between test anxiety scores and students’ achievement scores. Results showed that a cognitive factor (worry) contributes more in test anxiety than affective factors (emotional). Therefore, it is concluded that test anxiety is one of the factors which are responsible for students’ underachievement and low performance but it can be managed by appropriate training of students in dealing with factors causing test anxiety.",
"title": ""
},
{
"docid": "f8c977ef1fbc72b2be781666c353bdd8",
"text": "The sociolinguistic construct of stancetaking describes the activities through which discourse participants create and signal relationships to their interlocutors, to the topic of discussion, and to the talk itself. Stancetaking underlies a wide range of interactional phenomena, relating to formality, politeness, affect, and subjectivity. We present a computational approach to stancetaking, in which we build a theoretically-motivated lexicon of stance markers, and then use multidimensional analysis to identify a set of underlying stance dimensions. We validate these dimensions intrinsically and extrinsically, showing that they are internally coherent, match pre-registered hypotheses, and correlate with social phenomena.",
"title": ""
},
{
"docid": "9eed08fb9d5d8ae1085f3a615dc5a396",
"text": "This paper describes a compact high-performance orthomode transducer (OMT) with a circular waveguide input and two rectangular waveguide outputs based on the superimposition of three aluminum blocks. Several prototypes operating in the band 1 (31-45 GHz) of the atacama large millimeter array have been fabricated and measured. The design is based on the use of a turnstile junction that is machined in a single block, requiring neither alignment nor a high degree of mechanical tolerances. Thus, a high repeatability of the design is possible for mass production. Across the 31-45 GHz band, the isolation is better than 50 dB and the return losses at the input and outputs of the OMT are better than -25 dB.",
"title": ""
},
{
"docid": "d4da4c9bc129a15a8f7b7094216bc4b2",
"text": "This paper presents a physical description of two specific aspects in drain-extended MOS transistors, i.e., quasi-saturation and impact-ionization effects. The 2-D device simulator Medici provides the physical insights, and both the unique features are originally attributed to the Kirk effect. The transistor dc model is derived from regional analysis of carrier transport in the intrinsic MOS and the drift region. The substrate-current equations, considering extra impact-ionization factors in the drift region, are also rigorously derived. The proposed model is primarily validated by MATLAB program and exhibits excellent scalability for various transistor dimensions, drift-region doping concentration, and voltage-handling capability.",
"title": ""
},
{
"docid": "00d62621cdf9bf8553660cb6daa71c7e",
"text": "Management of data imprecision and uncertainty has become increasingly important, especially in situation awareness and assessment applications where reliability of the decision-making process is critical (e.g., in military battlefields). These applications require the following: 1) an effective methodology for modeling data imperfections and 2) procedures for enabling knowledge discovery and quantifying and propagating partial or incomplete knowledge throughout the decision-making process. In this paper, using a Dempster-Shafer belief-theoretic relational database (DS-DB) that can conveniently represent a wider class of data imperfections, an association rule mining (ARM)-based classification algorithm possessing the desirable functionality is proposed. For this purpose, various ARM-related notions are revisited so that they could be applied in the presence of data imperfections. A data structure called belief itemset tree is used to efficiently extract frequent itemsets and generate association rules from the proposed DS-DB. This set of rules is used as the basis on which an unknown data record, whose attributes are represented via belief functions, is classified. These algorithms are validated on a simplified situation assessment scenario where sensor observations may have caused data imperfections in both attribute values and class labels.",
"title": ""
},
{
"docid": "9548bd2e37fdd42d09dc6828ac4675f9",
"text": "Recent years have seen increasing interest in ranking elite athletes and teams in professional sports leagues, and in predicting the outcomes of games. In this work, we draw an analogy between this problem and one in the field of search engine optimization, namely, that of ranking webpages on the Internet. Motivated by the famous PageRank algorithm, our TeamRank methods define directed graphs of sports teams based on the observed outcomes of individual games, and use these networks to infer the importance of teams that determines their rankings. In evaluating these methods on data from recent seasons in the National Football League (NFL) and National Basketball Association (NBA), we find that they can predict the outcomes of games with up to 70% accuracy, and that they provide useful rankings of teams that cluster by league divisions. We also propose some extensions to TeamRank that consider overall team win records and shifts in momentum over time.",
"title": ""
},
{
"docid": "f331cb6d4b970829100bfe103a8d8762",
"text": "This paper presents lessons learned from an experiment to reverse engineer a program. A reverse engineering process was used as part of a project to develop an Ada implementation of a Fortran program and upgrade the existing documentation. To accomplish this, design information was extracted from the Fortran source code and entered into a software development environment. The extracted design information was used to implement a new version of the program written in Ada. This experiment revealed issues about recovering design information, such as, separating design details from implementation details, dealing with incomplete or erroneous information, traceability of information between implementation and recovered design, and re-engineering. The reverse engineering process used to recover the design, and the experience gained during the study are reported.",
"title": ""
},
{
"docid": "cbe60b440d1fe792bf6173e9be409958",
"text": "This paper addresses the four enabling technologies, namely multi-user sparse code multiple access (SCMA), content caching, energy harvesting, and physical layer security for proposing an energy and spectral efficient resource allocation algorithm for the access and backhaul links in heterogeneous cellular networks. Although each of the above mentioned issues could be a topic of research, in a real situation, we would face a complicated scenario where they should be considered jointly, and hence, our target is to consider these technologies jointly in a unified framework. Moreover, we propose two novel content delivery scenarios: 1) single frame content delivery (SFCD), and 2) multiple frames content delivery (MFCD), where the time duration of serving user requests is divided into several frames. In the first scenario, the requested content by each user is served over one frame. However, in the second scenario, the requested content by each user can be delivered over several frames. We formulate the resource allocation for the proposed scenarios as optimization problems where our main aim is to maximize the energy efficiency of access links subject to the transmit power and rate constraints of access and backhaul links, caching and energy harvesting constraints, and SCMA codebook allocation limitations. Due to the practical limitations, we assume that the channel state information values between eavesdroppers and base stations are uncertain and design the network for the worst case scenario. Since the corresponding optimization problems are mixed integer non-linear and nonconvex programming, NP-hard, and intractable, we propose an iterative algorithm based on the well-known alternate and successive convex approximation methods. In addition, the proposed algorithms are studied from the computational complexity, convergence, and performance perspectives. Moreover, the proposed caching scheme outperforms the existing traditional caching schemes like random caching and most popular caching. We also study the effect of joint and disjoint considerations of enabling technologies for the performance of nextgeneration networks. We also show that the proposed caching strategy, MFCD and joint solutions have 43%, 9.4% and %51.3 performance gain compared to no cahcing, SFCD and disjoint solutions, respectively.",
"title": ""
},
{
"docid": "19d29667e1632ff6f0a7446de22cdb84",
"text": "Chronic kidney disease (CKD) is defined by persistent urine abnormalities, structural abnormalities or impaired excretory renal function suggestive of a loss of functional nephrons. The majority of patients with CKD are at risk of accelerated cardiovascular disease and death. For those who progress to end-stage renal disease, the limited accessibility to renal replacement therapy is a problem in many parts of the world. Risk factors for the development and progression of CKD include low nephron number at birth, nephron loss due to increasing age and acute or chronic kidney injuries caused by toxic exposures or diseases (for example, obesity and type 2 diabetes mellitus). The management of patients with CKD is focused on early detection or prevention, treatment of the underlying cause (if possible) to curb progression and attention to secondary processes that contribute to ongoing nephron loss. Blood pressure control, inhibition of the renin–angiotensin system and disease-specific interventions are the cornerstones of therapy. CKD complications such as anaemia, metabolic acidosis and secondary hyperparathyroidism affect cardiovascular health and quality of life, and require diagnosis and treatment.",
"title": ""
},
{
"docid": "4816f5155450af9c95ac6910aad7379c",
"text": "In this paper, a novel high step-up converter is proposed for fuel-cell system applications. As an illustration, a two-phase version configuration is given for demonstration. First, an interleaved structure is adapted for reducing input and output ripples. Then, a C¿uk-type converter is integrated to the first phase to achieve a much higher voltage conversion ratio and avoid operating at extreme duty ratio. In addition, additional capacitors are added as voltage dividers for the two phases for reducing the voltage stress of active switches and diodes, which enables one to adopt lower voltage rating devices to further reduce both switching and conduction losses. Furthermore, the corresponding model is also derived, and analysis of the steady-state characteristic is made to show the merits of the proposed converter. Finally, a 200-W rating prototype system is also constructed to verify the effectiveness of the proposed converter. It is seen that an efficiency of 93.3% can be achieved when the output power is 150-W and the output voltage is 200-V with 0.56 duty ratio.",
"title": ""
},
{
"docid": "c4f6ccec24ff18ba839a83119b125f04",
"text": "The growing rehabilitation and consumer movement toward independent community living for disabled adults has placed new demands on the health care delivery system. ProgTams must be developed for the disabled adult that provide direct training in adaptive community skills, such as banking, budgeting, consumer advocacy, personal health care, and attendant management. An Independent Living Skills Training Program that uses a psychoeducational model is described. To date, 17 multiply handicapped adults, whose average length of institutionalization was I 1.9 years, have participated in the program. Of these 17, 58.8% returned to community living and 23.5% are waiting for openings m accessible housing units.",
"title": ""
}
] |
scidocsrr
|
67f69cc470b7b2c4540ca5ee3d029e89
|
Challenges in multimodal gesture recognition
|
[
{
"docid": "69f4e9818cc5b37f0ce6410cc970944c",
"text": "In this paper, we investigate efficient recognition of human gestures / movements from multimedia and multimodal data, including the Microsoft Kinect and translational and rotational acceleration and velocity from wearable inertial sensors. We firstly present a system that automatically classifies a large range of activities (17 different gestures) using a random forest decision tree. Our system can achieve near real time recognition by appropriately selecting the sensors that led to the greatest contributing factor for a particular task. Features extracted from multimodal sensor data were used to train and evaluate a customized classifier. This novel technique is capable of successfully classifying various gestures with up to 91 % overall accuracy on a publicly available data set. Secondly we investigate a wide range of different motion capture modalities and compare their results in terms of gesture recognition accuracy using our proposed approach. We conclude that gesture recognition can be effectively performed by considering an approach that overcomes many of the limitations associated with the Kinect and potentially paves the way for low-cost gesture recognition in unconstrained environments.",
"title": ""
}
] |
[
{
"docid": "4aa5d61090755d6755aa172e75123a4e",
"text": "Intravenous administration of vitamin C has been shown to decrease oxidative stress and, in some instances, improve physiological function in adult humans. Oral vitamin C administration is typically less effective than intravenous, due in part to inferior vitamin C bioavailability. The purpose of this study was to determine the efficacy of oral delivery of vitamin C encapsulated in liposomes. On 4 separate randomly ordered occasions, 11 men and women were administered an oral placebo, or 4 g of vitamin C via oral, oral liposomal, or intravenous delivery. The data indicate that oral delivery of 4 g of vitamin C encapsulated in liposomes (1) produces circulating concentrations of vitamin C that are greater than unencapsulated oral but less than intravenous administration and (2) provides protection from ischemia-reperfusion-mediated oxidative stress that is similar to the protection provided by unencapsulated oral and intravenous administrations.",
"title": ""
},
{
"docid": "56642ffad112346186a5c3f12133e59b",
"text": "The Skills for Inclusive Growth (S4IG) program is an initiative of the Australian Government’s aid program and implemented with the Sri Lankan Ministry of Skills Development and Vocational Training, Tourism Authorities, Provincial and District Level Government, Industry and Community Organisations. The Program will demonstrate how an integrated approach to skills development can support inclusive economic growth opportunities along the tourism value chain in the four districts of Trincomalee, Ampara, Batticaloa (Eastern Province) and Polonnaruwa (North Central Province). In doing this the S4IG supports sustainable job creation and increased incomes and business growth for the marginalised and the disadvantaged, particularly women and people with disabilities.",
"title": ""
},
{
"docid": "c8eb002905d817848ad7dadf31fb6875",
"text": "摘要:为寻找路内停车泊位在道 路上巡游的车辆是造成交通拥堵 的原因之一。首先指出路内停车 低价与路外停车高价共同刺激驾 驶人选择巡游而非使用路外停车 场(库)。其次针对洛杉矶的巡游 开展调查,得到车辆平均巡游时耗 和距离,并得出每年由此产生的车 辆行驶里程、时间浪费、燃油消耗 及 CO2排放。然后通过分析路内 停车价格与车辆巡游的关系,提出 城市全日泊位使用率大约为 85% 时,停车泊位处于供需平衡状态, 停车价格较合理。接着以加利福 尼亚格雷伍德城停车价格制定及 停车收益使用为例,给出停车增量 融资的收益回馈方式。最后指出, 为了减少拥堵、降低温室气体排 放、改善街区环境等,城市应制定 合理的路内停车价格并将产生的 收益用于改善公共服务。 Abstract: Cruising for on-street parking causes traffic congestion. This paper first points out that the combination of low prices for on-street parking and high prices for off-street parking increases the incentive for drivers to search for on-street parking spaces. Through a survey of cruising for parking in Los Angeles, the author obtains average cruising time and distance, as well as corresponding vehicle-miles traveled, waste of time, extra fuel consumption, and CO2 emissions. The author concludes that when the occupancy rate of on-street parking is about 85%, the demand and supply of parking are in balance and the price is just right. Taking the parking charges and revenue usage in Redwood City in California as an example, the paper discusses parking increment finance. Finally, the paper emphasizes that it is necessary to charge properly for on-street parking and spend the resulting revenue to improve local public services in order to reduce traffic congestion and greenhouse gas emissions, and improve neighborhoods. 关键词:停车管理;路内停车;车辆 巡游;停车价格;收益管理",
"title": ""
},
{
"docid": "35405b001d9fc22aa3457d83a8be433d",
"text": "A mouse was modified to add tactile feedback via a solenoid-driven pin projecting through a hole in the left mouse button. An experiment is described using a target selection task under five different sensory feedback conditions ('normal', auditory, colour, tactile, and combined). No differences were found in overall response times, error rates, or bandwidths; however, significant differences were found in the final positioning times (from the cursor entering the target to selecting the target). For the latter, tactile feedback was the quickest, normal feedback was the slowest. An examination of the spatial distributions in responses showed a peaked, narrow distribution for the normal condition, and a flat, wide distribution for the tactile (and combined) conditions. It is argued that tactile feedback allows subjects to use a wider area of the target and to select targets more quickly once the cursor is inside the target. Design considerations for human-computer interfaces are discussed.",
"title": ""
},
{
"docid": "53e7c26ce6abc85d721b2f1661d1c3c0",
"text": "For the detail mapping there are multiple methods that can be used. In Battlefield 2, a 256 m patch of the terrain could have up to six different tiling detail maps that were blended together using one or two three-component unique detail mask textures (Figure 4) that controlled the visibility of the individual detail maps. Artists would paint or generate the detail masks just as for the color map.",
"title": ""
},
{
"docid": "19b602b49f0fcd51f5ec7f240fe26d60",
"text": "Wireless communication by leveraging the use of low-altitude unmanned aerial vehicles (UAVs) has received significant interests recently due to its low-cost and flexibility in providing wireless connectivity in areas without infrastructure coverage. This paper studies a UAV-enabled mobile relaying system, where a high-mobility UAV is deployed to assist in the information transmission from a ground source to a ground destination with their direct link blocked. By assuming that the UAV adopts the energy-efficient circular trajectory and employs time-division duplexing (TDD) based decode-and-forward (DF) relaying, we maximize the spectrum efficiency (SE) in bits/second/Hz as well as energy efficiency (EE) in bits/Joule of the considered system by jointly optimizing the time allocations for the UAV's relaying together with its flying speed and trajectory. It is revealed that for UAV-enabled mobile relaying with the UAV propulsion energy consumption taken into account, there exists a trade-off between the maximum achievable SE and EE by exploiting the new degree of freedom of UAV trajectory design.",
"title": ""
},
{
"docid": "3c8530b7a9b0b465c1fbfcd016cd098d",
"text": "From a psychophysiological point of view, arousal is a fundamental feature of behavior. As reported in different empirical studies based on insights from theories of consumer behavior, store atmosphere should evoke phasic arousal reactions to attract consumers. Most of these empirical investigations used verbal scales to measure consumers' perceived phasic arousal at the point-of-sale (POS). However, the validity of verbal arousal measurement is questioned; self-reporting methods only allow a time-lagged measurement. Furthermore, the selection of inappropriate items to represent perceived arousal is criticized, and verbal reports require some form of cognitive evaluation of perceived arousal by the individual, who might (in a non-measurement condition) not even be aware of the arousal. By contrast, phasic electrodermal reaction (EDR) has proven to be the most appropriate and valid indicator for measuring arousal [W. Boucsein, Physiologische Grundlagen und Messmethoden der dermalen Aktivität. In: F. Rösler (Ed.), Enzyklopädie der Psychologie, Bereich Psychophysiologie, Band 1: Grundlagen and Methoden der Psychophysiologie, Kapitel, Vol. 7, Hogrefe, Göttingen, 2001, pp. 551-623] that could be relevant to behavior. EDR can be recorded simultaneously to the perception of stimuli. Furthermore, telemetric online device can be used, which enables physiological arousal measurement while participants can move freely through the store and perform the assigned task in the experiments. The present paper delivers insights on arousal theory and results from empirical studies using EDR to measure arousal at the POS.",
"title": ""
},
{
"docid": "e4a22b34510b28d1235fc987b97a8607",
"text": "Many regions of the globe are experiencing rapid urban growth, the location and intensity of which can have negative effects on ecological and social systems. In some locales, planners and policy makers have used urban growth boundaries to direct the location and intensity of development; however the empirical evidence for the efficacy of such policies is mixed. Monitoring the location of urban growth is an essential first step in understanding how the system has changed over time. In addition, if regulations purporting to direct urban growth to specific locales are present, it is important to evaluate if the desired pattern (or change in pattern) has been observed. In this paper, we document land cover and change across six dates (1986, 1991, 1995, 1999, 2002, and 2007) for six counties in the Central Puget Sound, Washington State, USA. We explore patterns of change by three different spatial partitions (the region, each county, 2000 U.S. Census Tracks), and with respect to urban growth boundaries implemented in the late 1990’s as part of the state’s Growth Management Act. Urban land cover increased from 8 to 19% of the study area between 1986 and 2007, while lowland deciduous and mixed forests decreased from 21 to 13% and grass and agriculture decreased from 11 to 8%. Land in urban classes outside of the urban growth boundaries increased more rapidly (by area and percentage of new urban land cover) than land within the urban growth boundaries, suggesting that the intended effect of the Growth Management Act to direct growth to within the urban growth boundaries may not have been accomplished by 2007. Urban sprawl, as estimated by the area of land per capita, increased overall within the region, with the more rural counties within commuting distance to cities having the highest rate of increase observed. Land cover data is increasingly available and can be used to rapidly evaluate urban development patterns over large areas. Such data are important inputs for policy makers, urban planners, and modelers alike to manage and plan for future population, land use, and land cover changes.",
"title": ""
},
{
"docid": "5b0842894cbf994c3e63e521f7352241",
"text": "The burgeoning field of genomics has revived interest in multiple testing procedures by raising new methodological and computational challenges. For example, microarray experiments generate large multiplicity problems in which thousands of hypotheses are tested simultaneously. Westfall and Young (1993) propose resampling-based p-value adjustment procedures which are highly relevant to microarray experiments. This article discusses different criteria for error control in resampling-based multiple testing, including (a) the family wise error rate of Westfall and Young (1993) and (b) the false discovery rate developed by Benjamini and Hochberg (1995), both from a frequentist viewpoint; and (c) the positive false discovery rate of Storey (2002a), which has a Bayesian motivation. We also introduce our recently developed fast algorithm for implementing the minP adjustment to control family-wise error rate. Adjusted p-values for different approaches are applied to gene expression data from two recently published microarray studies. The properties of these procedures for multiple testing are compared.",
"title": ""
},
{
"docid": "38a4f83778adea564e450146060ef037",
"text": "The last few years have seen a surge in the number of accurate, fast, publicly available dependency parsers. At the same time, the use of dependency parsing in NLP applications has increased. It can be difficult for a non-expert to select a good “off-the-shelf” parser. We present a comparative analysis of ten leading statistical dependency parsers on a multi-genre corpus of English. For our analysis, we developed a new web-based tool that gives a convenient way of comparing dependency parser outputs. Our analysis will help practitioners choose a parser to optimize their desired speed/accuracy tradeoff, and our tool will help practitioners examine and compare parser output.",
"title": ""
},
{
"docid": "f1f7f8eb67488defd524800c12bd10ad",
"text": "As a serious concern in data publishing and analysis, privacy preserving data processing has received a lot of attention. Privacy preservation often leads to information loss. Consequently, we want to minimize utility loss as long as the privacy is preserved. In this chapter, we survey the utility-based privacy preservation methods systematically. We first briefly discuss the privacy models and utility measures, and then review four recently proposed methods for utilitybased privacy preservation. We first introduce the utility-based anonymization method for maximizing the quality of the anonymized data in query answering and discernability. Then we introduce the top-down specialization (TDS) method and the progressive disclosure algorithm (PDA) for privacy preservation in classification problems. Last, we introduce the anonymized marginal method, which publishes the anonymized projection of a table to increase the utility and satisfy the privacy requirement.",
"title": ""
},
{
"docid": "d14feb8d44dba9ba637ba28b6d44c2bd",
"text": "Today there are numerous different converter topologies and power semiconductor devices used in medium- voltage drive systems. This paper provides a general overview of the common converter topologies available on the market and their corresponding major characteristics. The different topologies are compared and evaluated with respect to their semiconductor effort. Due to the available power semiconductor devices with maximum blocking voltages of 6.5 kV, the drive market with power ratings up to 25 MW is dominated by voltage source inverters in IGBT as well as in IGCT technology. For higher power demands and special applications, thyristor converters are still frequently used.",
"title": ""
},
{
"docid": "ce0835dbc5ec411d82618dcefac371a9",
"text": "The mammalian target of rapamycin (mTOR) is a master regulator of cell growth and division that responds to a variety of stimuli, including nutrient, energy, and growth factors. In the last years, a significant number of pieces have been added to the puzzle of how mTOR coordinates and executes its functions. Extensive research on mTOR has also uncovered a complex network of regulatory loops that impact the therapeutic approaches aimed at targeting mTOR.",
"title": ""
},
{
"docid": "f87fea9cd76d1545c34f8e813347146e",
"text": "In fault detection and isolation, diagnostic test results are commonly used to compute a set of diagnoses, where each diagnosis points at a set of components which might behave abnormally. In distributed systems consisting of multiple control units, the test results in each unit can be used to compute local diagnoses while all test results in the complete system give the global diagnoses. It is an advantage for both repair and fault-tolerant control to have access to the global diagnoses in each unit since these diagnoses represent all test results in all units. However, when the diagnoses, for example, are to be used to repair a unit, only the components that are used by the unit are of interest. The reason for this is that it is only these components that could have caused the abnormal behavior. However, the global diagnoses might include components from the complete system and therefore often include components that are superfluous for the unit. Motivated by this observation, a new type of diagnosis is proposed, namely, the condensed diagnosis. Each unit has a unique set of condensed diagnoses which represents the global diagnoses. The benefit of the condensed diagnoses is that they only include components used by the unit while still representing the global diagnoses. The proposed method is applied to an automotive vehicle, and the results from the application study show the benefit of using condensed diagnoses compared to global diagnoses.",
"title": ""
},
{
"docid": "c6967ff67346894766f810f44a6bb6bc",
"text": "Knowledge about the effects of physical exercise on brain is accumulating although the mechanisms through which exercise exerts these actions remain largely unknown. A possible involvement of adult hippocampal neurogenesis (AHN) in the effects of exercise is debated while the physiological and pathological significance of AHN is under intense scrutiny. Recently, both neurogenesis-dependent and independent mechanisms have been shown to mediate the effects of physical exercise on spatial learning and anxiety-like behaviors. Taking advantage that the stimulating effects of exercise on AHN depend among others, on serum insulin-like growth factor I (IGF-I), we now examined whether the behavioral effects of running exercise are related to variations in hippocampal neurogenesis, by either increasing or decreasing it according to serum IGF-I levels. Mutant mice with low levels of serum IGF-I (LID mice) had reduced AHN together with impaired spatial learning. These deficits were not improved by running. However, administration of exogenous IGF-I ameliorated the cognitive deficit and restored AHN in LID mice. We also examined the effect of exercise in LID mice in the novelty-suppressed feeding test, a measure of anxiety-like behavior in laboratory animals. Normal mice, but not LID mice, showed reduced anxiety after exercise in this test. However, after exercise, LID mice did show improvement in the forced swim test, a measure of behavioral despair. Thus, many, but not all of the beneficial effects of exercise on brain function depend on circulating levels of IGF-I and are associated to increased hippocampal neurogenesis, including improved cognition and reduced anxiety.",
"title": ""
},
{
"docid": "49d50ed96ff7bfa5246561b0c51876af",
"text": "Nutch is an open-source Web search engine that can be used at global, local, and even personal scale. Its initial design goal was to enable a transparent alternative for global Web search in the public interest — one of its signature features is the ability to “explain” its result rankings. Recent work has emphasized how it can also be used for intranets; by local communities with richer data models, such as the Creative Commons metadata-enabled search for licensed content; on a personal scale to index a user's files, email, and web-surfing history; and we also report on several other research projects built on Nutch. In this paper, we present how the architecture of the Nutch system enables it to be more flexible and scalable than other comparable systems today.",
"title": ""
},
{
"docid": "577e229bb458d01fcf72119956844bb2",
"text": "This paper examines the role of culture as a factor in enhancing the effectiveness of health communication. We describe culture and how it may be applied in audience segmentation and introduce a model of health communication planning--McGuire's communication/persuasion model--as a framework for considering the ways in which culture may influence health communication effectiveness. For three components of the model (source, message, and channel factors), the paper reviews how each affects communication and persuasion, and how each may be affected by culture. We conclude with recommendations for future research on culture and health communication.",
"title": ""
},
{
"docid": "c3d9edf18c1f33bd7947aaf0f3211c83",
"text": "We introduce an algorithm for 3D object modeling where the user draws creative inspiration from an object captured in a single photograph. Our method leverages the rich source of photographs for creative 3D modeling. However, with only a photo as a guide, creating a 3D model from scratch is a daunting task. We support the modeling process by utilizing an available set of 3D candidate models. Specifically, the user creates a digital 3D model as a geometric variation from a 3D candidate. Our modeling technique consists of two major steps. The first step is a user-guided image-space object segmentation to reveal the structure of the photographed object. The core step is the second one, in which a 3D candidate is automatically deformed to fit the photographed target under the guidance of silhouette correspondence. The set of candidate models have been pre-analyzed to possess useful high-level structural information, which is heavily utilized in both steps to compensate for the ill-posedness of the analysis and modeling problems based only on content in a single image. Equally important, the structural information is preserved by the geometric variation so that the final product is coherent with its inherited structural information readily usable for subsequent model refinement or processing. Links: DL PDF WEB VIDEO",
"title": ""
},
{
"docid": "89703b730ff63548530bdb9e2ce59c6b",
"text": "How to develop creative digital products which really meet the prosumer's needs while promoting a positive user experience? That question has guided this work looking for answers through different disciplinary fields. Born on 2002 as an Engineering PhD dissertation, since 2003 the method has been improved by teaching it to Communication and Design graduate and undergraduate courses. It also guided some successful interdisciplinary projects. Its main focus is on developing a creative conceptual model that might meet a human need within its context. The resulting method seeks: (1) solutions for the main problems detected in the previous versions; (2) significant ways to represent Design practices; (3) a set of activities that could be developed by people without programming knowledge. The method and its research current state are presented in this work.",
"title": ""
},
{
"docid": "9e865969535469357f2600985750d78e",
"text": "Patients with pathological laughter and crying (PLC) are subject to relatively uncontrollable episodes of laughter, crying or both. The episodes occur either without an apparent triggering stimulus or following a stimulus that would not have led the subject to laugh or cry prior to the onset of the condition. PLC is a disorder of emotional expression rather than a primary disturbance of feelings, and is thus distinct from mood disorders in which laughter and crying are associated with feelings of happiness or sadness. The traditional and currently accepted view is that PLC is due to the damage of pathways that arise in the motor areas of the cerebral cortex and descend to the brainstem to inhibit a putative centre for laughter and crying. In that view, the lesions 'disinhibit' or 'release' the laughter and crying centre. The neuroanatomical findings in a recently studied patient with PLC, along with new knowledge on the neurobiology of emotion and feeling, gave us an opportunity to revisit the traditional view and propose an alternative. Here we suggest that the critical PLC lesions occur in the cerebro-ponto-cerebellar pathways and that, as a consequence, the cerebellar structures that automatically adjust the execution of laughter or crying to the cognitive and situational context of a potential stimulus, operate on the basis of incomplete information about that context, resulting in inadequate and even chaotic behaviour.",
"title": ""
}
] |
scidocsrr
|
6d25be1628049db9c842cac1eb68f7b1
|
Efficient Feature Selection Technique for Network Intrusion Detection System Using Discrete Differential Evolution and Decision
|
[
{
"docid": "320c7c49dd4341cca532fa02965ef953",
"text": "During the last decade, anomaly detection has attracted the attention of many researchers to overcome the weakness of signature-based IDSs in detecting novel attacks, and KDDCUP'99 is the mostly widely used data set for the evaluation of these systems. Having conducted a statistical analysis on this data set, we found two important issues which highly affects the performance of evaluated systems, and results in a very poor evaluation of anomaly detection approaches. To solve these issues, we have proposed a new data set, NSL-KDD, which consists of selected records of the complete KDD data set and does not suffer from any of mentioned shortcomings.",
"title": ""
},
{
"docid": "3293e4e0d7dd2e29505db0af6fbb13d1",
"text": "A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.",
"title": ""
}
] |
[
{
"docid": "771611dc99e22b054b936fce49aea7fc",
"text": "Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various highdimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domaindependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration.",
"title": ""
},
{
"docid": "1a3ac29d83c04225f6f69eaf3d263139",
"text": "In a web environment, one of the most evolving application is those with recommendation system (RS). It is a subset of information filtering systems wherein, information about certain products or services or a person are categorized and are recommended for the concerned individual. Most of the authors designed collaborative movie recommendation system by using K-NN and K-means but due to a huge increase in movies and users quantity, the neighbour selection is getting more problematic. We propose a hybrid model based on movie recommender system which utilizes type division method and classified the types of the movie according to users which results reduce computation complexity. K-Means provides initial parameters to particle swarm optimization (PSO) so as to improve its performance. PSO provides initial seed and optimizes fuzzy c-means (FCM), for soft clustering of data items (users), instead of strict clustering behaviour in K-Means. For proposed model, we first adopted type division method to reduce the dense multidimensional data space. We looked up for techniques, which could give better results than K-Means and found FCM as the solution. Genetic algorithm (GA) has the limitation of unguided mutation. Hence, we used PSO. In this article experiment performed on Movielens dataset illustrated that the proposed model may deliver high performance related to veracity, and deliver more predictable and personalized recommendations. When compared to already existing methods and having 0.78 mean absolute error (MAE), our result is 3.503 % better with 0.75 as the MAE, showed that our approach gives improved results.",
"title": ""
},
{
"docid": "42b6c55e48f58e3e894de84519cb6feb",
"text": "What social value do Likes on Facebook hold? This research examines peopleâs attitudes and behaviors related to receiving one-click feedback in social media. Likes and other kinds of lightweight affirmation serve as social cues of acceptance and maintain interpersonal relationships, but may mean different things to different people. Through surveys and de-identified, aggregated behavioral Facebook data, we find that in general, people care more about who Likes their posts than how many Likes they receive, desiring feedback most from close friends, romantic partners, and family members other than their parents. While most people do not feel strongly that receiving “enough” Likes is important, roughly two-thirds of posters regularly receive more than “enough.” We also note a “Like paradox,” a phenomenon in which peopleâs friends receive more Likes because their friends have more friends to provide those Likes. Individuals with lower levels of self-esteem and higher levels of self-monitoring are more likely to think that Likes are important and to feel bad if they do not receive “enough” Likes. The results inform product design and our understanding of how lightweight interactions shape our experiences online.",
"title": ""
},
{
"docid": "5113c0c93cf1c5592b5f2e96fa98ade5",
"text": "There has been a host of research works on wireless sensor networks (WSN) for medical applications. However, the major shortcoming of these efforts is a lack of consideration of data management. Indeed, the huge amount of high sensitive data generated and collected by medical sensor networks introduces several challenges that existing architectures cannot solve. These challenges include scalability, availability and security. Furthermore, WSNs for medical applications provide useful and real information about patients’ health state. This information should be available for healthcare providers to facilitate response and to improve the rescue process of a patient during emergency. Hence, emergency management is another challenge for medical wireless sensor networks. In this paper, we propose an innovative architecture for collecting and accessing large amount of data generated by medical sensor networks. Our architecture overcomes all the aforementioned challenges and makes easy information sharing between healthcare professionals in normal and emergency situations. Furthermore, we propose an effective and flexible security mechanism that guarantees confidentiality, integrity as well as fine grained access control to outsourced medical data. This mechanism relies on Ciphertext Policy Attribute-based Encryption (CP-ABE) to achieve high flexibility and performance. Finally, we carry out extensive simulations that allow showing that our scheme provides an efficient, fine-grained and scalable access control in normal and emergency situations.",
"title": ""
},
{
"docid": "960f5bd8b673236d3b44a77e876e10c4",
"text": "This paper describes an approach to harvesting electrical energy from a mechanically excited piezoelectric element. A vibrating piezoelectric device differs from a typical electrical power source in that it has a capacitive rather than inductive source impedance, and may be driven by mechanical vibrations of varying amplitude. An analytical expression for the optimal power flow from a rectified piezoelectric device is derived, and an “energy harvesting” circuit is proposed which can achieve this optimal power flow. The harvesting circuit consists of an ac–dc rectifier with an output capacitor, an electrochemical battery, and a switch-mode dc–dc converter that controls the energy flow into the battery. An adaptive control technique for the dc–dc converter is used to continuously implement the optimal power transfer theory and maximize the power stored by the battery. Experimental results reveal that use of the adaptive dc–dc converter increases power transfer by over 400% as compared to when the dc–dc converter is not used.",
"title": ""
},
{
"docid": "28b23fc65a17b2b29e4e2a6b78ab401b",
"text": "In 1980, the N400 event-related potential was described in association with semantic anomalies within sentences. When, in 1992, a second waveform, the P600, was reported in association with syntactic anomalies and ambiguities, the story appeared to be complete: the brain respected a distinction between semantic and syntactic representation and processes. Subsequent studies showed that the P600 to syntactic anomalies and ambiguities was modulated by lexical and discourse factors. Most surprisingly, more than a decade after the P600 was first described, a series of studies reported that semantic verb-argument violations, in the absence of any violations or ambiguities of syntax can evoke robust P600 effects and no N400 effects. These observations have raised fundamental questions about the relationship between semantic and syntactic processing in the brain. This paper provides a comprehensive review of the recent studies that have demonstrated P600s to semantic violations in light of several proposed triggers: semantic-thematic attraction, semantic associative relationships, animacy and semantic-thematic violations, plausibility, task, and context. I then discuss these findings in relation to a unifying theory that attempts to bring some of these factors together and to link the P600 produced by semantic verb-argument violations with the P600 evoked by unambiguous syntactic violations and syntactic ambiguities. I suggest that normal language comprehension proceeds along at least two competing neural processing streams: a semantic memory-based mechanism, and a combinatorial mechanism (or mechanisms) that assigns structure to a sentence primarily on the basis of morphosyntactic rules, but also on the basis of certain semantic-thematic constraints. I suggest that conflicts between the different representations that are output by these distinct but interactive streams lead to a continued combinatorial analysis that is reflected by the P600 effect. I discuss some of the implications of this non-syntactocentric, dynamic model of language processing for understanding individual differences, language processing disorders and the neuroanatomical circuitry engaged during language comprehension. Finally, I suggest that that these two processing streams may generalize beyond the language system to real-world visual event comprehension.",
"title": ""
},
{
"docid": "beea84b0d96da0f4b29eabf3b242a55c",
"text": "Recent years have seen a growing interest in creating virtual agents to populate the cast of characters for interactive narrative. A key challenge posed by interactive characters for narrative environments is devising expressive dialogue generators. To be effective, character dialogue generators must be able to simultaneously take into account multiple sources of information that bear on dialogue, including character attributes, plot development, and communicative goals. Building on the narrative theory of character archetypes, we propose an archetype-driven character dialogue generator that uses a probabilistic unification framework to generate dialogue motivated by character personality and narrative history to achieve communicative goals. The generator’s behavior is illustrated with character dialogue generation in a narrative-centered learning environment, CRYSTAL ISLAND.",
"title": ""
},
{
"docid": "9f97fffcb1b0a1f92443c9c769438cf5",
"text": "A literature review was done within a revision of a guideline concerned with data quality management in registries and cohort studies. The review focused on quality indicators, feedback, and source data verification. Thirty-nine relevant articles were selected in a stepwise selection process. The majority of the papers dealt with indicators. The papers presented concepts or data analyses. The leading indicators were related to case or data completeness, correctness, and accuracy. In the future, data pools as well as research reports from quantitative studies should be obligatory supplemented by information about their data quality, ideally picking up some indicators presented in this review.",
"title": ""
},
{
"docid": "b2b38addb1283374ef35d4621b34adaf",
"text": "106 AI MAGAZINE RTS games — such as StarCraft by Blizzard Entertainment and Command and Conquer by Electronic Arts — are popular video games that can be described as real-time war simulations in which players delegate units under their command to gather resources, build structures, combat and support units, scout opponent locations, and attack. The winner of an RTS game usually is the player or team that destroys the opponents’ structures first. Unlike abstract board games like chess and go, moves in RTS games are executed simultaneously at a rate of at least eight frames per second. In addition, individual moves in RTS games can consist of issuing simultaneous orders to hundreds of units at any given time. If this wasn’t creating enough complexity already, RTS game maps are also usually large and states are only partially observable, with vision restricted to small areas around friendly units and structures. Complexity by itself, of course, is not a convincing motivation for studying RTS games and building AI systems for them. What makes them attractive research subjects is the fact that, despite the perceived complexity, humans are able to outplay machines by means of spatial and temporal reasoning, long-range adversarial planning and plan",
"title": ""
},
{
"docid": "861602891ab4ee40dc6fde90c0d6c5bf",
"text": "Precision farming robots, which target to reduce the amount of herbicides that need to be brought out in the fields, must have the ability to identify crops and weeds in real time to trigger weeding actions. In this paper, we address the problem of CNN-based semantic segmentation of crop fields separating sugar beet plants, weeds, and background solely based on RGB data. We propose a CNN that exploits existing vegetation indexes and provides a classification in real time. Furthermore, it can be effectively re-trained to so far unseen fields with a comparably small amount of training data. We implemented and thoroughly evaluated our system on a real agricultural robot operating in different fields in Germany and Switzerland. The results show that our system generalizes well, can operate at around 20 Hz, and is suitable for online operation in the fields.",
"title": ""
},
{
"docid": "6d14507a3c2da88dda967bb8611b44ca",
"text": "In image-sentence retrieval task, correlated images and sentences involve different levels of semantic relevance. However, existing multi-modal representation learning paradigms fail to capture the meaningful component relation on word and phrase level, while the attention-based methods still suffer from component-level mismatching and huge computation burden. We propose a Joint Global and Co-Attentive Representation learning method (JGCAR) for image-sentence retrieval. We formulate a global representation learning task which utilizes both intra-modal and inter-modal relative similarity to optimize the semantic consistency of the visual/textual component representations. We further develop a co-attention learning procedure to fully exploit different levels of visual-linguistic relations. We design a novel softmax-like bi-directional ranking loss to learn the co-attentive representation for image-sentence similarity computation. It is capable of discovering the correlative components and rectifying inappropriate component-level correlation to produce more accurate sentence-level ranking results. By joint global and co-attentive representation learning, the latter benefits from the former by producing more semantically consistent component representation, and the former also benefits from the latter by back-propagating the contextual information. Image-sentence retrieval is performed as a two-step process in the testing stage, inheriting advantages on both effectiveness and efficiency. Experiments show that JGCAR outperforms existing methods on MSCOCO and Flickr30K image-sentence retrieval tasks.",
"title": ""
},
{
"docid": "c4cfd9364c271e0af23a03c28f5c95ad",
"text": "Due to the different posture and view angle, the image will appear some objects that do not exist in another image of the same person captured by another camera. The region covered by new items adversely improved the difficulty of person re-identification. Therefore, we named these regions as Damaged Region (DR). To overcome the influence of DR, we propose a new way to extract feature based on the local region that divides both in the horizontal and vertical directions. Before splitting the image, we enlarge it with direction to increase the useful information, potentially reducing the impact of different viewing angles. Then each divided region is a separated part, and the results of the adjacent regions will be compared. As a result the region that gets a higher score is selected as the valid one, and which gets the lower score caused by pose variation and items occlusion will be invalid. Extensive experiments carried out on three person re-identification benchmarks, including VIPeR, PRID2011, CUHK01, clearly show the significant and consistent improvements over the state-of-the-art methods.",
"title": ""
},
{
"docid": "76e7f63fa41d6d457e6e4386ad7b9896",
"text": "A growing body of work has highlighted the challenges of identifying the stance that a speaker holds towards a particular topic, a task that involves identifying a holistic subjective disposition. We examine stance classification on a corpus of 4873 posts from the debate website ConvinceMe.net, for 14 topics ranging from the playful to the ideological. We show that ideological debates feature a greater share of rebuttal posts, and that rebuttal posts are significantly harder to classify for stance, for both humans and trained classifiers. We also demonstrate that the number of subjective expressions varies across debates, a fact correlated with the performance of systems sensitive to sentiment-bearing terms. We present results for classifying stance on a per topic basis that range from 60% to 75%, as compared to unigram baselines that vary between 47% and 66%. Our results suggest that features and methods that take into account the dialogic context of such posts improve accuracy.",
"title": ""
},
{
"docid": "0b2ca56dd2f60cc6781e76abce411a44",
"text": "The demand for increased software quality has resulted in quality being more of differentiator between products than it ever has been before. For this reason, software developers need objective and valid measures for use in the evaluation and improvement of product quality from the initial stages of development. Class diagrams are a key artifact in the development of object-oriented (OO) software because they lay the foundation for all later design and implementation work. It follows that emphasizing class diagram quality may significantly contribute to higher quality OO software systems. The primary aim of this work, therefore, is to present a survey, as complete as possible, of the existing relevant works regarding class diagram metrics. Thus, from works previously published, researchers and practitioners alike may gain broad and ready access to insights for measuring these quality characteristics. Another aim of this work is to help reveal areas of research either lacking completion or yet to undertaken.",
"title": ""
},
{
"docid": "ae218abd859370a093faf83d6d81599d",
"text": "In this letter, we present an autofocus routine for backprojection imagery from spotlight-mode synthetic aperture radar data. The approach is based on maximizing image sharpness and supports the flexible collection and imaging geometries of BP, including wide-angle apertures and the ability to image directly onto a digital elevation map. While image-quality-based autofocus approaches can be computationally intensive, in the backprojection setting, we demonstrate a natural geometric interpretation that allows for optimal single-pulse phase corrections to be derived in closed form as the solution of a quartic polynomial. The approach is applicable to focusing standard backprojection imagery, as well as providing incremental focusing in sequential imaging applications based on autoregressive backprojection. An example demonstrates the efficacy of the approach applied to real data for a wide-aperture backprojection image.",
"title": ""
},
{
"docid": "9a758183aa6bf6ee8799170b5a526e7e",
"text": "The field of serverless computing has recently emerged in support of highly scalable, event-driven applications. A serverless application is a set of stateless functions, along with the events that should trigger their activation. A serverless runtime allocates resources as events arrive, avoiding the need for costly pre-allocated or dedicated hardware. \nWhile an attractive economic proposition, serverless computing currently lags behind the state of the art when it comes to function composition. This paper addresses the challenge of programming a composition of functions, where the composition is itself a serverless function. \nWe demonstrate that engineering function composition into a serverless application is possible, but requires a careful evaluation of trade-offs. To help in evaluating these trade-offs, we identify three competing constraints: functions should be considered as black boxes; function composition should obey a substitution principle with respect to synchronous invocation; and invocations should not be double-billed. \nFurthermore, we argue that, if the serverless runtime is limited to a reactive core, i.e. one that deals only with dispatching functions in response to events, then these constraints form the serverless trilemma. Without specific runtime support, compositions-as-functions must violate at least one of the three constraints. \nFinally, we demonstrate an extension to the reactive core of an open-source serverless runtime that enables the sequential composition of functions in a trilemma-satisfying way. We conjecture that this technique could be generalized to support other combinations of functions.",
"title": ""
},
{
"docid": "2418cf34f09335d6232193b21ee7ae49",
"text": "The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Due to low sampling rates supported by web-based vision sensor and accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow updating rates, while motion tracking with inertial sensor suffers from rapid deterioration in accuracy with time. This paper starts with a discussion of developed algorithms for calibrating two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan Variance analysis, and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model.",
"title": ""
},
{
"docid": "03b0b466bf3fb6a8ff5180c0d8a08861",
"text": "In power-limited wireless devices such as wireless sensor networks, wearable components, and Internet of Things devices energy efficiency is a critical concern. These devices are usually battery operated and have a radio transceiver that is typically their most power-hungry block. Wake-up radio schemes can be used to achieve a reasonable balance among energy consumption, range, data receiving capabilities and response time. In this paper, a high-sensitivity low power wake-up radio receiver (WUR) for wireless sensor networks is presented. The wake-up radio is comprised of a fully passive differential RF-to-DC converter that rectifies the incident RF signal, a low-power comparator and an ultra low power microcontroller to detects the envelope of the on-off keying (OOK) wake-up data used as address. We designed and implemented a novel low power tunable wake up radio with addressing capability, a minimal power consumption of only 196nW and a maximum sensitivity of -55dBm and minimal wake up time of 130μs without addressing and around 1,6ms with 2byte addressing at 10Kbit/s data rate. The flexibility of the solution makes the wake up radio suitable for both power constrained low range application (such as Body Area Network) or applications with long range needs. The wake up radio can work also at different frequencies and the addressing capability directly on board helps reduce false positives. Experimental on field results demonstrate the low power of the solution, the high sensitivity and the functionality.",
"title": ""
},
{
"docid": "bf5fe44d0c4f2d2a897fc8409508a0f1",
"text": "For the past decade, an increasing number of studies have demonstrated that when individuals wnte about emotional experiences, significant physical and mental health improvements follow The basic paradigm and findings are summarized along with some boundary conditions Although a reduction tn inhibition may contribute to the disclosure phenomenon changes in basic cognitive and linguistic processes during writing predict better health Implications for theory and treatment are discussed Virtually all forms of psychotherapy—from psychoanalysis to behavioral and cognitive therapies—have been shown to reduce distress and to promote physical and mental well-being (Mumford, Schlesinger, & Glass, 1983, Smith, Glass, & Miller, 1980) A process common to most therapies is labeling the problem and discussing its causes and consequences Further, participating in therapy presupposes that the individual acknowledges the existence of a problem and openly discusses It with another person As discussed in this article, the mere act of disclosure is a powerful therapeutic agent that may account for a substantial percentage of the vanance m the healing process PARAMETERS OF WRITING AND TALKING ASSOCIATED WITH HEALTH IMPROVEMENTS Over the past decade, several laboratories have been explonng the value of writing or talking about emotional experiences Confronting deeply personal issues has been found to promote physical health, subjective well-being, and selected adaptive behaviors In this section, the general findings of the disclosure paradigm are discussed Whereas individuals have been asked to disclose personal expenences through talking in a few studies, most studies involve wnting The Basic Wnting Paradigm The standard laboratory wnting technique has involved randomly .signing each participant to one of two or more groups All wnting groups are asked to wnte about assigned topics for 3 to 5 consecutive days, 15 to 30 mm each day Wnting is generally done m the laboy with no feedback given Participants assigned to the control conditions are typically asked to wnte about superficial topics, such as how they use their time The standard instructions for those assigned to the expenmental group are a vanation on the following the next 3 days, I would like for you to wnte about your very deepest thoughts and feeling about an extremely important emoUonal issue that has affected you and your life In your wnting I d like you to really let go and explore your very deepest emoUons and thoughts You might ue your topic to Address correspondence to James W Pennebaker Department of Psychology, Southern Methodist University, Dallas, TX 75275 e-mail pennebak® your relationships with others including parents lovers fnends, or relatives, to your past your present or your future or to who you have been, who you would like to be or who you are now You may wnte about the same general issues or expenences on all days of wnung or on different topics each day All of your wnting will be completely confidential Don't worry about spelling, sentence structure, or grammar The only rule is that once you begin wnting, continue to do so until your time is up The writing paradigm is exceptionally powerful Paiticipantsfrom children to the elderly, from honor students to maximun secunty pnsoners—disclose a remarkable range and depth of traumatic expenences Lost loves, deaths, incidents of sexual and physical abuse, and tragic failures are common themes in all of the studiei nothing else, the paradigm demonstrates that when individuals 
are given the opportunity to disclose deeply personal aspects of their lives, they readily do so Even though a large number of participants report crying or being deeply upset by the expenence, the overwhelming majonty report that the wnting expenence was valuable and meaningful in their lives EfTects of Disclosure on Outcome Measures Researchers have relied on a vanety of physical and mental health measures to evaluate the effect of wntmg As depicted in Table 1, wnting or talking about emotional expenences, relative to wnting about superficial control topics, has been found to be associated w significant drops in physician visits from before to after wnting among relatively healthy samples Wnting or talking about emotional topics has also been found to have beneficial influences on lmm function, including t-helper cell growth (using a blastogenesis procedure with the mitogen phytohemagglutmui), antibody response t Epstein-Barr virus, and antibody response to hepatitis B vaccination; Disclosure also has produced short-term changes in autonomic activ lty (e g , lowered heart rate and electrodermal activity) and muscula activity (l e , reduced phasic comigator acUvity) Self-reports also suggest that wntmg about upsetting expenences, although painful in the days of wntmg, produces long-term lmproveis in mood and indicators of well-being compared with wnting about control topics Although a number of studies have failed to find lstent effects on mood or self-reported distress, Smyth's (1996) It meta-analysis on wntten-disclosure studies indicates that, li general, wnting about emotional topics is associated with significant reductions in distress Behavioral changes have also been found Students who wnte about emotional topics show improvements in grades m the months following the study Senior professionals who have been laid off from their jobs get new jobs more quickly af^er wnting Consistent with the direct health measures, university staff members who wnte about emotional topics are subsequently absent from their work at lower than control participants Interestingly, relatively few reliable changes emerge using self-reports of health-related behaviors That is Copynght © 1997 Amencan Psychological Society VOL 8, NO 3, MAY 1997 PSYCHOLOGICAL SCIENCE",
"title": ""
}
] |
scidocsrr
|
aecacf022b621cd60dc51cd6b351686b
|
A Survey of Uncertain Data Algorithms and Applications
|
[
{
"docid": "5f1f7847600207d1216384f8507be63b",
"text": "This paper introduces U-relations, a succinct and purely relational representation system for uncertain databases. U-relations support attribute-level uncertainty using vertical partitioning. If we consider positive relational algebra extended by an operation for computing possible answers, a query on the logical level can be translated into, and evaluated as, a single relational algebra query on the U-relational representation. The translation scheme essentially preserves the size of the query in terms of number of operations and, in particular, number of joins. Standard techniques employed in off-the-shelf relational database management systems are effective for optimizing and processing queries on U-relations. In our experiments we show that query evaluation on U-relations scales to large amounts of data with high degrees of uncertainty.",
"title": ""
}
] |
[
{
"docid": "b39d393c8fd817f487e8bdfd59d03a55",
"text": "This paper gives an overview of the upcoming IEEE Gigabit Wireless LAN amendments, i.e. IEEE 802.11ac and 802.11ad. Both standard amendments advance wireless networking throughput beyond gigabit rates. 802.11ac adds multi-user access techniques in the form of downlink multi-user (DL MU) multiple input multiple output (MIMO)and 80 and 160 MHz channels in the 5 GHz band for applications such as multiple simultaneous video streams throughout the home. 802.11ad takes advantage of the large swath of available spectrum in the 60 GHz band and defines protocols to enable throughput intensive applications such as wireless I/O or uncompressed video. New waveforms for 60 GHz include single carrier and orthogonal frequency division multiplex (OFDM). Enhancements beyond the new 60 GHz PHY include Personal Basic Service Set (PBSS) operation, directional medium access, and beamforming. We describe 802.11ac channelization, PHY design, MAC modifications, and DL MU MIMO. For 802.11ad, the new PHY layer, MAC enhancements, and beamforming are presented.",
"title": ""
},
{
"docid": "afa31fe73b190845f65a5e163b062acf",
"text": "Spatial variability in a crop field creates a need for precision agriculture. Economical and rapid means of identifying spatial variability is obtained through the use of geotechnology (remotely sensed images of the crop field, image processing, GIS modeling approach, and GPS usage) and data mining techniques for model development. Higher-end image processing techniques are followed to establish more precision. The goal of this paper was to investigate the strength of key spectral vegetation indices for agricultural crop yield prediction using neural network techniques. Four widely used spectral indices were investigated in a study of irrigated corn crop yields in the Oakes Irrigation Test Area research site of North Dakota, USA. These indices were: (a) red and near-infrared (NIR) based normalized difference vegetation index (NDVI), (b) green and NIR based green vegetation index (GVI), (c) red and NIR based soil adjusted vegetation index (SAVI), and (d) red and NIR based perpendicular vegetation index (PVI). These four indices were investigated for corn yield during 3 years (1998, 1999, and 2001) and for the pooled data of these 3 years. Initially, Back-propagation Neural Network (BPNN) models were developed, including 16 models (4 indices * 4 years including the data from the pooled years) to test for the efficiency determination of those four vegetation indices in corn crop yield prediction. The corn yield was best predicted using BPNN models that used the means and standard deviations of PVI grid images. In all three years, it provided higher prediction accuracies, OPEN ACCESS Remote Sensing 2010, 2 674 coefficient of determination (r), and lower standard error of prediction than the models involving GVI, NDVI, and SAVI image information. The GVI, NDVI, and SAVI models for all three years provided average testing prediction accuracies of 24.26% to 94.85%, 19.36% to 95.04%, and 19.24% to 95.04%, respectively while the PVI models for all three years provided average testing prediction accuracies of 83.50% to 96.04%. The PVI pool model provided better average testing prediction accuracy of 94% with respect to other vegetation models, for which it ranged from 89–93%. Similarly, the PVI pool model provided coefficient of determination (r) value of 0.45 as compared to 0.31–0.37 for other index models. Log10 data transformation technique was used to enhance the prediction ability of the PVI models of years 1998, 1999, and 2001 as it was chosen as the preferred index. Another model (Transformed PVI (Pool)) was developed using the log10 transformed PVI image information to show its global application. The transformed PVI models provided average corn yield prediction accuracies of 90%, 97%, and 98% for years 1998, 1999, and 2001, respectively. The pool PVI transformed model provided as average testing accuracy of 93% along with r value of 0.72 and standard error of prediction of 0.05 t/ha.",
"title": ""
},
{
"docid": "39d6a07bc7065499eb4cb0d8adb8338a",
"text": "This paper proposes a DNS Name Autoconfiguration (called DNSNA) for not only the global DNS names, but also the local DNS names of Internet of Things (IoT) devices. Since there exist so many devices in the IoT environments, it is inefficient to manually configure the Domain Name System (DNS) names of such IoT devices. By this scheme, the DNS names of IoT devices can be autoconfigured with the device's category and model in IPv6-based IoT environments. This DNS name lets user easily identify each IoT device for monitoring and remote-controlling in IoT environments. In the procedure to generate and register an IoT device's DNS name, the standard protocols of Internet Engineering Task Force (IETF) are used. Since the proposed scheme resolves an IoT device's DNS name into an IPv6 address in unicast through an authoritative DNS server, it generates less traffic than Multicast DNS (mDNS), which is a legacy DNS application for the DNS name service in IoT environments. Thus, the proposed scheme is more appropriate in global IoT networks than mDNS. This paper explains the design of the proposed scheme and its service scenario, such as smart road and smart home. The results of the simulation prove that our proposal outperforms the legacy scheme in terms of energy consumption.",
"title": ""
},
{
"docid": "c78c7b867a74d81afea11456b793cb52",
"text": "The problem of finding conflict-free trajectories for multiple agents of identical circular shape, operating in shared 2D workspace, is addressed in the paper and decoupled, e.g., prioritized, approach is used to solve this problem. Agents’ workspace is tessellated into the square grid on which anyangle moves are allowed, e.g. each agent can move into an arbitrary direction as long as this move follows the straight line segment whose endpoints are tied to the distinct grid elements. A novel any-angle planner based on Safe Interval Path Planning (SIPP) algorithm is proposed to find trajectories for an agent moving amidst dynamic obstacles (other agents) on a grid. This algorithm is then used as part of a prioritized multi-agent planner AA-SIPP(m). On the theoretical side, we show that AA-SIPP(m) is complete under well-defined conditions. On the experimental side, in simulation tests with up to 250 agents involved, we show that our planner finds much better solutions in terms of cost (up to 20%) compared to the planners relying on cardinal moves only.",
"title": ""
},
{
"docid": "39d15901cd5fbd1629d64a165a94c5f5",
"text": "This paper shows how to use modular Marx multilevel converter diode (M3CD) modules to apply unipolar or bipolar high-voltage pulses for pulsed power applications. The M3CD cells allow the assembly of a multilevel converter without needing complex algorithms and parameter measurement to balance the capacitor voltages. This paper also explains how to supply all the modular cells in order to ensure galvanic isolation between control circuits and power circuits. The experimental results for a generator with seven levels, and unipolar and bipolar pulses into resistive, inductive, and capacitive loads are presented.",
"title": ""
},
{
"docid": "a478928c303153172133d805ac35c6cc",
"text": "Chest X-ray is one of the most accessible medical imaging technique for diagnosis of multiple diseases. With the availability of ChestX-ray14, which is a massive dataset of chest X-ray images and provides annotations for 14 thoracic diseases; it is possible to train Deep Convolutional Neural Networks (DCNN) to build Computer Aided Diagnosis (CAD) systems. In this work, we experiment a set of deep learning models and present a cascaded deep neural network that can diagnose all 14 pathologies better than the baseline and is competitive with other published methods. Our work provides the quantitative results to answer following research questions for the dataset: 1) What loss functions to use for training DCNN from scratch on ChestXray14 dataset that demonstrates high class imbalance and label co occurrence? 2) How to use cascading to model label dependency and to improve accuracy of the deep learning model?",
"title": ""
},
{
"docid": "799517016245ffa33a06795b26e308cc",
"text": "The goal of this ”proyecto fin de carrera” was to produce a review of the face detection and face recognition literature as comprehensive as possible. Face detection was included as a unavoidable preprocessing step for face recogntion, and as an issue by itself, because it presents its own difficulties and challenges, sometimes quite different from face recognition. We have soon recognized that the amount of published information is unmanageable for a short term effort, such as required of a PFC, so in agreement with the supervisor we have stopped at a reasonable time, having reviewed most conventional face detection and face recognition approaches, leaving advanced issues, such as video face recognition or expression invariances, for the future work in the framework of a doctoral research. I have tried to gather much of the mathematical foundations of the approaches reviewed aiming for a self contained work, which is, of course, rather difficult to produce. My supervisor encouraged me to follow formalism as close as possible, preparing this PFC report more like an academic report than an engineering project report.",
"title": ""
},
{
"docid": "d18fc16268e6853cef5002c147ae9827",
"text": "Ant Colony Extended (ACE) is a novel algorithm belonging to the general Ant Colony Optimisation (ACO) framework. Two specific features of ACE are: The division of tasks between two kinds of ants, namely patrollers and foragers, and the implementation of a regulation policy to control the number of each kind of ant during the searching process. This paper explores the performance of ACE in the context of the Travelling Salesman Problem (TSP), a classical combinatorial optimisation problem. The results are compared with the results of two well known ACO algorithms: ACS and MMAS.",
"title": ""
},
{
"docid": "4c004745828100f6ccc6fd660ee93125",
"text": "Steganography has been proposed as a new alternative technique to enforce data security. Lately, novel and versatile audio steganographic methods have been proposed. A perfect audio Steganographic technique aim at embedding data in an imperceptible, robust and secure way and then extracting it by authorized people. Hence, up to date the main challenge in digital audio steganography is to obtain robust high capacity steganographic systems. Leaning towards designing a system that ensures high capacity or robustness and security of embedded data has led to great diversity in the existing steganographic techniques. In this paper, we present a current state of art literature in digital audio steganographic techniques. We explore their potentials and limitations to ensure secure communication. A comparison and an evaluation for the reviewed techniques is also presented in this paper.",
"title": ""
},
{
"docid": "7259530c42f4ba91155284ce909d25a6",
"text": "We investigate how information leakage reduces computational entropy of a random variable X. Recall that HILL and metric computational entropy are parameterized by quality (how distinguishable is X from a variable Z that has true entropy) and quantity (how much true entropy is there in Z). We prove an intuitively natural result: conditioning on an event of probability p reduces the quality of metric entropy by a factor of p and the quantity of metric entropy by log2 1/p (note that this means that the reduction in quantity and quality is the same, because the quantity of entropy is measured on logarithmic scale). Our result improves previous bounds of Dziembowski and Pietrzak (FOCS 2008), where the loss in the quantity of entropy was related to its original quality. The use of metric entropy simplifies the analogous the result of Reingold et. al. (FOCS 2008) for HILL entropy. Further, we simplify dealing with information leakage by investigating conditional metric entropy. We show that, conditioned on leakage of λ bits, metric entropy gets reduced by a factor 2 in quality and λ in quantity. Our formulation allow us to formulate a “chain rule” for leakage on computational entropy. We show that conditioning on λ bits of leakage reduces conditional metric entropy by λ bits. This is the same loss as leaking from unconditional metric entropy. This result makes it easy to measure entropy even after several rounds of information leakage.",
"title": ""
},
{
"docid": "3d93c45e2374a7545c6dff7de0714352",
"text": "Building an interest model is the key to realize personalized text recommendation. Previous interest models neglect the fact that a user may have multiple angles of interest. Different angles of interest provide different requests and criteria for text recommendation. This paper proposes an interest model that consists of two kinds of angles: persistence and pattern, which can be combined to form complex angles. The model uses a new method to represent the long-term interest and the short-term interest, and distinguishes the interest in object and the interest in the link structure of objects. Experiments with news-scale text data show that the interest in object and the interest in link structure have real requirements, and it is effective to recommend texts according to the angles. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f741eb8ca9fb9798fb89674a0e045de9",
"text": "We investigate the issue of model uncertainty in cross-country growth regressions using Bayesian Model Averaging (BMA). We find that the posterior probability is very spread among many models suggesting the superiority of BMA over choosing any single model. Out-of-sample predictive results support this claim. In contrast with Levine and Renelt (1992), our results broadly support the more “optimistic” conclusion of Sala-i-Martin (1997b), namely that some variables are important regressors for explaining cross-country growth patterns. However, care should be taken in the methodology employed. The approach proposed here is firmly grounded in statistical theory and immediately leads to posterior and predictive inference.",
"title": ""
},
{
"docid": "9489ca5b460842d5a8a65504965f0bd5",
"text": "This article, based on a tutorial the author presented at ITC 2008, is an overview and introduction to mixed-signal production test. The article focuses on the fundamental techniques and procedures in production test and explores key issues confronting the industry.",
"title": ""
},
{
"docid": "ba1b3fb5f147b5af173e5f643a2794e0",
"text": "The objective of this study is to examine how personal factors such as lifestyle, personality, and economic situations affect the consumer behavior of Malaysian university students. A quantitative approach was adopted and a self-administered questionnaire was distributed to collect data from university students. Findings illustrate that ‘personality’ influences the consumer behavior among Malaysian university student. This study also noted that the economic situation had a negative relationship with consumer behavior. Findings of this study improve our understanding of consumer behavior of Malaysian University Students. The findings of this study provide valuable insights in identifying and taking steps to improve on the services, ambience, and needs of the student segment of the Malaysian market.",
"title": ""
},
{
"docid": "e1a4e8b8c892f1e26b698cd9fd37c3db",
"text": "Social networks such as Facebook, MySpace, and Twitter have become increasingly important for reaching millions of users. Consequently, spammers are increasing using such networks for propagating spam. Existing filtering techniques such as collaborative filters and behavioral analysis filters are able to significantly reduce spam, each social network needs to build its own independent spam filter and support a spam team to keep spam prevention techniques current. We propose a framework for spam detection which can be used across all social network sites. There are numerous benefits of the framework including: 1) new spam detected on one social network, can quickly be identified across social networks; 2) accuracy of spam detection will improve with a large amount of data from across social networks; 3) other techniques (such as blacklists and message shingling) can be integrated and centralized; 4) new social networks can plug into the system easily, preventing spam at an early stage. We provide an experimental study of real datasets from social networks to demonstrate the flexibility and feasibility of our framework.",
"title": ""
},
{
"docid": "830588b6ff02a05b4d76b58a3e4e7c44",
"text": "The integration of GIS and multicriteria decision analysis has attracted significant interest over the last 15 years or so. This paper surveys the GISbased multicriteria decision analysis (GIS-MCDA) approaches using a literature review and classification of articles from 1990 to 2004. An electronic search indicated that over 300 articles appeared in refereed journals. The paper provides taxonomy of those articles and identifies trends and developments in GISMCDA.",
"title": ""
},
{
"docid": "8d43d25619bd80d564c7c32d2592c4ac",
"text": "Feature selection and dimensionality reduction are important steps in pattern recognition. In this paper, we propose a scheme for feature selection using linear independent component analysis and mutual information maximization method. The method is theoretically motivated by the fact that the classification error rate is related to the mutual information between the feature vectors and the class labels. The feasibility of the principle is illustrated on a synthetic dataset and its performance is demonstrated using EEG signal classification. Experimental results show that this method works well for feature selection.",
"title": ""
},
{
"docid": "33c89872c2a1e5b1b2417c58af616560",
"text": "We provide a new proof of the linear convergence of the alternating direction method of multipliers (ADMM) when one of the objective terms is strongly convex. Our proof is based on a framework for analyzing optimization algorithms introduced in Lessard et al. [21], reducing algorithm convergence to verifying the stability of a dynamical system. This approach generalizes a number of existing results and obviates any assumptions about specific choices of algorithm parameters. On a numerical example, we demonstrate that minimizing the derived bound on the convergence rate provides a practical approach to selecting algorithm parameters for particular ADMM instances. We complement our upper bound by constructing a nearly-matching lower bound on the worst-case rate of convergence.",
"title": ""
},
{
"docid": "35a2d7f4b48ffa57951f4c32175dd521",
"text": "This paper introduces the settlement generation competition for Minecraft, the first part of the Generative Design in Minecraft challenge. The settlement generation competition is about creating Artificial Intelligence (AI) agents that can produce functional, aesthetically appealing and believable settlements adapted to a given Minecraft map---ideally at a level that can compete with human created designs. The aim of the competition is to advance procedural content generation for games, especially in overcoming the challenges of adaptive and holistic PCG. The paper introduces the technical details of the challenge, but mostly focuses on what challenges this competition provides and why they are scientifically relevant.",
"title": ""
},
{
"docid": "4958f4a85b531a2d5a846d1f6eb1a5a3",
"text": "The n-channel lateral double-diffused metal-oxide- semiconductor (nLDMOS) devices in high-voltage (HV) technologies are known to have poor electrostatic discharge (ESD) robustness. To improve the ESD robustness of nLDMOS, a co-design method combining a new waffle layout structure and a trigger circuit is proposed to fulfill the body current injection technique in this work. The proposed layout and circuit co-design method on HV nLDMOS has successfully been verified in a 0.5-¿m 16-V bipolar-CMOS-DMOS (BCD) process and a 0.35- ¿m 24-V BCD process without using additional process modification. Experimental results through transmission line pulse measurement and failure analyses have shown that the proposed body current injection technique can significantly improve the ESD robustness of HV nLDMOS.",
"title": ""
}
] |
scidocsrr
|
05b68ee6825aa4e1447fe5bd81141832
|
FREE-p: Protecting non-volatile memory against both hard and soft errors
|
[
{
"docid": "cf5cd34ea664a81fabe0460e4e040a2d",
"text": "A novel p-trench phase-change memory (PCM) cell and its integration with a MOSFET selector in a standard 0.18 /spl mu/m CMOS technology are presented. The high-performance capabilities of PCM cells are experimentally investigated and their application in embedded systems is discussed. Write times as low as 10 ns and 20 ns have been measured for the RESET and SET operation, respectively, still granting a 10/spl times/ read margin. The impact of the RESET pulse on PCH cell endurance has been also evaluated. Finally, cell distributions and first statistical endurance measurements on a 4 Mbit MOS demonstrator clearly assess the feasibility of the PCM technology.",
"title": ""
}
] |
[
{
"docid": "20c3bfb61bae83494d7451b083bc2202",
"text": "Peripheral nerve hyperexcitability (PNH) syndromes can be subclassified as primary and secondary. The main primary PNH syndromes are neuromyotonia, cramp-fasciculation syndrome (CFS), and Morvan's syndrome, which cause widespread symptoms and signs without the association of an evident peripheral nerve disease. Their major symptoms are muscle twitching and stiffness, which differ only in severity between neuromyotonia and CFS. Cramps, pseudomyotonia, hyperhidrosis, and some other autonomic abnormalities, as well as mild positive sensory phenomena, can be seen in several patients. Symptoms reflecting the involvement of the central nervous system occur in Morvan's syndrome. Secondary PNH syndromes are generally seen in patients with focal or diffuse diseases affecting the peripheral nervous system. The PNH-related symptoms and signs are generally found incidentally during clinical or electrodiagnostic examinations. The electrophysiological findings that are very useful in the diagnosis of PNH are myokymic and neuromyotonic discharges in needle electromyography along with some additional indicators of increased nerve fiber excitability. Based on clinicopathological and etiological associations, PNH syndromes can also be classified as immune mediated, genetic, and those caused by other miscellaneous factors. There has been an increasing awareness on the role of voltage-gated potassium channel complex autoimmunity in primary PNH pathogenesis. Then again, a long list of toxic compounds and genetic factors has also been implicated in development of PNH. The management of primary PNH syndromes comprises symptomatic treatment with anticonvulsant drugs, immune modulation if necessary, and treatment of possible associated dysimmune and/or malignant conditions.",
"title": ""
},
{
"docid": "94c475dea38adf1f2e3af8b9c7a9bc40",
"text": "The Mining Software Repositories (MSR) research community has grown significantly since the first MSR workshop was held in 2004. As the community continues to broaden its scope and deepens its expertise, it is worthwhile to reflect on the best practices that our community has developed over the past decade of research. We identify these best practices by surveying past MSR conferences and workshops. To that end, we review all 117 full papers published in the MSR proceedings between 2004 and 2012. We extract 268 comments from these papers, and categorize them using a grounded theory methodology. From this evaluation, four high-level themes were identified: data acquisition and preparation, synthesis, analysis, and sharing/replication. Within each theme we identify several common recommendations, and also examine how these recommendations have evolved over the past decade. In an effort to make this survey a living artifact, we also provide a public forum that contains the extracted recommendations in the hopes that the MSR community can engage in a continuing discussion on our evolving best practices.",
"title": ""
},
{
"docid": "188ab32548b91fd1bf1edf34ff3d39d9",
"text": "With the marvelous development of wireless techniques and ubiquitous deployment of wireless systems indoors, myriad indoor location-based services (ILBSs) have permeated into numerous aspects of modern life. The most fundamental functionality is to pinpoint the location of the target via wireless devices. According to how wireless devices interact with the target, wireless indoor localization schemes roughly fall into two categories: device based and device free. In device-based localization, a wireless device (e.g., a smartphone) is attached to the target and computes its location through cooperation with other deployed wireless devices. In device-free localization, the target carries no wireless devices, while the wireless infrastructure deployed in the environment determines the target’s location by analyzing its impact on wireless signals.\n This article is intended to offer a comprehensive state-of-the-art survey on wireless indoor localization from the device perspective. In this survey, we review the recent advances in both modes by elaborating on the underlying wireless modalities, basic localization principles, and data fusion techniques, with special emphasis on emerging trends in (1) leveraging smartphones to integrate wireless and sensor capabilities and extend to the social context for device-based localization, and (2) extracting specific wireless features to trigger novel human-centric device-free localization. We comprehensively compare each scheme in terms of accuracy, cost, scalability, and energy efficiency. Furthermore, we take a first look at intrinsic technical challenges in both categories and identify several open research issues associated with these new challenges.",
"title": ""
},
{
"docid": "6bba3dc4f75d403f387f40174d085463",
"text": "With the proliferation of wireless devices, wireless networks in various forms have become global information infrastructure and an important part of our daily life, which, at the same time, incur fast escalations of both data volumes and energy demand. In other words, energy-efficient wireless networking is a critical and challenging issue in the big data era. In this paper, we provide a comprehensive survey of recent developments on energy-efficient wireless networking technologies that are effective or promisingly effective in addressing the challenges raised by big data. We categorize existing research into two main parts depending on the roles of big data. The first part focuses on energy-efficient wireless networking techniques in dealing with big data and covers studies in big data acquisition, communication, storage, and computation; while the second part investigates recent approaches based on big data analytics that are promising to enhance energy efficiency of wireless networks. In addition, we identify a number of open issues and discuss future research directions for enhancing energy efficiency of wireless networks in the big data era.",
"title": ""
},
{
"docid": "047007485d6a995f6145aadbc07dca8f",
"text": "Commerce is a rapidly emerging application area of ubiquitous computing. In this paper, we discuss the market forces that make the deployment of ubiquitous commerce infrastructures a priority for grocery retailing. We then proceed to report on a study on consumer perceptions of MyGrocer, a recently developed ubiquitous commerce system. The emphasis of the discussion is on aspects of security, privacy protection and the development of trust; we report on the findings of this study. We adopt the enacted view of technology adoption to interpret some of our findings based on three principles for the development of trust. We expect that this interpretation can help to guide the development of appropriate strategies for the successful deployment of ubiquitous commerce systems.",
"title": ""
},
{
"docid": "210acdd097910d183ce1bcd5aefe5b05",
"text": "Imaging spectroscopy is of growing interest as a new apradiation with matter. Imaging spectroscopy in the solar proach to Earth remote sensing. The Airborne Visible/Inreflected spectrum was conceived for the same objective, frared Imaging Spectrometer (AVIRIS) was the first imbut from the Earth looking and regional perspective aging sensor to measure the solar reflected spectrum from (Fig. 1). Molecules and particles of the land, water and 400 nm to 2500 nm at 10 nm intervals. The calibration atmosphere environments interact with solar energy in accuracy and signal-to-noise of AVIRIS remain unique. the 400–2500 nm spectral region through absorption, reThe AVIRIS system as well as the science research and flection, and scattering processes. Imaging spectrometers applications have evolved significantly in recent years. The in the solar reflected spectrum are developed to measure initial design and upgraded characteristics of the AVIRIS spectra as images in some or all of this portion of this system are described in terms of the sensor, calibration, spectrum. These spectral measurements are used to dedata system, and flight operation. This update on the chartermine constituent composition through the physics and acteristics of AVIRIS provides the context for the science chemistry of spectroscopy for science research and appliresearch and applications that use AVIRIS data acquired cations over the regional scale of the image. in the past several years. Recent science research and apTo pursue the objective of imaging spectroscopy, the plications are reviewed spanning investigations of atmoJet Propulsion Laboratory proposed to design and despheric correction, ecology and vegetation, geology and velop the Airborne Visible/Infrared Imaging Spectromesoils, inland and coastal waters, the atmosphere, snow and ter (AVIRIS) in 1983. AVIRIS first measured spectral ice hydrology, biomass burning, environmental hazards, images in 1987 and was the first imaging spectrometer satellite simulation and calibration, commercial applicato measure the solar reflected spectrum from 400 nm to tions, spectral algorithms, human infrastructure, as well as 2500 nm (Fig. 2). AVIRIS measures upwelling radiance spectral modeling. Elsevier Science Inc., 1998 through 224 contiguous spectral channels at 10 nm intervals across the spectrum. These radiance spectra are measured as images of 11 km width and up to 800 km INTRODUCTION length with 20 m spatial resolution. AVIRIS spectral images are acquired from the Q-bay of a NASA ER-2 airSpectroscopy is used in the laboratory in the disciplines craft from an altitude of 20,000 m. The spectral, radioof physics, chemistry, and biology to investigate material metric, and spatial calibration of AVIRIS is determined properties based on the interaction of electromagnetic in laboratory and monitored inflight each year. More than 4 TB of AVIRIS data have been acquired, and the requested data has been calibrated and distributed to inJet Propulsion Laboratory, California Institute of Technology, Pasadena, California vestigators since the initial flights. Address correspondence to R. O. Green, JPL Mail-Stop 306-438, AVIRIS has measured spectral images for science 4800 Oak Grove Dr., Pasadena, CA 91109-8099. E-mail: rog@gomez. research and applications in every year since 1987. More jpl.nasa.gov Received 24 June 1998; accepted 8 July 1998. than 250 papers and abstracts have been written for the",
"title": ""
},
{
"docid": "df37817424721ef034c7f047d9e301ca",
"text": "We propose a new method for obtaining object candidates in 3D space. Our method requires no learning, has no limitation of object properties such as compactness or symmetry, and therefore produces object candidates using a completely general approach. This method is a simple combination of Selective Search, which is a non-learning-based objectness detector working in 2D images, and a supervoxel segmentation method, which works with 3D point clouds. We made a small but non-trivial modification to supervoxel segmentation; it brings better “seeding” for supervoxels, which produces more proper object candidates as a result. Our experiments using a couple of publicly available RGB-D datasets demonstrated that our method outperformed state-of-the-art methods of generating object proposals in 2D images.",
"title": ""
},
{
"docid": "8cc3fa379153f6918b47e57d8ba8c936",
"text": "Skills in emotional intelligence (EI) help healthcare leaders understand, engage and motivate their team. They are essential for dealing well with conflict and creating workable solutions to complex problems. EI skills are grounded in personal competence, upon which build the skills for social competence, including social awareness and relationship management. The leader’s EI skills strongly impact the culture of the organization. This article lists example strategies for building seventeen key emotional intelligence skills that are the foundations for personal and work success and provides examples of their appropriate use as well as their destructive under-use and over-use. Many examples are those incorporated into our healthcare-related leadership development institutes offered at the University of North Carolina’s Gillings School of Global Public Health.",
"title": ""
},
{
"docid": "3aa9e2758cf06c3487af19c884eae382",
"text": "Exciting developments in eye-wearable technology and its potential industrial applications warrant a thorough understanding of its advantages and drawbacks through empirical evidence. We conducted an experiment to investigate what characteristics of eye-wearable technology impact user performance in machine maintenance, which included a representative set of car maintenance tasks involving Locate, Manipulate, and Compare actions. Participants were asked to follow instructions displayed on one of four technologies: a peripheral eye-wearable display, a central eye-wearable display, a tablet, or a paper manual. We found a significant effect of display position: the peripheral eye-wearable display resulted in longer completion time than the central display; but no effect for hands-free operation. The technology effects were also modulated by different Tasks and Action types. We discuss the human factors implications for designing more effective eye-wearable technology, including display position, issues of monocular display, and how the physical proximity of the technology affects users' reliance level.",
"title": ""
},
{
"docid": "e2988860c1e8b4aebd6c288d37d1ca4e",
"text": "Numerous studies have shown that datacenter computers rarely operate at full utilization, leading to a number of proposals for creating servers that are energy proportional with respect to the computation that they are performing.\n In this paper, we show that as servers themselves become more energy proportional, the datacenter network can become a significant fraction (up to 50%) of cluster power. In this paper we propose several ways to design a high-performance datacenter network whose power consumption is more proportional to the amount of traffic it is moving -- that is, we propose energy proportional datacenter networks.\n We first show that a flattened butterfly topology itself is inherently more power efficient than the other commonly proposed topology for high-performance datacenter networks. We then exploit the characteristics of modern plesiochronous links to adjust their power and performance envelopes dynamically. Using a network simulator, driven by both synthetic workloads and production datacenter traces, we characterize and understand design tradeoffs, and demonstrate an 85% reduction in power --- which approaches the ideal energy-proportionality of the network.\n Our results also demonstrate two challenges for the designers of future network switches: 1) We show that there is a significant power advantage to having independent control of each unidirectional channel comprising a network link, since many traffic patterns show very asymmetric use, and 2) system designers should work to optimize the high-speed channel designs to be more energy efficient by choosing optimal data rate and equalization technology. Given these assumptions, we demonstrate that energy proportional datacenter communication is indeed possible.",
"title": ""
},
{
"docid": "1f56fb6b6f21eb95a903190a826da6f6",
"text": "Frustration is used as a criterion for identifying usability problems (UPs) and for rating their severity in a few of the existing severity scales, but it is not operationalized. No research has systematically examined how frustration varies with the severity of UPs. We aimed to address these issues with a hybrid approach, using Self-Assessment Manikin, comments elicited with Cued-Recall Debrief, galvanic skin responses (GSR) and gaze data. Two empirical studies involving a search task with a website known to have UPs were conducted to substantiate findings and improve on the methodological framework, which could facilitate usability evaluation practice. Results showed no correlation between GSR peaks and severity ratings, but GSR peaks were correlated with frustration scores -- a metric we developed. The Peak-End rule was partially verified. The problematic evaluator effect was the limitation as it confounded the severity ratings of UPs. Future work is aimed to control this effect and to develop a multifaceted severity scale.",
"title": ""
},
{
"docid": "cac7822c1a40b406c998449e2664815f",
"text": "This paper demonstrates the possibility and feasibility of an ultralow-cost antenna-in-package (AiP) solution for the upcoming generation of wireless local area networks (WLANs) denoted as IEEE802.11ad. The iterative design procedure focuses on maximally alleviating the inherent disadvantages of high-volume FR4 process at 60 GHz such as its relatively high material loss and fabrication restrictions. Within the planar antenna package, the antenna element, vertical transition, antenna feedline, and low- and high-speed interfaces are allocated in a vertical schematic. A circular stacked patch antenna renders the antenna package to exhibit 10-dB return loss bandwidth from 57-66 GHz. An embedded coplanar waveguide (CPW) topology is adopted for the antenna feedline and features less than 0.24 dB/mm in unit loss, which is extracted from measured parametric studies. The fabricated single antenna package is 9 mm × 6 mm × 0.404 mm in dimension. A multiple-element antenna package is fabricated, and its feasibility for future phase array applications is studied. Far-field radiation measurement using an inhouse radio-frequency (RF) probe station validates the single-antenna package to exhibit more than 4.1-dBi gain and 76% radiation efficiency.",
"title": ""
},
{
"docid": "82ff2197019f2fbe6285349b4ed43ac7",
"text": "OBJECTIVES\nUsing data from a regional census of high school students, we have documented the prevalence of cyberbullying and school bullying victimization and their associations with psychological distress.\n\n\nMETHODS\nIn the fall of 2008, 20,406 ninth- through twelfth-grade students in MetroWest Massachusetts completed surveys assessing their bullying victimization and psychological distress, including depressive symptoms, self-injury, and suicidality.\n\n\nRESULTS\nA total of 15.8% of students reported cyberbullying and 25.9% reported school bullying in the past 12 months. A majority (59.7%) of cyberbullying victims were also school bullying victims; 36.3% of school bullying victims were also cyberbullying victims. Victimization was higher among nonheterosexually identified youths. Victims report lower school performance and school attachment. Controlled analyses indicated that distress was highest among victims of both cyberbullying and school bullying (adjusted odds ratios [AORs] were from 4.38 for depressive symptoms to 5.35 for suicide attempts requiring medical treatment). Victims of either form of bullying alone also reported elevated levels of distress.\n\n\nCONCLUSIONS\nOur findings confirm the need for prevention efforts that address both forms of bullying and their relation to school performance and mental health.",
"title": ""
},
{
"docid": "be7f7d9c6a28b7d15ec381570752de95",
"text": "Neural network are most popular in the research community due to its generalization abilities. Additionally, it has been successfully implemented in biometrics, features selection, object tracking, document image preprocessing and classification. This paper specifically, clusters, summarize, interpret and evaluate neural networks in document Image preprocessing. The importance of the learning algorithms in neural networks training and testing for preprocessing is also highlighted. Finally, a critical analysis on the reviewed approaches and the future research guidelines in the field are suggested.",
"title": ""
},
{
"docid": "fc779c615e0661c6247998532fee55cc",
"text": "This paper presents a challenge to the community: given a large corpus of written text aligned to its normalized spoken form, train an RNN to learn the correct normalization function. We present a data set of general text where the normalizations were generated using an existing text normalization component of a text-to-speech system. This data set will be released open-source in the near future. We also present our own experiments with this data set with a variety of different RNN architectures. While some of the architectures do in fact produce very good results when measured in terms of overall accuracy, the errors that are produced are problematic, since they would convey completely the wrong message if such a system were deployed in a speech application. On the other hand, we show that a simple FST-based filter can mitigate those errors, and achieve a level of accuracy not achievable by the RNN alone. Though our conclusions are largely negative on this point, we are actually not arguing that the text normalization problem is intractable using an pure RNN approach, merely that it is not going to be something that can be solved merely by having huge amounts of annotated text data and feeding that to a general RNN model. Andwhenwe open-source our data, we will be providing a novel data set for sequenceto-sequence modeling in the hopes that the the community can find better solutions.",
"title": ""
},
{
"docid": "bc0ca1e4f698fff9277e5bbcf8c8b797",
"text": "This paper presents a hybrid method combining a vector fitting (VF) and a global optimization for diagnosing coupled resonator bandpass filters. The method can extract coupling matrix from the measured or electromagnetically simulated admittance parameters (Y -parameters) of a narrow band coupled resonator bandpass filter with losses. The optimization method is used to remove the phase shift effects of the measured or the EM simulated Y -parameters caused by the loaded transmission lines at the input/output ports of a filter. VF is applied to determine the complex poles and residues of the Y -parameters without phase shift. The coupling matrix can be extracted (also called the filter diagnosis) by these complex poles and residues. The method can be used to computer-aided tuning (CAT) of a filter in the stage of this filter design and/or product process to accelerate its physical design. Three application examples illustrate the validity of the proposed method.",
"title": ""
},
{
"docid": "6050bd9f60b92471866d2935d42fce2d",
"text": "As one of the successful forms of using Wisdom of Crowd, crowdsourcing, has been widely used for many human intrinsic tasks, such as image labeling, natural language understanding, market predication and opinion mining. Meanwhile, with advances in pervasive technology, mobile devices, such as mobile phones and tablets, have become extremely popular. These mobile devices can work as sensors to collect multimedia data(audios, images and videos) and location information. This power makes it possible to implement the new crowdsourcing mode: spatial crowdsourcing. In spatial crowdsourcing, a requester can ask for resources related a specific location, the mobile users who would like to take the task will travel to that place and get the data. Due to the rapid growth of mobile device uses, spatial crowdsourcing is likely to become more popular than general crowdsourcing, such as Amazon Turk and Crowdflower. However, to implement such a platform, effective and efficient solutions for worker incentives, task assignment, result aggregation and data quality control must be developed. In this demo, we will introduce gMission, a general spatial crowdsourcing platform, which features with a collection of novel techniques, including geographic sensing, worker detection, and task recommendation. We introduce the sketch of system architecture and illustrate scenarios via several case analysis.",
"title": ""
},
{
"docid": "ddfd19823d6dcfc1bd9c3763ecc30cb0",
"text": "As travelers are becoming more price sensitive, less brand loyal and more sophisticated, Customer Relationship Management (CRM) becomes a strategic necessity for attracting and increasing guests’ patronage. Although CRM in hospitality has overstated the importance of ICT, it is now widely recognised that successful CRM implementation should effectively combine and align ICT functionality with business operations. Given the lack of a widely accepted framework for CRM implementation, this paper proposed a model for managing and integrating ICT capabilities into CRM strategies and business processes. The model argues that successful CRM implementation requires the management and alignment of three managerial processes: ICT, relationship (internal and external) and knowledge management. The model is tested by gathering data from Greek hotels, while findings provide useful practical implications and suggestions for future research. r 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "03f0614b2479fd470eea5ef39c5a93f9",
"text": "In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier's archiving and manuscript policies are encouraged to visit: a r t i c l e i n f o a b s t r a c t Detailed land use/land cover classification at ecotope level is important for environmental evaluation. In this study, we investigate the possibility of using airborne hyperspectral imagery for the classification of ecotopes. In particular, we assess two tree-based ensemble classification algorithms: Adaboost and Random Forest, based on standard classification accuracy, training time and classification stability. Our results show that Adaboost and Random Forest attain almost the same overall accuracy (close to 70%) with less than 1% difference, and both outperform a neural network classifier (63.7%). Random Forest, however, is faster in training and more stable. Both ensemble classifiers are considered effective in dealing with hyperspectral data. Furthermore, two feature selection methods, the out-of-bag strategy and a wrapper approach feature subset selection using the best-first search method are applied. A majority of bands chosen by both methods concentrate between 1.4 and 1.8 μm at the early shortwave infrared region. Our band subset analyses also include the 22 optimal bands between 0.4 and 2.5 μm suggested in Thenkabail et al. (2004). Accuracy assessments of hyperspectral waveband performance for vegetation analysis applications. Remote Sensing of Environment, 91, 354–376.] due to similarity of the target classes. All of the three band subsets considered in this study work well with both classifiers as in most cases the overall accuracy dropped only by less than 1%. A subset of 53 bands is created by combining all feature subsets and comparing to using the entire set the overall accuracy is the same with Adaboost, and with Random Forest, a 0.2% improvement. The strategy to use a basket of band selection methods works better. Ecotopes belonging to the tree classes are in general classified better than the grass classes. Small adaptations of the classification scheme are recommended to improve the applicability of remote sensing method for detailed ecotope mapping. 1. Introduction Land use/land cover classification is a generic tool for environmental monitoring. To measure subtle changes in the ecosystem, a land use/land cover classification at ecotope level with definitive biological and ecological characteristics is needed. Ecotopes are distinct ecological landscape features …",
"title": ""
},
{
"docid": "1d3b2a5906d7db650db042db9ececed1",
"text": "Music consists of precisely patterned sequences of both movement and sound that engage the mind in a multitude of experiences. We move in response to music and we move in order to make music. Because of the intimate coupling between perception and action, music provides a panoramic window through which we can examine the neural organization of complex behaviors that are at the core of human nature. Although the cognitive neuroscience of music is still in its infancy, a considerable behavioral and neuroimaging literature has amassed that pertains to neural mechanisms that underlie musical experience. Here we review neuroimaging studies of explicit sequence learning and temporal production—findings that ultimately lay the groundwork for understanding how more complex musical sequences are represented and produced by the brain. These studies are also brought into an existing framework concerning the interaction of attention and time-keeping mechanisms in perceiving complex patterns of information that are distributed in time, such as those that occur in music.",
"title": ""
}
] |
scidocsrr
|
15548f1bdbf4585bf770b9374151140d
|
SR-SIM: A fast and high performance IQA index based on spectral residual
|
[
{
"docid": "07a1d62b56bd1e2acf4282f69e85fb93",
"text": "Many state-of-the-art perceptual image quality assessment (IQA) algorithms share a common two-stage structure: local quality/distortion measurement followed by pooling. While significant progress has been made in measuring local image quality/distortion, the pooling stage is often done in ad-hoc ways, lacking theoretical principles and reliable computational models. This paper aims to test the hypothesis that when viewing natural images, the optimal perceptual weights for pooling should be proportional to local information content, which can be estimated in units of bit using advanced statistical models of natural images. Our extensive studies based upon six publicly-available subject-rated image databases concluded with three useful findings. First, information content weighting leads to consistent improvement in the performance of IQA algorithms. Second, surprisingly, with information content weighting, even the widely criticized peak signal-to-noise-ratio can be converted to a competitive perceptual quality measure when compared with state-of-the-art algorithms. Third, the best overall performance is achieved by combining information content weighting with multiscale structural similarity measures.",
"title": ""
},
{
"docid": "f0933abbb9df13a12522d87171dae151",
"text": "Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Traditionally, image QA algorithms interpret image quality as fidelity or similarity with a \"reference\" or \"perfect\" image in some perceptual space. Such \"full-reference\" QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by arbitrary signal fidelity criteria. In this paper, we approach the problem of image QA by proposing a novel information fidelity criterion that is based on natural scene statistics. QA systems are invariably involved with judging the visual quality of images and videos that are meant for \"human consumption\". Researchers have developed sophisticated models to capture the statistics of natural signals, that is, pictures and videos of the visual environment. Using these statistical models in an information-theoretic setting, we derive a novel QA algorithm that provides clear advantages over the traditional approaches. In particular, it is parameterless and outperforms current methods in our testing. We validate the performance of our algorithm with an extensive subjective study involving 779 images. We also show that, although our approach distinctly departs from traditional HVS-based methods, it is functionally similar to them under certain conditions, yet it outperforms them due to improved modeling. The code and the data from the subjective study are available at [1].",
"title": ""
}
] |
[
{
"docid": "f83bf92a38f1ce7734a5c1abce65f92f",
"text": "This paper presents an Adaptive fuzzy logic PID controller for speed control of Brushless Direct current Motor drives which is widely used in various industrial systems, such as servo motor drives, medical, automobile and aerospace industry. BLDC motors were electronically commutated motor offer many advantages over Brushed DC Motor which includes increased efficiency, longer life, low volume and high torque. This paper presents an overview of performance of fuzzy PID controller and Adaptive fuzzy PID controller using Simulink model. Tuning Parameters and computing using Normal PID controller is difficult and also it does not give satisfied control characteristics when compare to Adaptive Fuzzy PID controller. From the Simulation results we verify that Adaptive Fuzzy PID controller give better control performance when compared to fuzzy PID controller. The software Package SIMULINK was used in control and Modelling of BLDC Motor.",
"title": ""
},
{
"docid": "90e76229ff20e253d8d28e09aad432dc",
"text": "Playing online games is experience-oriented but few studies have explored the user’s initial (trial) reaction to game playing and how this further influences a player’s behavior. Drawing upon the Uses and Gratifications theory, we investigated players’ multiple gratifications for playing (i.e. achievement, enjoyment and social interaction) and their experience with the service mechanisms offered after they had played an online game. This study explores the important antecedents of players’ proactive ‘‘stickiness” to a specific online game and examines the relationships among these antecedents. The results show that both the gratifications and service mechanisms significantly affect a player’s continued motivation to play, which is crucial to a player’s proactive stickiness to an online game. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8cc42ad71caac7605648166f9049df8e",
"text": "This section considers the application of eye movements to user interfaces—both for analyzing interfaces, measuring usability, and gaining insight into human performance—and as an actual control medium within a human-computer dialogue. The two areas have generally been reported separately; but this book seeks to tie them together. For usability analysis, the user’s eye movements while using the system are recorded and later analyzed retrospectively, but the eye movements do not affect the interface in real time. As a direct control medium, the eye movements are obtained and used in real time as an input to the user-computer dialogue. They might be the sole input, typically for disabled users or hands-busy applications, or they might be used as one of several inputs, combining with mouse, keyboard, sensors, or other devices.",
"title": ""
},
{
"docid": "aeed0f9595c9b40bb03c95d4624dd21c",
"text": "Most research in primary and secondary computing education has focused on understanding learners within formal classroom communities, leaving aside the growing number of promising informal online programming communities where young learners contribute, comment, and collaborate on programs. In this paper, we examined trends in computational participation in Scratch, an online community with over 1 million registered youth designers primarily 11-18 years of age. Drawing on a random sample of 5,000 youth programmers and their activities over three months in early 2012, we examined the quantity of programming concepts used in projects in relation to level of participation, gender, and account age of Scratch programmers. Latent class analyses revealed four unique groups of programmers. While there was no significant link between level of online participation, ranging from low to high, and level of programming sophistication, the exception was a small group of highly engaged users who were most likely to use more complex programming concepts. Groups who only used few of the more sophisticated programming concepts, such as Booleans, variables and operators, were identified as Scratch users new to the site and girls. In the discussion we address the challenges of analyzing young learners' programming in informal online communities and opportunities for designing more equitable computational participation.",
"title": ""
},
{
"docid": "9ffd270f9b674d84403349346d662cf7",
"text": "Predicting the fast-rising young researchers (Academic Rising Stars) in the future provides useful guidance to the research community, e.g., offering competitive candidates to university for young faculty hiring as they are expected to have success academic careers. In this work, given a set of young researchers who have published the first first-author paper recently, we solve the problem of how to effectively predict the top k% researchers who achieve the highest citation increment in ∆t years. We explore a series of factors that can drive an author to be fast-rising and design a novel impact increment ranking learning (IIRL) algorithm that leverages those factors to predict the academic rising stars. Experimental results on the large ArnetMiner dataset with over 1.7 million authors demonstrate the effectiveness of IIRL. Specifically, it outperforms all given benchmark methods, with over 8% average improvement. Further analysis demonstrates that the prediction models for different research topics follow the similar pattern. We also find that temporal features are the best indicators for rising stars prediction, while venue features are less relevant.",
"title": ""
},
{
"docid": "ba4637dd5033fa39d1cb09edb42481ec",
"text": "In this paper we introduce a framework for best first search of minimax trees. Existing best first algorithms like SSS* and DUAL* are formulated as instances of this framework. The framework is built around the Alpha-Beta procedure. Its instances are highly practical, and readily implementable. Our reformulations of SSS* and DUAL* solve the perceived drawbacks of these algorithms. We prove their suitability for practical use by presenting test results with a tournament level chess program. In addition to reformulating old best first algorithms, we introduce an improved instance of the framework: MTD(ƒ). This new algorithm outperforms NegaScout, the current algorithm of choice of most chess programs. Again, these are not simulation results, but results of tests with an actual chess program, Phoenix.",
"title": ""
},
{
"docid": "8528524a102c8fb6f29a4e3f6378ad76",
"text": "Matrix multiplication is a fundamental kernel of many high performance and scientific computing applications. Most parallel implementations use classical O(n3) matrix multiplication, even though there exist algorithms with lower arithmetic complexity. We recently presented a new Communication-Avoiding Parallel Strassen algorithm (CAPS), based on Strassen's fast matrix multiplication, that minimizes communication (SPAA '12). It communicates asymptotically less than all classical and all previous Strassen-based algorithms, and it attains theoretical lower bounds.\n In this paper we show that CAPS is also faster in practice. We benchmark and compare its performance to previous algorithms on Hopper (Cray XE6), Intrepid (IBM BG/P), and Franklin (Cray XT4). We demonstrate significant speedups over previous algorithms both for large matrices and for small matrices on large numbers of processors. We model and analyze the performance of CAPS and predict its performance on future exascale platforms.",
"title": ""
},
{
"docid": "6ef52ad99498d944e9479252d22be9c8",
"text": "The problem of detecting rectangular structures in images arises in many applications, from building extraction in aerial images to particle detection in cryo-electron microscopy. This paper proposes a new technique for rectangle detection using a windowed Hough transform. Every pixel of the image is scanned, and a sliding window is used to compute the Hough transform of small regions of the image. Peaks of the Hough image (which correspond to line segments) are then extracted, and a rectangle is detected when four extracted peaks satisfy certain geometric conditions. Experimental results indicate that the proposed technique produced promising results for both synthetic and natural images.",
"title": ""
},
{
"docid": "42325b507cb2529187a870e30ab727f2",
"text": "Most sentence embedding models typically represent each sentence only using word surface, which makes these models indiscriminative for ubiquitous homonymy and polysemy. In order to enhance representation capability of sentence, we employ conceptualization model to assign associated concepts for each sentence in the text corpus, and then learn conceptual sentence embedding (CSE). Hence, this semantic representation is more expressive than some widely-used text representation models such as latent topic model, especially for short-text. Moreover, we further extend CSE models by utilizing a local attention-based model that select relevant words within the context to make more efficient prediction. In the experiments, we evaluate the CSE models on two tasks, text classification and information retrieval. The experimental results show that the proposed models outperform typical sentence embed-ding models.",
"title": ""
},
{
"docid": "53941b97d7f31af4d3652b7072c1176d",
"text": "This paper presents a new strategy for autonomous navigation of planetary rovers using the fuzzy logic framework and a novel on-board measure of terrain traversabili t y . The navigation strategy is comprised of three simple, independent behaviors with different levels of resolution. The navigation rules for the first behavior, goal-seeking, utilize the global information about the goal position to generate the steering and speed commands that drive the rover to the designated destination. The navigation rules for the second behavior, terrain-traversing, use the regional information about the terrain quality to produce steering and speed commands that guide the rover toward the safest and the most traversable terrain. The inclusion of the regional terrain data (such as slope and roughness) in the rover navigation strategy is a major contribution of this paper. The navigation rules for the third behavior, collision-avoidance, employ the local information about the en-route obstacles to develop steering and speed commands that maneuver the rover around the encountered obstacles. The recommendations of these three behaviors are then integrated through appropriate weighting factors to generate the final control actions for the steering and speed commands that are executed by the rover. The weighting factors are produced b y fuzzy rules that take into account the current status of the rover. The complete rover navigation strategy consists of a total of 37 fuzzy logic rules for the behaviors and their weighting factors. This navigation strategy requires no a priori information about the environment, and uses the on-board traversability analysis to enable the rover to select easy-to-traverse paths autonomously. The Rover Graphical Simulator developed at JPL for test and validation of the navigation rules, as well as for graphical visualization of the rover motion, is described. Three graphical simulation case studies are presented to demonstrate the capabilities of the proposed navigation strategy for planetary rovers. Finally, the navigation algorithm for the Sojourner rover is discussed and compared with the proposed strategy. Simulation studies clearly demonstrate the superior performance of this fuzzy navigation strategy relative to the Sojourner algorithm.",
"title": ""
},
{
"docid": "c4ea83bc1fbddbf13dbe96175a6aec4c",
"text": "Recent work in machine learning and NLP has developed spectral algorithms for many learning tasks involving latent variables. Spectral algorithms rely on singular value decomposition as a basic operation, usually followed by some simple estimation method based on the method of moments. From a theoretical point of view, these methods are appealing in that they offer consistent estimators (and PAC-style guarantees of sample complexity) for several important latent-variable models. This is in contrast to the EM algorithm, which is an extremely successful approach, but which only has guarantees of reaching a local maximum of the likelihood function. From a practical point of view, the methods (unlike EM) have no need for careful initialization, and have recently been shown to be highly efficient (as one example, in work under submission by the authors on learning of latent-variable PCFGs, a spectral algorithm performs at identical accuracy to EM, but is around 20 times faster).",
"title": ""
},
{
"docid": "965472260a2ab6762c8d846040171cfe",
"text": "With growing computing power, physical simulations have become increasingly important in computer graphics. Content creation for movies and interactive computer games relies heavily on physical models, and physicallyinspired interactions have proven to be a great metaphor for shape modeling. This tutorial will acquaint the reader with meshless methods for simulation and modeling. These methods differ from the more common grid or mesh-based methods in that they require less constraints on the spatial discretization. Since the algorithmic structure of simulation algorithms so critically depends on the underlying discretization, we will first treat methods for function approximation from discrete, irregular samples: smoothed particle hydrodynamics and moving least squares. This discussion will include numerical properties as well as complexity considerations. In the second part of this tutorial, we will then treat a number of applications for these approximation schemes. The smoothed particle hydrodynamics framework is used in fluid dynamics and has proven particularly popular in real-time applications. Moving least squares approximations provide higher order consistency, and are therefore suited for the simulation of elastic solids. We will cover both basic elasticity and applications in modeling.",
"title": ""
},
{
"docid": "2472a20493c3319cdc87057cc3d70278",
"text": "Traffic flow prediction is an essential function of traffic information systems. Conventional approaches, using artificial neural networks with narrow network architecture and poor training samples for supervised learning, have been only partially successful. In this paper, a deep-learning neural-network based on TensorFlow™ is suggested for the prediction traffic flow conditions, using real-time traffic data. Until now, no research has applied the TensorFlow™ deep learning neural network model to the estimation of traffic conditions. The suggested supervised model is trained by a deep learning algorithm, which uses real traffic data aggregated every five minutes. Results demonstrate that the model's accuracy rate is around 99%.",
"title": ""
},
{
"docid": "6a80eb8001380f4d63a8cf3f3693f73c",
"text": "Traditional energy measurement fails to provide support to consumers to make intelligent decisions to save energy. Non-intrusive load monitoring is one solution that provides disaggregated power consumption profiles. Machine learning approaches rely on public datasets to train parameters for their algorithms, most of which only provide low-frequency appliance-level measurements, thus limiting the available feature space for recognition.\n In this paper, we propose a low-cost measurement system for high-frequency energy data. Our work utilizes an off-the-shelf power strip with a voltage-sensing circuit, current sensors, and a single-board PC as data aggregator. We develop a new architecture and evaluate the system in real-world environments. The self-contained unit for six monitored outlets can achieve up to 50 kHz for all signals simultaneously. A simple design and off-the-shelf components allow us to keep costs low. Equipping a building with our measurement systems is more feasible compared to expensive existing solutions. We used the outlined system architecture to manufacture 20 measurement systems to collect energy data over several months of more than 50 appliances at different locations, with an aggregated size of 15 TB.",
"title": ""
},
{
"docid": "a728f834c27b76189fe2fe9bd6f5f7be",
"text": "The era of big data provides researchers with convenient access to copious data. However, we often have little knowledge of such data. The increasing prevalence of massive data is challenging the traditional methods of learning causality because they were developed for the cases with limited amount of data and strong prior causal knowledge. This survey aims to close the gap between big data and learning causality with a comprehensive and structured review of both traditional and frontier methods followed by a discussion about some open problems of learning causality. We begin with preliminaries of learning causality. Then we categorize and revisit methods of learning causality for the typical problems and data types. After that, we discuss the connections between learning causality and machine learning. At the end, some open problems are presented to show the great potential of learning causality with data.",
"title": ""
},
{
"docid": "82d3217331a70ead8ec3064b663de451",
"text": "The idea of computer vision as the Bayesian inverse problem to computer graphics has a long history and an appealing elegance, but it has proved difficult to directly implement. Instead, most vision tasks are approached via complex bottom-up processing pipelines. Here we show that it is possible to write short, simple probabilistic graphics programs that define flexible generative models and to automatically invert them to interpret real-world images. Generative probabilistic graphics programs consist of a stochastic scene generator, a renderer based on graphics software, a stochastic likelihood model linking the renderer’s output and the data, and latent variables that adjust the fidelity of the renderer and the tolerance of the likelihood model. Representations and algorithms from computer graphics, originally designed to produce high-quality images, are instead used as the deterministic backbone for highly approximate and stochastic generative models. This formulation combines probabilistic programming, computer graphics, and approximate Bayesian computation, and depends only on general-purpose, automatic inference techniques. We describe two applications: reading sequences of degraded and adversarially obscured alphanumeric characters, and inferring 3D road models from vehicle-mounted camera images. Each of the probabilistic graphics programs we present relies on under 20 lines of probabilistic code, and supports accurate, approximately Bayesian inferences about ambiguous real-world images.",
"title": ""
},
{
"docid": "b4efebd49c8dd2756a4c2fb86b854798",
"text": "Mobile technologies (including handheld and wearable devices) have the potential to enhance learning activities from basic medical undergraduate education through residency and beyond. In order to use these technologies successfully, medical educators need to be aware of the underpinning socio-theoretical concepts that influence their usage, the pre-clinical and clinical educational environment in which the educational activities occur, and the practical possibilities and limitations of their usage. This Guide builds upon the previous AMEE Guide to e-Learning in medical education by providing medical teachers with conceptual frameworks and practical examples of using mobile technologies in medical education. The goal is to help medical teachers to use these concepts and technologies at all levels of medical education to improve the education of medical and healthcare personnel, and ultimately contribute to improved patient healthcare. This Guide begins by reviewing some of the technological changes that have occurred in recent years, and then examines the theoretical basis (both social and educational) for understanding mobile technology usage. From there, the Guide progresses through a hierarchy of institutional, teacher and learner needs, identifying issues, problems and solutions for the effective use of mobile technology in medical education. This Guide ends with a brief look to the future.",
"title": ""
},
{
"docid": "8a8d8b029a23d0d20ff9bd40fe0420bc",
"text": "Humans interact with the environment through sensory and motor acts. Some of these interactions require synchronization among two or more individuals. Multiple-trial designs, which we have used in past work to study interbrain synchronization in the course of joint action, constrain the range of observable interactions. To overcome the limitations of multiple-trial designs, we conducted single-trial analyses of electroencephalography (EEG) signals recorded from eight pairs of guitarists engaged in musical improvisation. We identified hyper-brain networks based on a complex interplay of different frequencies. The intra-brain connections primarily involved higher frequencies (e.g., beta), whereas inter-brain connections primarily operated at lower frequencies (e.g., delta and theta). The topology of hyper-brain networks was frequency-dependent, with a tendency to become more regular at higher frequencies. We also found hyper-brain modules that included nodes (i.e., EEG electrodes) from both brains. Some of the observed network properties were related to musical roles during improvisation. Our findings replicate and extend earlier work and point to mechanisms that enable individuals to engage in temporally coordinated joint action.",
"title": ""
},
{
"docid": "f39abb67a6c392369c5618f5c33d93cf",
"text": "In our research, we view human behavior as a structured sequence of context-sensitive decisions. We develop a conditional probabilistic model for predicting human decisions given the contextual situation. Our approach employs the principle of maximum entropy within the Markov Decision Process framework. Modeling human behavior is reduced to recovering a context-sensitive utility function that explains demonstrated behavior within the probabilistic model. In this work, we review the development of our probabilistic model (Ziebart et al. 2008a) and the results of its application to modeling the context-sensitive route preferences of drivers (Ziebart et al. 2008b). We additionally expand the approach’s applicability to domains with stochastic dynamics, present preliminary experiments on modeling time-usage, and discuss remaining challenges for applying our approach to other human behavior modeling problems.",
"title": ""
},
{
"docid": "65aa27cc08fd1f3532f376b536c452ba",
"text": "Design work and design knowledge in Information Systems (IS) is important for both research and practice. Yet there has been comparatively little critical attention paid to the problem of specifying design theory so that it can be communicated, justified, and developed cumulatively. In this essay we focus on the structural components or anatomy of design theories in IS as a special class of theory. In doing so, we aim to extend the work of Walls, Widemeyer and El Sawy (1992) on the specification of information systems design theories (ISDT), drawing on other streams of thought on design research and theory to provide a basis for a more systematic and useable formulation of these theories. We identify eight separate components of design theories: (1) purpose and scope, (2) constructs, (3) principles of form and function, (4) artifact mutability, (5) testable propositions, (6) justificatory knowledge (kernel theories), (7) principles of implementation, and (8) an expository instantiation. This specification includes components missing in the Walls et al. adaptation of Dubin (1978) and Simon (1969) and also addresses explicitly problems associated with the role of instantiations and the specification of design theories for methodologies and interventions as well as for products and applications. The essay is significant as the unambiguous establishment of design knowledge as theory gives a sounder base for arguments for the rigor and legitimacy of IS as an applied discipline and for its continuing progress. A craft can proceed with the copying of one example of a design artifact by one artisan after another. A discipline cannot.",
"title": ""
}
] |
scidocsrr
|
3ea0abc77eb491408e1780d7ed031f37
|
A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors
|
[
{
"docid": "e2b30aa74b60454b14ceb50d6d19f84f",
"text": "In this paper we present a novel algorithm for fast and robust stereo visual odometry based on feature selection and tracking (SOFT). The reduction of drift is based on careful selection of a subset of stable features and their tracking through the frames. Rotation and translation between two consecutive poses are estimated separately. The five point method is used for rotation estimation, whereas the three point method is used for estimating translation. Experimental results show that the proposed algorithm has an average pose error of 1.03% with processing speed above 10 Hz. According to publicly available KITTI leaderboard, SOFT outperforms all other validated methods. We also present a modified IMU-aided version of the algorithm, fast and suitable for embedded systems. This algorithm employs an IMU for outlier rejection and Kalman filter for rotation refinement. Experiments show that the IMU based system runs at 20 Hz on an ODROID U3 ARM-based embedded computer without any hardware acceleration. Integration of all components is described and experimental results are presented.",
"title": ""
}
] |
[
{
"docid": "9af4c955b7c08ca5ffbfabc9681f9525",
"text": "The emergence of deep neural networks (DNNs) as a state-of-the-art machine learning technique has enabled a variety of artificial intelligence applications for image recognition, speech recognition and translation, drug discovery, and machine vision. These applications are backed by large DNN models running in serving mode on a cloud computing infrastructure to process client inputs such as images, speech segments, and text segments. Given the compute-intensive nature of large DNN models, a key challenge for DNN serving systems is to minimize the request response latencies. This paper characterizes the behavior of different parallelism techniques for supporting scalable and responsive serving systems for large DNNs. We identify and model two important properties of DNN workloads: 1) homogeneous request service demand and 2) interference among requests running concurrently due to cache/memory contention. These properties motivate the design of serving deep learning systems fast (SERF), a dynamic scheduling framework that is powered by an interference-aware queueing-based analytical model. To minimize response latency for DNN serving, SERF quickly identifies and switches to the optimal parallel configuration of the serving system by using both empirical and analytical methods. Our evaluation of SERF using several well-known benchmarks demonstrates its good latency prediction accuracy, its ability to correctly identify optimal parallel configurations for each benchmark, its ability to adapt to changing load conditions, and its efficiency advantage (by at least three orders of magnitude faster) over exhaustive profiling. We also demonstrate that SERF supports other scheduling objectives and can be extended to any general machine learning serving system with the similar parallelism properties as above.",
"title": ""
},
{
"docid": "afae94714340326278c1629aa4ecc48c",
"text": "The purpose of this investigation was to examine the influence of upper-body static stretching and dynamic stretching on upper-body muscular performance. Eleven healthy men, who were National Collegiate Athletic Association Division I track and field athletes (age, 19.6 +/- 1.7 years; body mass, 93.7 +/- 13.8 kg; height, 183.6 +/- 4.6 cm; bench press 1 repetition maximum [1RM], 106.2 +/- 23.0 kg), participated in this study. Over 4 sessions, subjects participated in 4 different stretching protocols (i.e., no stretching, static stretching, dynamic stretching, and combined static and dynamic stretching) in a balanced randomized order followed by 4 tests: 30% of 1 RM bench throw, isometric bench press, overhead medicine ball throw, and lateral medicine ball throw. Depending on the exercise, test peak power (Pmax), peak force (Fmax), peak acceleration (Amax), peak velocity (Vmax), and peak displacement (Dmax) were measured. There were no differences among stretch trials for Pmax, Fmax, Amax, Vmax, or Dmax for the bench throw or for Fmax for the isometric bench press. For the overhead medicine ball throw, there were no differences among stretch trials for Vmax or Dmax. For the lateral medicine ball throw, there was no difference in Vmax among stretch trials; however, Dmax was significantly larger (p </= 0.05) for the static and dynamic condition compared to the static-only condition. In general, there was no short-term effect of stretching on upper-body muscular performance in young adult male athletes, regardless of stretch mode, potentially due to the amount of rest used after stretching before the performances. Since throwing performance was largely unaffected by static or dynamic upper-body stretching, athletes competing in the field events could perform upper-body stretching, if enough time were allowed before the performance. However, prior studies on lower-body musculature have demonstrated dramatic negative effects on speed and power. Therefore, it is recommended that a dynamic warm-up be used for the entire warm-up.",
"title": ""
},
{
"docid": "f5e5cc6153577760d776ea95f1fc4f8e",
"text": "This paper proposes a new steganographic scheme relying on the principle of “cover-source switching”, the key idea being that the embedding should switch from one coversource to another. The proposed implementation, called Natural Steganography, considers the sensor noise naturally present in the raw images and uses the principle that, by the addition of a specific noise the steganographic embedding tries to mimic a change of ISO sensitivity. The embedding methodology consists in 1) perturbing the image in the raw domain, 2) modeling the perturbation in the processed domain, 3) embedding the payload in the processed domain. We show that this methodology is easily tractable whenever the processes are known and enables to embed large and undetectable payloads. We also show that already used heuristics such as synchronization of embedding changes or detectability after rescaling can be respectively explained by operations such as color demosaicing and down-scaling kernels.",
"title": ""
},
{
"docid": "2a27ce697a675c1fad1005dbf10220d8",
"text": "The default mode network (DMN) of the brain consists of areas that are typically more active during rest than during active task performance. Recently however, this network has been shown to be activated by certain types of tasks. Social cognition, particularly higher-order tasks such as attributing mental states to others, has been suggested to activate a network of areas at least partly overlapping with the DMN. Here, we explore this claim, drawing on evidence from meta-analyses of functional MRI data and recent studies investigating the structural and functional connectivity of the social brain. In addition, we discuss recent evidence for the existence of a DMN in non-human primates. We conclude by discussing some of the implications of these observations.",
"title": ""
},
{
"docid": "5c6f1cbb9695b95f936af71cf901e887",
"text": "In this paper we present a method to derive 3D shape and surface texture of a human face from a single image. The method draws on a general flexible 3D face model which is “learned” from examples of individual 3D-face data (Cyberware-scans). In an analysis-by-synthesis loop, the flexible model is matched to the novel face image. From the coloured 3D model obtained by this procedure, we can generate new images of the face across changes in viewpoint and illumination. Moreover, nonrigid transformations which are represented within the flexible model can be applied, for example changes in facial expression. The key problem for generating a flexible face model is the computation of dense correspondence between all given 3D example faces. A new correspondence algorithm is described which is a generalization of common algorithms for optic flow computation to 3D-face data.",
"title": ""
},
{
"docid": "f23ff5a1275911d47459fa9304b4cf7f",
"text": "The input to a neural sequence-tosequence model is often determined by an up-stream system, e.g. a word segmenter, part of speech tagger, or speech recognizer. These up-stream models are potentially error-prone. Representing inputs through word lattices allows making this uncertainty explicit by capturing alternative sequences and their posterior probabilities in a compact form. In this work, we extend the TreeLSTM (Tai et al., 2015) into a LatticeLSTM that is able to consume word lattices, and can be used as encoder in an attentional encoderdecoder model. We integrate lattice posterior scores into this architecture by extending the TreeLSTM’s child-sum and forget gates and introducing a bias term into the attention mechanism. We experiment with speech translation lattices and report consistent improvements over baselines that translate either the 1-best hypothesis or the lattice without posterior scores.",
"title": ""
},
{
"docid": "f6528e225928cc5e741c82e70e771441",
"text": "Clouds are rapidly becoming an important platform for scientific applications. In the Cloud environment with uncountable numeric nodes, resource is inevitably unreliable, which has a great effect on task execution and scheduling. In this paper, inspired by Bayesian cognitive model and referring to the trust relationship models of sociology, we first propose a novel Bayesian method based cognitive trust model, and then we proposed a trust dynamic level scheduling algorithm named Cloud-DLS by integrating the existing DLS algorithm. Moreover, a benchmark is structured to span a range of Cloud computing characteristics for evaluation of the proposed method. Theoretical analysis and simulations prove that the Cloud-DLS algorithm can efficiently meet the requirement of Cloud computing workloads in trust, sacrificing fewer time costs, and assuring the execution of tasks in a security way. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "01247caae4c8f4c0ce6193600a7274ea",
"text": "The Oak Ridge Leadership Computing Facility (OLCF) has deployed multiple large-scale parallel file systems (PFS) to support its operations. During this process, OLCF acquired significant expertise in large-scale storage system design, file system software development, technology evaluation, benchmarking, procurement, deployment, and operational practices. Based on the lessons learned from each new PFS deployment, OLCF improved its operating procedures, and strategies. This paper provides an account of our experience and lessons learned in acquiring, deploying, and operating large-scale parallel file systems. We believe that these lessons will be useful to the wider HPC community.",
"title": ""
},
{
"docid": "b71ffe031d4767aa08e2fdb317563bc7",
"text": "Fat-tailed sheep come in various colours—most are either brown (tan) or black. In some, most of the body is white with the tan or black colour restricted to the front portion of the body or to just around the eyes, muzzle and parts of the legs. The Karakul breed is important for the production of lamb skins of various colours for the fashion industry. As well as the black and tan colours there are Karakuls bred for grey or roan shades, a white colour or one of the numerous Sur shades. In the Sur shades, the base of the birthcoat fibre is one of a number of dark shades and the tip a lighter or white shade. All these colours and many others are the result of the interaction of various genes that determine the specifics of the coat colour of the sheep. A number of sets of nomenclature and symbols have been used to represent the various loci and their alleles that are involved. In the 1980s and 1990s, a standardised set, based closely on those of the mouse and other species was developed. Using this as the framework, the alleles of the Extension, Agouti, Brown, Spotting, Pigmented Head and Roan loci are described using fat-tailed sheep (mainly Damara, Karakul and Persian) as examples. Further discussion includes other types of “white markings,” the Ticking locus and the Sur loci.",
"title": ""
},
{
"docid": "ebf31b75aad0eb366959243ab8160131",
"text": "Angiogenesis, the growth of new blood vessels from pre-existing vessels, represents an excellent therapeutic target for the treatment of wound healing and cardiovascular disease. Herein, we report that LPLI (low-power laser irradiation) activates ERK/Sp1 (extracellular signal-regulated kinase/specificity protein 1) pathway to promote VEGF expression and vascular endothelial cell proliferation. We demonstrate for the first time that LPLI enhances DNA-binding and transactivation activity of Sp1 on VEGF promoter in vascular endothelial cells. Moreover, Sp1-regulated transcription is in an ERK-dependent manner. Activated ERK by LPLI translocates from cytoplasm to nuclear and leads to increasing interaction with Sp1, triggering a progressive phosphorylation of Sp1 on Thr453 and Thr739, resulting in the upregulation of VEGF expression. Furthermore, selective inhibition of Sp1 by mithramycin-A or shRNA suppresses the promotion effect of LPLI on cell cycle progression and proliferation, which is also significantly abolished by inhibition of ERK activity. These findings highlight the important roles of ERK/Sp1 pathway in angiogenesis, supplying potential strategy for angiogenesis-related diseases with LPLI treatment.",
"title": ""
},
{
"docid": "cf79cd1f110e2539697390e37e48b8d8",
"text": "This paper investigates an application of mobile sensing: detecting and reporting the surface conditions of roads. We describe a system and associated algorithms to monitor this important civil infrastructure using a collection of sensor-equipped vehicles. This system, which we call the Pothole Patrol (P2), uses the inherent mobility of the participating vehicles, opportunistically gathering data from vibration and GPS sensors, and processing the data to assess road surface conditions. We have deployed P2 on 7 taxis running in the Boston area. Using a simple machine-learning approach, we show that we are able to identify potholes and other severe road surface anomalies from accelerometer data. Via careful selection of training data and signal features, we have been able to build a detector that misidentifies good road segments as having potholes less than 0.2% of the time. We evaluate our system on data from thousands of kilometers of taxi drives, and show that it can successfully detect a number of real potholes in and around the Boston area. After clustering to further reduce spurious detections, manual inspection of reported potholes shows that over 90% contain road anomalies in need of repair.",
"title": ""
},
{
"docid": "f92a71e6094000ecf47ebd02bf4e5c4a",
"text": "Exploding amounts of multimedia data increasingly require automatic indexing and classification, e.g. training classifiers to produce high-level features, or semantic concepts, chosen to represent image content, like car, person, etc. When changing the applied domain (i.e. from news domain to consumer home videos), the classifiers trained in one domain often perform poorly in the other domain due to changes in feature distributions. Additionally, classifiers trained on the new domain alone may suffer from too few positive training samples. Appropriately adapting data/models from an old domain to help classify data in a new domain is an important issue. In this work, we develop a new cross-domain SVM (CDSVM) algorithm for adapting previously learned support vectors from one domain to help classification in another domain. Better precision is obtained with almost no additional computational cost. Also, we give a comprehensive summary and comparative study of the state- of-the-art SVM-based cross-domain learning methods. Evaluation over the latest large-scale TRECVID benchmark data set shows that our CDSVM method can improve mean average precision over 36 concepts by 7.5%. For further performance gain, we also propose an intuitive selection criterion to determine which cross-domain learning method to use for each concept.",
"title": ""
},
{
"docid": "412951e42529d7862cb0bcbaf5bd9f97",
"text": "Wireless Sensor Network is an emerging field which is accomplishing much importance because of its vast contribution in varieties of applications. Wireless Sensor Networks are used to monitor a given field of interest for changes in the environment. Coverage is one of the main active research interests in WSN.In this paper we aim to review the coverage problem In WSN and the strategies that are used in solving coverage problem in WSN.These strategies studied are used during deployment phase of the network. Besides this we also outlined some basic design considerations in coverage of WSN.We also provide a brief summary of various coverage issues and the various approaches for coverage in Sensor network. Keywords— Coverage; Wireless sensor networks: energy efficiency; sensor; area coverage; target Coverage.",
"title": ""
},
{
"docid": "4aec63cb23b43f4d1d2f7ab53cedbff9",
"text": "Presently, there is no recommendation on how to assess functional status of chronic obstructive pulmonary disease (COPD) patients. This study aimed to summarize and systematically evaluate these measures.Studies on measures of COPD patients' functional status published before the end of January 2015 were included using a search filters in PubMed and Web of Science, screening reference lists of all included studies, and cross-checking against some relevant reviews. After title, abstract, and main text screening, the remaining was appraised using the Consensus-based Standards for the Selection of Health Measurement Instruments (COSMIN) 4-point checklist. All measures from these studies were rated according to best-evidence synthesis and the best-rated measures were selected.A total of 6447 records were found and 102 studies were reviewed, suggesting 44 performance-based measures and 14 patient-reported measures. The majority of the studies focused on internal consistency, reliability, and hypothesis testing, but only 21% of them employed good or excellent methodology. Their common weaknesses include lack of checks for unidimensionality, inadequate sample sizes, no prior hypotheses, and improper methods. On average, patient-reported measures perform better than performance-based measures. The best-rated patient-reported measures are functional performance inventory (FPI), functional performance inventory short form (FPI-SF), living with COPD questionnaire (LCOPD), COPD activity rating scale (CARS), University of Cincinnati dyspnea questionnaire (UCDQ), shortness of breath with daily activities (SOBDA), and short-form pulmonary functional status scale (PFSS-11), and the best-rated performance-based measures are exercise testing: 6-minute walk test (6MWT), endurance treadmill test, and usual 4-meter gait speed (usual 4MGS).Further research is needed to evaluate the reliability and validity of performance-based measures since present studies failed to provide convincing evidence. FPI, FPI-SF, LCOPD, CARS, UCDQ, SOBDA, PFSS-11, 6MWT, endurance treadmill test, and usual 4MGS performed well and are preferable to assess functional status of COPD patients.",
"title": ""
},
{
"docid": "bb709a5fd20c517769312787b82911b8",
"text": "Over the past decade, technology has become increasingly important in the lives of adolescents. As a group, adolescents are heavy users of newer electronic communication forms such as instant messaging, e-mail, and text messaging, as well as communication-oriented Internet sites such as blogs, social networking, and sites for sharing photos and videos. Kaveri Subrahmanyam and Patricia Greenfield examine adolescents' relationships with friends, romantic partners, strangers, and their families in the context of their online communication activities. The authors show that adolescents are using these communication tools primarily to reinforce existing relationships, both with friends and romantic partners. More and more they are integrating these tools into their \"offline\" worlds, using, for example, social networking sites to get more information about new entrants into their offline world. Subrahmanyam and Greenfield note that adolescents' online interactions with strangers, while not as common now as during the early years of the Internet, may have benefits, such as relieving social anxiety, as well as costs, such as sexual predation. Likewise, the authors demonstrate that online content itself can be both positive and negative. Although teens find valuable support and information on websites, they can also encounter racism and hate messages. Electronic communication may also be reinforcing peer communication at the expense of communication with parents, who may not be knowledgeable enough about their children's online activities on sites such as the enormously popular MySpace. Although the Internet was once hailed as the savior of education, the authors say that schools today are trying to control the harmful and distracting uses of electronic media while children are at school. The challenge for schools is to eliminate the negative uses of the Internet and cell phones in educational settings while preserving their significant contributions to education and social connection.",
"title": ""
},
{
"docid": "63fef6099108f7990da0a7687e422e14",
"text": "The IWSLT 2017 evaluation campaign has organised three tasks. The Multilingual task, which is about training machine translation systems handling many-to-many language directions, including so-called zero-shot directions. The Dialogue task, which calls for the integration of context information in machine translation, in order to resolve anaphoric references that typically occur in human-human dialogue turns. And, finally, the Lecture task, which offers the challenge of automatically transcribing and translating real-life university lectures. Following the tradition of these reports, we will described all tasks in detail and present the results of all runs submitted by their participants.",
"title": ""
},
{
"docid": "e4c4a0f2bf476892794aebd79c0f05cc",
"text": "Switched reluctance motors (SRMs) have been gaining increasing popularity and emerging as an attractive alternative to traditional electrical motors in hybrid vehicle applications due to their numerous advantages. However, large torque ripple and acoustic noise are its major disadvantages. This paper presents a novel five-phase 15/12 SRM which features higher power density, very low level of vibration with flexibility in controlling the torque ripple profile. This design is classified as an axial field SRM and hence it needs three-dimensional finite-element analysis model. However, an alternative two-dimensional model is presented and some design features and result are discussed in this paper.",
"title": ""
},
{
"docid": "41578beecf11233752833d9152109174",
"text": "In this paper we introduce the idea of improving the performance of parametric temporaldifference (TD) learning algorithms by selectively emphasizing or de-emphasizing their updates on different time steps. In particular, we show that varying the emphasis of linear TD(λ)’s updates in a particular way causes its expected update to become stable under off-policy training. The only prior model-free TD methods to achieve this with per-step computation linear in the number of function approximation parameters are the gradientTD family of methods including TDC, GTD(λ), and GQ(λ). Compared to these methods, our emphatic TD(λ) is simpler and easier to use; it has only one learned parameter vector and one step-size parameter. Our treatment includes general state-dependent discounting and bootstrapping functions, and a way of specifying varying degrees of interest in accurately valuing different states.",
"title": ""
},
{
"docid": "6e44c8087c82e2adce968bf97d2e7dc6",
"text": "We propose an algorithm that is based on the Ant Colony Optimization (ACO) metaheuristic for producing harmonized melodies. The algorithm works in two stages. In the first stage it creates a melody. This melody is then harmonized according to the rules of Baroque harmony in the second stage. This is the first ACO algorithm to create music that uses domain knowledge and the first employed for harmonization of a melody.",
"title": ""
},
{
"docid": "a895b7888b15e49a2140bcea9c20e0b9",
"text": "Deep convolutional neural networks (DNNs) have brought significant performance improvements to face recognition. However the training can hardly be carried out on mobile devices because the training of these models requires much computational power. An individual user with the demand of deriving DNN models from her own datasets usually has to outsource the training procedure onto a cloud or edge server. However this outsourcing method violates privacy because it exposes the users’ data to curious service providers. In this paper, we utilize the differentially private mechanism to enable the privacy-preserving edge based training of DNN face recognition models. During the training, DNN is split between the user device and the edge server in a way that both private data and model parameters are protected, with only a small cost of local computations. We show that our mechanism is capable of training models in different scenarios, e.g., from scratch, or through finetuning over existed models.",
"title": ""
}
] |
scidocsrr
|
d75cf922e9d16103f54658fa33352c86
|
Distributed Data Streams
|
[
{
"docid": "872f556cb441d9c8976e2bf03ebd62ee",
"text": "Monitoring is an issue of primary concern in current and next generation networked systems. For ex, the objective of sensor networks is to monitor their surroundings for a variety of different applications like atmospheric conditions, wildlife behavior, and troop movements among others. Similarly, monitoring in data networks is critical not only for accounting and management, but also for detecting anomalies and attacks. Such monitoring applications are inherently continuous and distributed, and must be designed to minimize the communication overhead that they introduce. In this context we introduce and study a fundamental class of problems called \"thresholded counts\" where we must return the aggregate frequency count of an event that is continuously monitored by distributed nodes with a user-specified accuracy whenever the actual count exceeds a given threshold value.In this paper we propose to address the problem of thresholded counts by setting local thresholds at each monitoring node and initiating communication only when the locally observed data exceeds these local thresholds. We explore algorithms in two categories: static and adaptive thresholds. In the static case, we consider thresholds based on a linear combination of two alternate strategies, and show that there exists an optimal blend of the two strategies that results in minimum communication overhead. We further show that this optimal blend can be found using a steepest descent search. In the adaptive case, we propose algorithms that adjust the local thresholds based on the observed distributions of updated information. We use extensive simulations not only to verify the accuracy of our algorithms and validate our theoretical results, but also to evaluate the performance of our algorithms. We find that both approaches yield significant savings over the naive approach of centralized processing.",
"title": ""
},
{
"docid": "7bdc7740124adab60c726710a003eb87",
"text": "We have developed Gigascope, a stream database for network applications including traffic analysis, intrusion detection, router configuration analysis, network research, network monitoring, and performance monitoring and debugging. Gigascope is undergoing installation at many sites within the AT&T network, including at OC48 routers, for detailed monitoring. In this paper we describe our motivation for and constraints in developing Gigascope, the Gigascope architecture and query language, and performance issues. We conclude with a discussion of stream database research problems we have found in our application.",
"title": ""
}
] |
[
{
"docid": "5f5960cf7621f95687cbbac48dfdb0c5",
"text": "We present the first controller that allows our small hexapod robot, RHex, to descend a wide variety of regular sized, “real-world” stairs. After selecting one of two sets of trajectories, depending on the slope of the stairs, our open-loop, clock-driven controllers require no further operator input nor task level feedback. Energetics for stair descent is captured via specific resistance values and compared to stair ascent and other behaviors. Even though the algorithms developed and validated in this paper were developed for a particular robot, the basic motion strategies, and the phase relationships between the contralateral leg pairs are likely applicable to other hexapod robots of similar size as well.",
"title": ""
},
{
"docid": "476c1e503065f3d1638f6f2302dc6bbb",
"text": "The increasing popularity and ubiquity of various large graph datasets has caused renewed interest for graph partitioning. Existing graph partitioners either scale poorly against large graphs or disregard the impact of the underlying hardware topology. A few solutions have shown that the nonuniform network communication costs may affect the performance greatly. However, none of them considers the impact of resource contention on the memory subsystems (e.g., LLC and Memory Controller) of modern multicore clusters. They all neglect the fact that the bandwidth of modern high-speed networks (e.g., Infiniband) has become comparable to that of the memory subsystems. In this paper, we provide an in-depth analysis, both theoretically and experimentally, on the contention issue for distributed workloads. We found that the slowdown caused by the contention can be as high as 11x. We then design an architecture-aware graph partitioner, Argo, to allow the full use of all cores of multicore machines without suffering from either the contention or the communication heterogeneity issue. Our experimental study showed (1) the effectiveness of Argo, achieving up to 12x speedups on three classic workloads: Breadth First Search, Single Source Shortest Path, and PageRank; and (2) the scalability of Argo in terms of both graph size and the number of partitions on two billion-edge real-world graphs.",
"title": ""
},
{
"docid": "0186c053103d06a8ddd054c3c05c021b",
"text": "The brain-gut axis is a bidirectional communication system between the central nervous system and the gastrointestinal tract. Serotonin functions as a key neurotransmitter at both terminals of this network. Accumulating evidence points to a critical role for the gut microbiome in regulating normal functioning of this axis. In particular, it is becoming clear that the microbial influence on tryptophan metabolism and the serotonergic system may be an important node in such regulation. There is also substantial overlap between behaviours influenced by the gut microbiota and those which rely on intact serotonergic neurotransmission. The developing serotonergic system may be vulnerable to differential microbial colonisation patterns prior to the emergence of a stable adult-like gut microbiota. At the other extreme of life, the decreased diversity and stability of the gut microbiota may dictate serotonin-related health problems in the elderly. The mechanisms underpinning this crosstalk require further elaboration but may be related to the ability of the gut microbiota to control host tryptophan metabolism along the kynurenine pathway, thereby simultaneously reducing the fraction available for serotonin synthesis and increasing the production of neuroactive metabolites. The enzymes of this pathway are immune and stress-responsive, both systems which buttress the brain-gut axis. In addition, there are neural processes in the gastrointestinal tract which can be influenced by local alterations in serotonin concentrations with subsequent relay of signals along the scaffolding of the brain-gut axis to influence CNS neurotransmission. Therapeutic targeting of the gut microbiota might be a viable treatment strategy for serotonin-related brain-gut axis disorders.",
"title": ""
},
{
"docid": "6087e066b04b9c3ac874f3c58979f89a",
"text": "What does it mean for a machine learning model to be ‘fair’, in terms which can be operationalised? Should fairness consist of ensuring everyone has an equal probability of obtaining some benefit, or should we aim instead to minimise the harms to the least advantaged? Can the relevant ideal be determined by reference to some alternative state of affairs in which a particular social pattern of discrimination does not exist? Various definitions proposed in recent literature make different assumptions about what terms like discrimination and fairness mean and how they can be defined in mathematical terms. Questions of discrimination, egalitarianism and justice are of significant interest to moral and political philosophers, who have expended significant efforts in formalising and defending these central concepts. It is therefore unsurprising that attempts to formalise ‘fairness’ in machine learning contain echoes of these old philosophical debates. This paper draws on existing work in moral and political philosophy in order to elucidate emerging debates about fair machine learning.",
"title": ""
},
{
"docid": "8a9603a10e5e02f6edfbd965ee11bbb9",
"text": "The alerts produced by network-based intrusion detection systems, e.g. Snort, can be difficult for network administrators to efficiently review and respond to due to the enormous number of alerts generated in a short time frame. This work describes how the visualization of raw IDS alert data assists network administrators in understanding the current state of a network and quickens the process of reviewing and responding to intrusion attempts. The project presented in this work consists of three primary components. The first component provides a visual mapping of the network topology that allows the end-user to easily browse clustered alerts. The second component is based on the flocking behavior of birds such that birds tend to follow other birds with similar behaviors. This component allows the end-user to see the clustering process and provides an efficient means for reviewing alert data. The third component discovers and visualizes patterns of multistage attacks by profiling the attacker’s behaviors.",
"title": ""
},
{
"docid": "029cca0b7e62f9b52e3d35422c11cea4",
"text": "This letter presents the design of a novel wideband horizontally polarized omnidirectional printed loop antenna. The proposed antenna consists of a loop with periodical capacitive loading and a parallel stripline as an impedance transformer. Periodical capacitive loading is realized by adding interlaced coupling lines at the end of each section. Similarly to mu-zero resonance (MZR) antennas, the periodical capacitive loaded loop antenna proposed in this letter allows current along the loop to remain in phase and uniform. Therefore, it can achieve a horizontally polarized omnidirectional pattern in the far field, like a magnetic dipole antenna, even though the perimeter of the loop is comparable to the operating wavelength. Furthermore, the periodical capacitive loading is also useful to achieve a wide impedance bandwidth. A prototype of the proposed periodical capacitive loaded loop antenna is fabricated and measured. It can provide a wide impedance bandwidth of about 800 MHz (2170-2970 MHz, 31.2%) and a horizontally polarized omnidirectional pattern in the azimuth plane.",
"title": ""
},
{
"docid": "6fb48ddc2f14cdb9371aad67e9c8abe0",
"text": "Being able to predict the course of arbitrary chemical react ions is essential to the theory and applications of organic chemistry. Previous app roaches are not highthroughput, are not generalizable or scalable, or lack suffi cient data to be effective. We describe single mechanistic reactions as concerted elec tron movements from an electron orbital source to an electron orbital sink. We us e an existing rule-based expert system to derive a dataset consisting of 2,989 productive mechanistic steps and6.14 million non-productive mechanistic steps. We then pose ide nt fying productive mechanistic steps as a ranking problem: rank potent ial orbital interactions such that the top ranked interactions yield the major produc ts. The machine learning implementation follows a two-stage approach, in which w e first train atom level reactivity filters to prune94.0% of non-productive reactions with less than a 0.1% false negative rate. Then, we train an ensemble of ranking mo dels n pairs of interacting orbitals to learn a relative productivity func tion over single mechanistic reactions in a given system. Without the use of explicit t ransformation patterns, the ensemble perfectly ranks the productive mechanisms at t he top89.1% of the time, rising to99.9% of the time when top ranked lists with at most four nonproductive reactions are considered. The final system allow s multi-step reaction prediction. Furthermore, it is generalizable, making reas on ble predictions over reactants and conditions which the rule-based expert syste m does not handle.",
"title": ""
},
{
"docid": "5ddcfb5404ceaffd6957fc53b4b2c0d8",
"text": "A router's main function is to allow communication between different networks as quickly as possible and in efficient manner. The communication can be between LAN or between LAN and WAN. A firewall's function is to restrict unwanted traffic. In big networks, routers and firewall tasks are performed by different network devices. But in small networks, we want both functions on same device i.e. one single device performing both routing and firewalling. We call these devices as routing firewall. In Traditional networks, the devices are already available. But the next generation networks will be powered by Software Defined Networks. For wide adoption of SDN, we need northbound SDN applications such as routers, load balancers, firewalls, proxy servers, Deep packet inspection devices, routing firewalls running on OpenFlow based physical and virtual switches. But the SDN is still in early stage, so still there is very less availability of these applications. There already exist simple L3 Learning application which provides very elementary router function and also simple stateful firewalls providing basic access control. In this paper, we are implementing one SDN Routing Firewall Application which will perform both the routing and firewall function.",
"title": ""
},
{
"docid": "6b1adc1da6c75f6cc0cb17820add8ef1",
"text": "Many different classification tasks need to manage structured data, which are usually modeled as graphs. Moreover, these graphs can be dynamic, meaning that the vertices/edges of each graph may change during time. Our goal is to jointly exploit structured data and temporal information through the use of a neural network model. To the best of our knowledge, this task has not been addressed using these kind of architectures. For this reason, we propose two novel approaches, which combine Long Short-Term Memory networks and Graph Convolutional Networks to learn long short-term dependencies together with graph structure. The quality of our methods is confirmed by the promising results achieved.",
"title": ""
},
{
"docid": "e0f0ccb0e1c2f006c5932f6b373fb081",
"text": "This paper proposes a methodology to be used in the segmentation of infrared thermography images for the detection of bearing faults in induction motors. The proposed methodology can be a helpful tool for preventive and predictive maintenance of the induction motor. This methodology is based on manual threshold image processing to obtain a segmentation of an infrared thermal image, which is used for the detection of critical points known as hot spots on the system under test. From these hot spots, the parameters of interest that describe the thermal behavior of the induction motor were obtained. With the segmented image, it is possible to compare and analyze the thermal conditions of the system.",
"title": ""
},
{
"docid": "e95541d0401a196b03b94dd51dd63a4b",
"text": "In the information age, computer applications have become part of modern life and this has in turn encouraged the expectations of friendly interaction with them. Speech, as “the” communication mode, has seen the successful development of quite a number of applications using automatic speech recognition (ASR), including command and control, dictation, dialog systems for people with impairments, translation, etc. But the actual challenge goes beyond the use of speech in control applications or to access information. The goal is to use speech as an information source, competing, for example, with text online. Since the technology supporting computer applications is highly dependent on the performance of the ASR system, research into ASR is still an active topic, as is shown by the range of research directions suggested in (Baker et al., 2009a, 2009b). Automatic speech recognition – the recognition of the information embedded in a speech signal and its transcription in terms of a set of characters, (Junqua & Haton, 1996) – has been object of intensive research for more than four decades, achieving notable results. It is only to be expected that speech recognition advances make spoken language as convenient and accessible as online text when the recognizers reach error rates near zero. But while digit recognition has already reached a rate of 99.6%, (Li, 2008), the same cannot be said of phone recognition, for which the best rates are still under 80% 1,(Mohamed et al., 2011; Siniscalchi et al., 2007). Speech recognition based on phones is very attractive since it is inherently free from vocabulary limitations. Large Vocabulary ASR (LVASR) systems’ performance depends on the quality of the phone recognizer. That is why research teams continue developing phone recognizers, in order to enhance their performance as much as possible. Phone recognition is, in fact, a recurrent problem for the speech recognition community. Phone recognition can be found in a wide range of applications. In addition to typical LVASR systems like (Morris & Fosler-Lussier, 2008; Scanlon et al., 2007; Schwarz, 2008), it can be found in applications related to keyword detection, (Schwarz, 2008), language recognition, (Matejka, 2009; Schwarz, 2008), speaker identification, (Furui, 2005) and applications for music identification and translation, (Fujihara & Goto, 2008; Gruhne et al., 2007). The challenge of building robust acoustic models involves applying good training algorithms to a suitable set of data. The database defines the units that can be trained and",
"title": ""
},
{
"docid": "e59b203f3b104553a84603240ea467eb",
"text": "Experimental art deployed in the Augmented Reality (AR) medium is contributing to a reconfiguration of traditional perceptions of interface, audience participation, and perceptual experience. Artists, critical engineers, and programmers, have developed AR in an experimental topology that diverges from both industrial and commercial uses of the medium. In a general technical sense, AR is considered as primarily an information overlay, a datafied window that situates virtual information in the physical world. In contradistinction, AR as experimental art practice activates critical inquiry, collective participation, and multimodal perception. As an emergent hybrid form that challenges and extends already established 'fine art' categories, augmented reality art deployed on Portable Media Devices (PMD’s) such as tablets & smartphones fundamentally eschews models found in the conventional 'art world.' It should not, however, be considered as inscribing a new 'model:' rather, this paper posits that the unique hybrids advanced by mobile augmented reality art–– also known as AR(t)–– are closely related to the notion of the 'machinic assemblage' ( Deleuze & Guattari 1987), where a deep capacity to re-assemble marks each new artevent. This paper develops a new formulation, the 'software assemblage,’ to explore some of the unique mixed reality situations that AR(t) has set in motion.",
"title": ""
},
{
"docid": "b71197073ea33bb8c61973e8cd7d2775",
"text": "This paper discusses the latest developments in the optimization and fabrication of 3.3kV SiC vertical DMOSFETs. The devices show superior on-state and switching losses compared to the even the latest generation of 3.3kV fast Si IGBTs and promise to extend the upper switching frequency of high-voltage power conversion systems beyond several tens of kHz without the need to increase part count with 3-level converter stacks of faster 1.7kV IGBTs.",
"title": ""
},
{
"docid": "2a1d77e0c5fe71c3c5eab995828ef113",
"text": "Local modular control (LMC) is an approach to the supervisory control theory (SCT) of discrete-event systems that exploits the modularity of plant and specifications. Recently, distinguishers and approximations have been associated with SCT to simplify modeling and reduce synthesis effort. This paper shows how advantages from LMC, distinguishers, and approximations can be combined. Sufficient conditions are presented to guarantee that local supervisors computed by our approach lead to the same global closed-loop behavior as the solution obtained with the original LMC, in which the modeling is entirely handled without distinguishers. A further contribution presents a modular way to design distinguishers and a straightforward way to construct approximations to be used in local synthesis. An example of manufacturing system illustrates our approach. Note to Practitioners—Distinguishers and approximations are alternatives to simplify modeling and reduce synthesis cost in SCT, grounded on the idea of event-refinements. However, this approach may entangle the modular structure of a plant, so that LMC does not keep the same efficiency. This paper shows how distinguishers and approximations can be locally combined such that synthesis cost is reduced and LMC advantages are preserved.",
"title": ""
},
{
"docid": "9b0114697dc6c260610d0badc1d7a2a4",
"text": "This review captures the synthesis, assembly, properties, and applications of copper chalcogenide NCs, which have achieved significant research interest in the last decade due to their compositional and structural versatility. The outstanding functional properties of these materials stems from the relationship between their band structure and defect concentration, including charge carrier concentration and electronic conductivity character, which consequently affects their optoelectronic, optical, and plasmonic properties. This, combined with several metastable crystal phases and stoichiometries and the low energy of formation of defects, makes the reproducible synthesis of these materials, with tunable parameters, remarkable. Further to this, the review captures the progress of the hierarchical assembly of these NCs, which bridges the link between their discrete and collective properties. Their ubiquitous application set has cross-cut energy conversion (photovoltaics, photocatalysis, thermoelectrics), energy storage (lithium-ion batteries, hydrogen generation), emissive materials (plasmonics, LEDs, biolabelling), sensors (electrochemical, biochemical), biomedical devices (magnetic resonance imaging, X-ray computer tomography), and medical therapies (photochemothermal therapies, immunotherapy, radiotherapy, and drug delivery). The confluence of advances in the synthesis, assembly, and application of these NCs in the past decade has the potential to significantly impact society, both economically and environmentally.",
"title": ""
},
{
"docid": "7bfbcf62f9ff94e80913c73e069ace26",
"text": "This paper presents an online highly accurate system for automatic number plate recognition (ANPR) that can be used as a basis for many real-world ITS applications. The system is designed to deal with unclear vehicle plates, variations in weather and lighting conditions, different traffic situations, and high-speed vehicles. This paper addresses various issues by presenting proper hardware platforms along with real-time, robust, and innovative algorithms. We have collected huge and highly inclusive data sets of Persian license plates for evaluations, comparisons, and improvement of various involved algorithms. The data sets include images that were captured from crossroads, streets, and highways, in day and night, various weather conditions, and different plate clarities. Over these data sets, our system achieves 98.7%, 99.2%, and 97.6% accuracies for plate detection, character segmentation, and plate recognition, respectively. The false alarm rate in plate detection is less than 0.5%. The overall accuracy on the dirty plates portion of our data sets is 91.4%. Our ANPR system has been installed in several locations and has been tested extensively for more than a year. The proposed algorithms for each part of the system are highly robust to lighting changes, size variations, plate clarity, and plate skewness. The system is also independent of the number of plates in captured images. This system has been also tested on three other Iranian data sets and has achieved 100% accuracy in both detection and recognition parts. To show that our ANPR is not language dependent, we have tested our system on available English plates data set and achieved 97% overall accuracy.",
"title": ""
},
{
"docid": "90d9360a3e769311a8d7611d8c8845d9",
"text": "We introduce a learning-based approach to detect repeatable keypoints under drastic imaging changes of weather and lighting conditions to which state-of-the-art keypoint detectors are surprisingly sensitive. We first identify good keypoint candidates in multiple training images taken from the same viewpoint. We then train a regressor to predict a score map whose maxima are those points so that they can be found by simple non-maximum suppression. As there are no standard datasets to test the influence of these kinds of changes, we created our own, which we will make publicly available. We will show that our method significantly outperforms the state-of-the-art methods in such challenging conditions, while still achieving state-of-the-art performance on untrained standard datasets.",
"title": ""
},
{
"docid": "6ddb475ef1529ab496ab9f40dc51cb99",
"text": "While inexpensive depth sensors are becoming increasingly ubiquitous, field of view and self-occlusion constraints limit the information a single sensor can provide. For many applications one may instead require a network of depth sensors, registered to a common world frame and synchronized in time. Historically such a setup has required a tedious manual calibration procedure, making it infeasible to deploy these networks in the wild, where spatial and temporal drift are common. In this work, we propose an entirely unsupervised procedure for calibrating the relative pose and time offsets of a pair of depth sensors. So doing, we make no use of an explicit calibration target, or any intentional activity on the part of a user. Rather, we use the unstructured motion of objects in the scene to find potential correspondences between the sensor pair. This yields a rough transform which is then refined with an occlusion-aware energy minimization. We compare our results against the standard checkerboard technique, and provide qualitative examples for scenes in which such a technique would be impossible.",
"title": ""
},
{
"docid": "9d5c258e4a2d315d3e462ab333f3a6df",
"text": "The modern smart phone and car concepts provide a fertile ground for new location-aware applications, ranging from traffic management to social services. While the functionality is partly implemented at the mobile terminal, there is a rising need for efficient backend processing of high-volume, high update rate location streams. It is in this environment that geofencing, the detection of objects traversing virtual fences, is becoming a universal primitive required by an ever-growing number of applications. To satisfy the functionality and performance requirements of large-scale geofencing applications, we present in this work a backend system for indexing massive quantities of mobile objects and geofences. Our system runs on a cluster of servers, achieving a throughput of location updates that scales linearly with number of machines. The key ingredients to achieve a high performance are a specialized spatial index, a dynamic caching mechanism, and a load-sharing principle that reduces communication overhead to a minimum and enables a shared-nothing architecture. The throughput of the spatial index as well as the performance of the overall system are demonstrated by experiments using simulations of large-scale geofencing applications.",
"title": ""
}
] |
scidocsrr
|
5464eda4baec792897d13e706bc05479
|
Barzilai-Borwein Step Size for Stochastic Gradient Descent
|
[
{
"docid": "34459005eaf3a5e5bc9e467ecdf9421c",
"text": "for recovering sparse solutions to an undetermined system of linear equations Ax = b. The algorithm is divided into two stages that are performed repeatedly. In the first stage a first-order iterative method called “shrinkage” yields an estimate of the subset of components of x likely to be nonzero in an optimal solution. Restricting the decision variables x to this subset and fixing their signs at their current values reduces the l1-norm ‖x‖1 to a linear function of x. The resulting subspace problem, which involves the minimization of a smaller and smooth quadratic function, is solved in the second phase. Our code FPC AS embeds this basic two-stage algorithm in a continuation (homotopy) approach by assigning a decreasing sequence of values to μ. This code exhibits state-of-the-art performance both in terms of its speed and its ability to recover sparse signals. It can even recover signals that are not as sparse as required by current compressive sensing theory.",
"title": ""
},
{
"docid": "01835769f2dc9391051869374e200a6a",
"text": "Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (lscr 2) error term added to a sparsity-inducing (usually lscr1) regularizater. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard lscr2-lscr1 case, our framework yields efficient solution techniques for other regularizers, such as an lscrinfin norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard lscr2-lscr1 problem, as well as being efficient on problems with other separable regularization terms.",
"title": ""
}
] |
[
{
"docid": "0666baa7be39ef1887c7f8ce04aaa957",
"text": "BACKGROUND\nEnsuring health worker job satisfaction and motivation are important if health workers are to be retained and effectively deliver health services in many developing countries, whether they work in the public or private sector. The objectives of the paper are to identify important aspects of health worker satisfaction and motivation in two Indian states working in public and private sectors.\n\n\nMETHODS\nCross-sectional surveys of 1916 public and private sector health workers in Andhra Pradesh and Uttar Pradesh, India, were conducted using a standardized instrument to identify health workers' satisfaction with key work factors related to motivation. Ratings were compared with how important health workers consider these factors.\n\n\nRESULTS\nThere was high variability in the ratings for areas of satisfaction and motivation across the different practice settings, but there were also commonalities. Four groups of factors were identified, with those relating to job content and work environment viewed as the most important characteristics of the ideal job, and rated higher than a good income. In both states, public sector health workers rated \"good employment benefits\" as significantly more important than private sector workers, as well as a \"superior who recognizes work\". There were large differences in whether these factors were considered present on the job, particularly between public and private sector health workers in Uttar Pradesh, where the public sector fared consistently lower (P < 0.01). Discordance between what motivational factors health workers considered important and their perceptions of actual presence of these factors were also highest in Uttar Pradesh in the public sector, where all 17 items had greater discordance for public sector workers than for workers in the private sector (P < 0.001).\n\n\nCONCLUSION\nThere are common areas of health worker motivation that should be considered by managers and policy makers, particularly the importance of non-financial motivators such as working environment and skill development opportunities. But managers also need to focus on the importance of locally assessing conditions and managing incentives to ensure health workers are motivated in their work.",
"title": ""
},
{
"docid": "126b62a0ae62c76b43b4fb49f1bf05cd",
"text": "OBJECTIVE\nThe aim of the study was to evaluate efficacy of fractional CO2 vaginal laser treatment (Laser, L) and compare it to local estrogen therapy (Estriol, E) and the combination of both treatments (Laser + Estriol, LE) in the treatment of vulvovaginal atrophy (VVA).\n\n\nMETHODS\nA total of 45 postmenopausal women meeting inclusion criteria were randomized in L, E, or LE groups. Assessments at baseline, 8 and 20 weeks, were conducted using Vaginal Health Index (VHI), Visual Analog Scale for VVA symptoms (dyspareunia, dryness, and burning), Female Sexual Function Index, and maturation value (MV) of Meisels.\n\n\nRESULTS\nForty-five women were included and 3 women were lost to follow-up. VHI average score was significantly higher at weeks 8 and 20 in all study arms. At week 20, the LE arm also showed incremental improvement of VHI score (P = 0.01). L and LE groups showed a significant improvement of dyspareunia, burning, and dryness, and the E arm only of dryness (P < 0.001). LE group presented significant improvement of total Female Sex Function Index (FSFI) score (P = 0.02) and individual domains of pain, desire, and lubrication. In contrast, the L group showed significant worsening of pain domain in FSFI (P = 0.04), but FSFI total scores were comparable in all treatment arms at week 20.\n\n\nCONCLUSIONS\nCO2 vaginal laser alone or in combination with topical estriol is a good treatment option for VVA symptoms. Sexual-related pain with vaginal laser treatment might be of concern.",
"title": ""
},
{
"docid": "7c593a9fc4de5beb89022f7d438ffcb8",
"text": "The design of a low power low drop out voltage regulator with no off-chip capacitor and fast transient responses is presented in this paper. The LDO regulator uses a combination of a low power operational trans-conductance amplifier and comparators to drive the gate of the PMOS pass element. The amplifier ensures stability and accurate setting of the output voltage in addition to power supply rejection. The comparators ensure fast response of the regulator to any load or line transients. A settling time of less than 200ns is achieved in response to a load transient step of 50mA with a rise time of 100ns with an output voltage spike of less than 200mV at an output voltage of 3.25 V. A line transient step of 1V with a rise time of 100ns results also in a settling time of less than 400ns with a voltage spike of less than 100mV when the output voltage is 2.6V. The regulator is fabricated using a standard 0.35μm CMOS process and consumes a quiescent current of only 26 μA.",
"title": ""
},
{
"docid": "5d1059849fccf79d87be7df722475d8f",
"text": "This study provides operational guidance for using naïve Bayes Bayesian network (BN) models in bankruptcy prediction. First, we suggest a heuristic method that guides the selection of bankruptcy predictors from a pool of potential variables. The method is based upon the assumption that the joint distribution of the variables is multivariate normal. Variables are selected based upon correlations and partial correlations information. A naïve Bayes model is developed using the proposed heuristic method and is found to perform well based upon a tenfold analysis, for both samples with complete information and samples with incomplete information. Second, we analyze whether the number of states into which continuous variables are discretized has an impact on a naïve Bayes model performance in bankruptcy prediction. We compare the model’s performance when continuous variables are discretized into two, three, ..., ten, fifteen, and twenty states. Based upon a relatively large training sample, our results show that the naïve Bayes model’s performance increases when the number of states for discretization increases from two to three, and from three to four. Surprisingly, when the number of states increases to more than four, the model’s overall performance neither increases nor decreases. It is possible that the relative large size of training sample used by this study prevents the phenomenon of over fitting from occurring. Finally, we experiment whether modeling continuous variables with continuous distributions instead of discretizing them can improve the naïve Bayes model’s performance. Our finding suggests that this is not true. One possible reason is that continuous distributions tested by this study do not represent well the underlying distributions of empirical data. More importantly, some results of this study could also benefit the implementation of naïve Bayes models in business decision contexts other than bankruptcy prediction.",
"title": ""
},
{
"docid": "19361b2d5e096f26e650b25b745e5483",
"text": "Multispectral pedestrian detection has attracted increasing attention from the research community due to its crucial competence for many around-the-clock applications (e.g., video surveillance and autonomous driving), especially under insufficient illumination conditions. We create a human baseline over the KAIST dataset and reveal that there is still a large gap between current top detectors and human performance. To narrow this gap, we propose a network fusion architecture, which consists of a multispectral proposal network to generate pedestrian proposals, and a subsequent multispectral classification network to distinguish pedestrian instances from hard negatives. The unified network is learned by jointly optimizing pedestrian detection and semantic segmentation tasks. The final detections are obtained by integrating the outputs from different modalities as well as the two stages. The approach significantly outperforms state-of-the-art methods on the KAIST dataset while remain fast. Additionally, we contribute a sanitized version of training annotations for the KAIST dataset, and examine the effects caused by different kinds of annotation errors. Future research of this problem will benefit from the sanitized version which eliminates the interference of annotation errors.",
"title": ""
},
{
"docid": "1593fd6f9492adc851c709e3dd9b3c5f",
"text": "This paper addresses the problem of extracting keyphrases from scientific articles and categorizing them as corresponding to a task, process, or material. We cast the problem as sequence tagging and introduce semi-supervised methods to a neural tagging model, which builds on recent advances in named entity recognition. Since annotated training data is scarce in this domain, we introduce a graph-based semi-supervised algorithm together with a data selection scheme to leverage unannotated articles. Both inductive and transductive semi-supervised learning strategies outperform state-of-the-art information extraction performance on the 2017 SemEval Task 10 ScienceIE task.",
"title": ""
},
{
"docid": "0fc5441a3e8589b1bd15d56830c4ef79",
"text": "DevOps is an emerging paradigm to actively foster the collaboration between system developers and operations in order to enable efficient end-to-end automation of software deployment and management processes. DevOps is typically combined with Cloud computing, which enables rapid, on-demand provisioning of underlying resources such as virtual servers, storage, or database instances using APIs in a self-service manner. Today, an ever-growing amount of DevOps tools, reusable artifacts such as scripts, and Cloud services are available to implement DevOps automation. Thus, informed decision making on the appropriate approach (es) for the needs of an application is hard. In this work we present a collaborative and holistic approach to capture DevOps knowledge in a knowledgebase. Beside the ability to capture expert knowledge and utilize crowd sourcing approaches, we implemented a crawling framework to automatically discover and capture DevOps knowledge. Moreover, we show how this knowledge is utilized to deploy and operate Cloud applications.",
"title": ""
},
{
"docid": "8c853251e0fb408c829e6f99a581d4cf",
"text": "We consider a simple and overarching representation for permutation-invariant functions of sequences (or set functions). Our approach, which we call Janossy pooling, expresses a permutation-invariant function as the average of a permutation-sensitive function applied to all reorderings of the input sequence. This allows us to leverage the rich and mature literature on permutation-sensitive functions to construct novel and flexible permutation-invariant functions. If carried out naively, Janossy pooling can be computationally prohibitive. To allow computational tractability, we consider three kinds of approximations: canonical orderings of sequences, functions with k-order interactions, and stochastic optimization algorithms with random permutations. Our framework unifies a variety of existing work in the literature, and suggests possible modeling and algorithmic extensions. We explore a few in our experiments, which demonstrate improved performance over current state-of-the-art methods.",
"title": ""
},
{
"docid": "c65050bb98a071fa8b60fa262536a476",
"text": "Proliferative periostitis is a pathologic lesion that displays an osteo-productive and proliferative inflammatory response of the periosteum to infection or other irritation. This lesion is a form of chronic osteomyelitis that is often asymptomatic, occurring primarily in children, and found only in the mandible. The lesion can be odontogenic or non-odontogenic in nature. A 12 year-old boy presented with an unusual odontogenic proliferative periostitis that originated from the lower left first molar, however, the radiographic radiolucent area and proliferative response were discovered at the apices of the lower left second molar. The periostitis was treated by single-visit non-surgical endodontic treatment of lower left first molar without antibiotic therapy. The patient has been recalled regularly; the lesion had significantly reduced in size 3-months postoperatively. Extraoral symmetry occurred at approximately one year recall. At the last visit, 2 years after initial treatment, no problems or signs of complications have occurred; the radiographic examination revealed complete resolution of the apical lesion and apical closure of the lower left second molar. Odontogenic proliferative periostitis can be observed at the adjacent normal tooth. Besides, this case demonstrates that non-surgical endodontics is a viable treatment option for management of odontogenic proliferative periostitis.",
"title": ""
},
{
"docid": "9c799b4d771c724969be7b392697ebee",
"text": "Search engines need to model user satisfaction to improve their services. Since it is not practical to request feedback on searchers' perceptions and search outcomes directly from users, search engines must estimate satisfaction from behavioral signals such as query refinement, result clicks, and dwell times. This analysis of behavior in the aggregate leads to the development of global metrics such as satisfied result clickthrough (typically operationalized as result-page clicks with dwell time exceeding a particular threshold) that are then applied to all searchers' behavior to estimate satisfac-tion levels. However, satisfaction is a personal belief and how users behave when they are satisfied can also differ. In this paper we verify that searcher behavior when satisfied and dissatisfied is indeed different among individual searchers along a number of dimensions. As a result, we introduce and evaluate learned models of satisfaction for individual searchers and searcher cohorts. Through experimentation via logs from a large commercial Web search engine, we show that our proposed models can predict search satisfaction more accurately than a global baseline that applies the same satisfaction model across all users. Our findings have implications for the study and application of user satisfaction in search systems.",
"title": ""
},
{
"docid": "ca3a0e7bca08fc943d432179766f4ccf",
"text": "BACKGROUND\nMost errors in a clinical chemistry laboratory are due to preanalytical errors. Preanalytical variability of biospecimens can have significant effects on downstream analyses, and controlling such variables is therefore fundamental for the future use of biospecimens in personalized medicine for diagnostic or prognostic purposes.\n\n\nCONTENT\nThe focus of this review is to examine the preanalytical variables that affect human biospecimen integrity in biobanking, with a special focus on blood, saliva, and urine. Cost efficiency is discussed in relation to these issues.\n\n\nSUMMARY\nThe quality of a study will depend on the integrity of the biospecimens. Preanalytical preparations should be planned with consideration of the effect on downstream analyses. Currently such preanalytical variables are not routinely documented in the biospecimen research literature. Future studies using biobanked biospecimens should describe in detail the preanalytical handling of biospecimens and analyze and interpret the results with regard to the effects of these variables.",
"title": ""
},
{
"docid": "28a69b2e02ca56c6ca867749b2129295",
"text": "The popular view of software engineering focuses on managing teams of people to produce large systems. This paper addresses a different angle of software engineering, that of development for re-use and portability. We consider how an essential part of most software products - the user interface - can be successfully engineered so that it can be portable across multiple platforms and on multiple devices. Our research has identified the structure of the problem domain, and we have filled in some of the answers. We investigate promising solutions from the model-driven frameworks of the 1990s, to modern XML-based specification notations (Views, XUL, XIML, XAML), multi-platform toolkits (Qt and Gtk), and our new work, Mirrors which pioneers reflective libraries. The methodology on which Views and Mirrors is based enables existing GUI libraries to be transported to new operating systems. The paper also identifies cross-cutting challenges related to education, standardization and the impact of mobile and tangible devices on the future design of UIs. This paper seeks to position user interface construction as an important challenge in software engineering, worthy of ongoing research.",
"title": ""
},
{
"docid": "ac2e1a27ae05819d213efe7d51d1b988",
"text": "Gigantic rates of data production in the era of Big Data, Internet of Thing (IoT) / Internet of Everything (IoE), and Cyber Physical Systems (CSP) pose incessantly escalating demands for massive data processing, storage, and transmission while continuously interacting with the physical world under unpredictable, harsh, and energy-/power-constrained scenarios. Therefore, such systems need to support not only the high performance capabilities at tight power/energy envelop, but also need to be intelligent/cognitive, self-learning, and robust. As a result, a hype in the artificial intelligence research (e.g., deep learning and other machine learning techniques) has surfaced in numerous communities. This paper discusses the challenges and opportunities for building energy-efficient and adaptive architectures for machine learning. In particular, we focus on brain-inspired emerging computing paradigms, such as approximate computing; that can further reduce the energy requirements of the system. First, we guide through an approximate computing based methodology for development of energy-efficient accelerators, specifically for convolutional Deep Neural Networks (DNNs). We show that in-depth analysis of datapaths of a DNN allows better selection of Approximate Computing modules for energy-efficient accelerators. Further, we show that a multi-objective evolutionary algorithm can be used to develop an adaptive machine learning system in hardware. At the end, we summarize the challenges and the associated research roadmap that can aid in developing energy-efficient and adaptable hardware accelerators for machine learning.",
"title": ""
},
{
"docid": "8c221ad31eda07f1628c3003a8c12724",
"text": "This paper presents a novel unsupervised domain adaptation method for cross-domain visual recognition. We propose a unified framework that reduces the shift between domains both statistically and geometrically, referred to as Joint Geometrical and Statistical Alignment (JGSA). Specifically, we learn two coupled projections that project the source domain and target domain data into low-dimensional subspaces where the geometrical shift and distribution shift are reduced simultaneously. The objective function can be solved efficiently in a closed form. Extensive experiments have verified that the proposed method significantly outperforms several state-of-the-art domain adaptation methods on a synthetic dataset and three different real world cross-domain visual recognition tasks.",
"title": ""
},
{
"docid": "748b470bfbd62b5ddf747e3ef989e66d",
"text": "Purpose – This paper sets out to integrate research on knowledge management with the dynamic capabilities approach. This paper will add to the understanding of dynamic capabilities by demonstrating that dynamic capabilities can be seen as composed of concrete and well-known knowledge management activities. Design/methodology/approach – This paper is based on a literature review focusing on key knowledge management processes and activities as well as the concept of dynamic capabilities, the paper connects these two approaches. The analysis is centered on knowledge management activities which then are compiled into dynamic capabilities. Findings – In the paper eight knowledge management activities are identified; knowledge creation, acquisition, capture, assembly, sharing, integration, leverage, and exploitation. These activities are assembled into the three dynamic capabilities of knowledge development, knowledge (re)combination, and knowledge use. The dynamic capabilities and the associated knowledge management activities create flows to and from the firm’s stock of knowledge and they support the creation and use of organizational capabilities. Practical implications – The findings in the paper demonstrate that the somewhat elusive concept of dynamic capabilities can be untangled through the use of knowledge management activities. Practicing managers struggling with the operationalization of dynamic capabilities should instead focus on the contributing knowledge management activities in order to operationalize and utilize the concept of dynamic capabilities. Originality/value – The paper demonstrates that the existing research on knowledge management can be a key contributor to increasing our understanding of dynamic capabilities. This finding is valuable for both researchers and practitioners.",
"title": ""
},
{
"docid": "3550dbe913466a675b621d476baba219",
"text": "Successful implementing and managing of change is urgently necessary for each adult educational organization. During the process, leading of the staff is becoming a key condition and the most significant factor. Beside certain personal traits of the leader, change management demands also certain leadership knowledges, skills, versatilities and behaviour which may even border on changing the organizational culture. The paper finds the significance of certain values and of organizational climate and above all the significance of leadership style which a leader will adjust to the staff and to the circumstances. The author presents a multiple qualitative case study of managing change in three adult educational organizations. The paper finds that factors of successful leading of change exist which represent an adequate approach to leading the staff during the introduction of changes in educational organizations. Its originality/value is in providing information on the important relationship between culture, leadership styles and leader’s behaviour as preconditions for successful implementing and managing of strategic change.",
"title": ""
},
{
"docid": "891bf46e2ad56387c4cf250ad3f0af08",
"text": "r 200 3 lmaden. Summary The creation of value is the core purpose and central process of economic exchange. Traditional models of value creation focus on the firm’s output and price. We present an alternative perspective, one representing the intersection of two growing streams of thought, service science and service-dominant (S-D) logic. We take the view that (1) service, the application of competences (such as knowledge and skills) by one party for the benefit of another, is the underlying basis of exchange; (2) the proper unit of analysis for service-for-service exchange is the service system, which is a configuration of resources (including people, information, and technology) connected to other systems by value propositions; and (3) service science is the study of service systems and of the cocreation of value within complex configurations of resources. We argue that value is fundamentally derived and determined in use – the integration and application of resources in a specific context – rather than in exchange – embedded in firm output and captured by price. Service systems interact through mutual service exchange relationships, improving the adaptability and survivability of all service systems engaged in exchange, by allowing integration of resources that are mutually beneficial. This argument has implications for advancing service science by identifying research questions regarding configurations and processes of value co-creation and measurements of value-in-use, and by developing its ties with economics and other service-oriented disciplines. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c0d722d72955dd1ec6df3cc24289979f",
"text": "Citing classic psychological research and a smattering of recent studies, Kassin, Dror, and Kukucka (2013) proposed the operation of a forensic confirmation bias, whereby preexisting expectations guide the evaluation of forensic evidence in a self-verifying manner. In a series of studies, we tested the hypothesis that knowing that a defendant had confessed would taint people's evaluations of handwriting evidence relative to those not so informed. In Study 1, participants who read a case summary in which the defendant had previously confessed were more likely to erroneously conclude that handwriting samples from the defendant and perpetrator were authored by the same person, and were more likely to judge the defendant guilty, compared with those in a no-confession control group. Study 2 replicated and extended these findings using a within-subjects design in which participants rated the same samples both before and after reading a case summary. These findings underscore recent critiques of the forensic sciences as subject to bias, and suggest the value of insulating forensic examiners from contextual information.",
"title": ""
},
{
"docid": "2802db74e062103d45143e8e9ad71890",
"text": "Maritime traffic monitoring is an important aspect of safety and security, particularly in close to port operations. While there is a large amount of data with variable quality, decision makers need reliable information about possible situations or threats. To address this requirement, we propose extraction of normal ship trajectory patterns that builds clusters using, besides ship tracing data, the publicly available International Maritime Organization (IMO) rules. The main result of clustering is a set of generated lanes that can be mapped to those defined in the IMO directives. Since the model also takes non-spatial attributes (speed and direction) into account, the results allow decision makers to detect abnormal patterns - vessels that do not obey the normal lanes or sail with higher or lower speeds.",
"title": ""
},
{
"docid": "bfa87a59940f6848d8d5b53b89c16735",
"text": "The over-segmentation of images into atomic regions has become a standard and powerful tool in Vision. Traditional superpixel methods, that operate at the pixel level, cannot directly capture the geometric information disseminated into the images. We propose an alternative to these methods by operating at the level of geometric shapes. Our algorithm partitions images into convex polygons. It presents several interesting properties in terms of geometric guarantees, region compactness and scalability. The overall strategy consists in building a Voronoi diagram that conforms to preliminarily detected line-segments, before homogenizing the partition by spatial point process distributed over the image gradient. Our method is particularly adapted to images with strong geometric signatures, typically man-made objects and environments. We show the potential of our approach with experiments on large-scale images and comparisons with state-of-the-art superpixel methods.",
"title": ""
}
] |
scidocsrr
|
4360fb6e4757e246db5d745118379351
|
A Software Engineering Perspective on SDN Programmability
|
[
{
"docid": "11ecb3df219152d33020ba1c4f8848bb",
"text": "Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, particularly for the control and management planes, thus requiring for every new need a new protocol built from scratch. This led to an unwieldy ossified Internet architecture resistant to any attempts at formal verification and to an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean slate Internet design-in particular, the software defined networking (SDN) paradigm-offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence in interest of applying formal methods to specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods and present a survey of its applications to networking.",
"title": ""
},
{
"docid": "92d047856fdf20b41c4f673aae2ced66",
"text": "This paper presents Merlin, a new framework for managing resources in software-defined networks. With Merlin, administrators express high-level policies using programs in a declarative language. The language includes logical predicates to identify sets of packets, regular expressions to encode forwarding paths, and arithmetic formulas to specify bandwidth constraints. The Merlin compiler maps these policies into a constraint problem that determines bandwidth allocations using parameterizable heuristics. It then generates code that can be executed on the network elements to enforce the policies. To allow network tenants to dynamically adapt policies to their needs, Merlin provides mechanisms for delegating control of sub-policies and for verifying that modifications made to sub-policies do not violate global constraints. Experiments demonstrate the expressiveness and effectiveness of Merlin on real-world topologies and applications. Overall, Merlin simplifies network administration by providing high-level abstractions for specifying network policies that provision network resources.",
"title": ""
}
] |
[
{
"docid": "d501d2758e600c307e41a329222bf7d6",
"text": "Placebo effects are beneficial effects that are attributable to the brain–mind responses to the context in which a treatment is delivered rather than to the specific actions of the drug. They are mediated by diverse processes — including learning, expectations and social cognition — and can influence various clinical and physiological outcomes related to health. Emerging neuroscience evidence implicates multiple brain systems and neurochemical mediators, including opioids and dopamine. We present an empirical review of the brain systems that are involved in placebo effects, focusing on placebo analgesia, and a conceptual framework linking these findings to the mind–brain processes that mediate them. This framework suggests that the neuropsychological processes that mediate placebo effects may be crucial for a wide array of therapeutic approaches, including many drugs.",
"title": ""
},
{
"docid": "117590d8d7a9c4efb9a19e4cd3e220fc",
"text": "We present in this paper the language NoFun for stating component quality in the framework of the ISO/IEC quality standards. The language consists of three different parts. In the first one, software quality characteristics and attributes are defined, probably in a hiera rchical manner. As part of this definition, abstract quality models can be formulated and fu rther refined into more specialised ones. In the second part, values are assigned to component quality basic attributes. In the third one, quality requirements can be stated over components, both context-free (universal quality properties) and context-dependent (quality properties for a given framework -software domain, company, project, etc.). Last, we address to the translation of the language to UML, using its extension mechanisms for capturing the fundamental non-functional concepts.",
"title": ""
},
{
"docid": "54c9c1323a03f0ef3af5eea204fd51ce",
"text": "The fabrication and characterization of magnetic sensors consisting of double magnetic layers are described. Both thin film based material and wire based materials were used for the double layers. The sensor elements were fabricated by patterning NiFe/CoFe multilayer thin films. This thin film based sensor exhibited a constant output voltage per excitation magnetic field at frequencies down to 0.1 Hz. The magnetic sensor using a twisted FeCoV wire, the conventional material for the Wiegand effect, had the disadvantage of an asymmetric output voltage generated by an alternating magnetic field. It was found that the magnetic wire whose ends were both slightly etched exhibited a symmetric output voltage.",
"title": ""
},
{
"docid": "0d5ca0e11363cae0b4d7f335cf832e24",
"text": "This paper presents an investigation into two fuzzy association rule mining models for enhancing prediction performance. The first model (the FCM-Apriori model) integrates Fuzzy C-Means (FCM) and the Apriori approach for road traffic performance prediction. FCM is used to define the membership functions of fuzzy sets and the Apriori approach is employed to identify the Fuzzy Association Rules (FARs). The proposed model extracts knowledge from a database for a Fuzzy Inference System (FIS) that can be used in prediction of a future value. The knowledge extraction process and the performance of the model are demonstrated through two case studies of road traffic data sets with different sizes. The experimental results show the merits and capability of the proposed KD model in FARs based knowledge extraction. The second model (the FCM-MSapriori model) integrates FCM and a Multiple Support Apriori (MSapriori) approach to extract the FARs. These FARs provide the knowledge base to be utilized within the FIS for prediction evaluation. Experimental results have shown that the FCM-MSapriori model predicted the future values effectively and outperformed the FCM-Apriori model and other models reported in the literature.",
"title": ""
},
{
"docid": "6dfe8b18e3d825b2ecfa8e6b353bbb99",
"text": "In the last decade tremendous effort has been put in the study of the Apollonian circle packings. Given the great variety of mathematics it exhibits, this topic has attracted experts from different fields: number theory, homogeneous dynamics, expander graphs, group theory, to name a few. The principle investigator (PI) contributed to this program in his PhD studies. The scenery along the way formed the horizon of the PI at his early mathematical career. After his PhD studies, the PI has successfully applied tools and ideas from Apollonian circle packings to the studies of topics from various fields, and will continue this endeavor in his proposed research. The proposed problems are roughly divided into three categories: number theory, expander graphs, geometry. Each of which will be discussed in depth in later sections. Since Apollonian circle packing provides main inspirations for this proposal, let’s briefly review how it comes up and what has been done. We start with four mutually circles, with one circle bounding the other three. We can repeatedly inscribe more and more circles into curvilinear triangular gaps as illustrated in Figure 1, and we call the resultant set an Apollonian circle packing, which consists of infinitely many circles.",
"title": ""
},
{
"docid": "90738b84c4db0a267c7213c923368e6a",
"text": "Detecting overlapping communities is essential to analyzing and exploring natural networks such as social networks, biological networks, and citation networks. However, most existing approaches do not scale to the size of networks that we regularly observe in the real world. In this paper, we develop a scalable approach to community detection that discovers overlapping communities in massive real-world networks. Our approach is based on a Bayesian model of networks that allows nodes to participate in multiple communities, and a corresponding algorithm that naturally interleaves subsampling from the network and updating an estimate of its communities. We demonstrate how we can discover the hidden community structure of several real-world networks, including 3.7 million US patents, 575,000 physics articles from the arXiv preprint server, and 875,000 connected Web pages from the Internet. Furthermore, we demonstrate on large simulated networks that our algorithm accurately discovers the true community structure. This paper opens the door to using sophisticated statistical models to analyze massive networks.",
"title": ""
},
{
"docid": "5694ebf4c1f1e0bf65dd7401d35726ed",
"text": "Data collection is not a big issue anymore with available honeypot software and setups. However malware collections gathered from these honeypot systems often suffer from massive sample counts, data analysis systems like sandboxes cannot cope with. Sophisticated self-modifying malware is able to generate new polymorphic instances of itself with different message digest sums for each infection attempt, thus resulting in many different samples stored for the same specimen. Scaling analysis systems that are fed by databases that rely on sample uniqueness based on message digests is only feasible to a certain extent. In this paper we introduce a non cryptographic, fast to calculate hash function for binaries in the Portable Executable format that transforms structural information about a sample into a hash value. Grouping binaries by hash values calculated with the new function allows for detection of multiple instances of the same polymorphic specimen as well as samples that are broken e.g. due to transfer errors. Practical evaluation on different malware sets shows that the new function allows for a significant reduction of sample counts.",
"title": ""
},
{
"docid": "50a85588765fa36832690c5998311c6b",
"text": "In this paper, we introduce some storage schemes for multi-dimensional sparse arrays (MDSAs) that handle the sparsity of the array with two primary goals; reducing the storage overhead and maintaining efficient data element access. Four schemes are proposed. These are: i.) The PATRICIA trie compressed storage method (PTCS) which uses PATRICIA trie to store the valid non-zero array elements; ii.)The extended compressed row storage (xCRS) which extends CRS method for sparse matrix storage to sparse arrays of higher dimensions and achieves the best data element access efficiency of all the methods; iii.) The bit encoded xCRS (BxCRS) which optimizes the storage utilization of xCRS by applying data compression methods with run length encoding, while maintaining its data access efficiency; and iv.) a hybrid approach that provides a desired balance between the storage utilization and data manipulation efficiency by combining xCRS and the Bit Encoded Sparse Storage (BESS). These storage schemes were evaluated and compared on three basic array operations; constructing the storage scheme, accessing a random element and retrieving a sub-array, using a set of synthetic sparse multi-dimensional arrays.",
"title": ""
},
{
"docid": "359d3e06c221e262be268a7f5b326627",
"text": "A method for the synthesis of multicoupled resonators filters with frequency-dependent couplings is presented. A circuit model of the filter that accurately represents the frequency responses over a very wide frequency band is postulated. The two-port parameters of the filter based on the circuit model are obtained by circuit analysis. The values of the circuit elements are synthesized by equating the two-port parameters obtained from the circuit analysis and the filtering characteristic function. Solutions similar to the narrowband case (where all the couplings are assumed frequency independent) are obtained analytically when all coupling elements are either inductive or capacitive. The synthesis technique is generalized to include all types of coupling elements. Several examples of wideband filters are given to demonstrate the synthesis techniques.",
"title": ""
},
{
"docid": "c4421784554095ffed1365b3ba41bdc0",
"text": "Mood classification of music is an emerging domain of music information retrieval. In the approach presented here features extracted from an audio file are used in combination with the affective value of song lyrics to map a song onto a psychologically based emotion space. The motivation behind this system is the lack of intuitive and contextually aware playlist generation tools available to music listeners. The need for such tools is made obvious by the fact that digital music libraries are constantly expanding, thus making it increasingly difficult to recall a particular song in the library or to create a playlist for a specific event. By combining audio content information with context-aware data, such as song lyrics, this system allows the listener to automatically generate a playlist to suit their current activity or mood. Thesis Supervisor: Barry Vercoe Title: Professor of Media Arts and Sciences, Program in Media Arts and Sciences",
"title": ""
},
{
"docid": "d56968c0512526ea891f0f031b99db04",
"text": "Naive-Bayes and k-NN classifiers are two machine learning approaches for text classification. Rocchio is the classic method for text classification in information retrieval. Based on these three approaches and using classifier fusion methods, we propose a novel approach in text classification. Our approach is a supervised method, meaning that the list of categories should be defined and a set of training data should be provided for training the system. In this approach, documents are represented as vectors where each component is associated with a particular word. We proposed voting methods and OWA operator and decision template method for combining classifiers. Experimental results show that these methods decrese the classification error 15 percent as measured on 2000 training data from 20 newsgroups dataset.",
"title": ""
},
{
"docid": "a8b58ed956f7accad4d0de32641cd66a",
"text": "Nowadays people uses online social networking sites for communications with others. People post their message on social network sites. The messages are of various types such as text, images, audio and video. Sometime for these messages the people post their URL and request their friend to visit that site to show the messages. The fraud user uses malicious URL and post on social networking sites. These malicious URL contains viruses which harms user system. Malware is a one of the attack in which fraud URL creates replica of their own when user clicks on these URL and acquire resource. Sometime malicious URL directed towards a website which is a fraud website. These fraud websites are used to steal user’s confidential information. By using Bayesian classification identify the fraud URL on social networking sites and improve the security of social networking sites.",
"title": ""
},
{
"docid": "25296f69995ea3df5d96fbbbbe13bb2f",
"text": "The proliferation of commercial Web sites providing consumers with a new medium to purchase products and services has increased the importance of understanding the determinants of consumer intentions to shop online. This study compared the technology acceptance model and two variations of the theory of planned behavior to examine which model best helps to predict consumer intentions to shop online. Data were gathered from 297 Taiwanese customers of online bookstores, and structural equation modeling was used to compare the three models in terms of overall model fit, explanatory power and path significance. Decomposing the belief structures in the theory of planned behavior moderately increased explanatory power for behavioral intention. The results also indicate that the decomposed theory of planned behavior provides an improved method of predicting consumer intentions to shop online. Finally, the implications of this study are discussed. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "6f609fef5fd93e776fd7d43ed91fd4a8",
"text": "Wandering is among the most frequent, problematic, and dangerous behaviors for elders with dementia. Frequent wanderers likely suffer falls and fractures, which affect the safety and quality of their lives. In order to monitor outdoor wandering of elderly people with dementia, this paper proposes a real-time method for wandering detection based on individuals' GPS traces. By representing wandering traces as loops, the problem of wandering detection is transformed into detecting loops in elders' mobility trajectories. Specifically, the raw GPS data is first preprocessed to remove noisy and crowded points by performing an online mean shift clustering. A novel method called θ_WD is then presented that is able to detect loop-like traces on the fly. The experimental results on the GPS datasets of several elders have show that the θ_WD method is effective and efficient in detecting wandering behaviors, in terms of detection performance (AUC > 0.99, and 90% detection rate with less than 5 % of the false alarm rate), as well as time complexity.",
"title": ""
},
{
"docid": "79407d805080797ed3a028f9e1419bba",
"text": "In recent years, video surveillance combined with computer vision algorithms like object detection, tracking or automated behaviour analysis has become an important research topic. However, most of these systems are depending on either fixed or remotely controlled narrow angle cameras. When using the former, the area of coverage is extremely limited, while utilizing the latter leads to high failure rates and troubles in camera calibration. In this paper, a method of extracting multiple perspective views from a single omnidirectional image for realtime environments is proposed. An example application of a ceiling-mounted camera setup is used to show the functional principle. Furthermore a performance improvement strategy is both presented and evaluated.",
"title": ""
},
{
"docid": "2fdf511e81080b5029f13801d5c6d783",
"text": "Content, usability, and aesthetics are core constructs in users’ perception and evaluation of websites, but little is known about their interplay in different use phases. In a first study web users (N=330) stated content as most relevant, followed by usability and aesthetics. In study 2 tests with four websites were performed (N=300), resulting data were modeled in path analyses. In this model aesthetics had the largest influence on first impressions, while all three constructs had an impact on first and overall impressions. However, only content contributed significantly to the intention to revisit or recommend a website. Using data from a third study (N=512, 42 websites), we were able to replicate this model. As before, perceived usability affected first and overall impressions, while content perception was important for all analyzed website use phases. In addition, aesthetics also had a small but significant impact on the participants’ intentions to revisit or recommend.",
"title": ""
},
{
"docid": "26052ad31f5ccf55398d6fe3b9850674",
"text": "An electroneurographic study performed on the peripheral nerves of 25 patients with severe cirrhosis following viral hepatitis showed slight slowing (P > 0.05) of motor conduction velocity (CV) and significant diminution (P < 0.001) of sensory CV and mixed sensorimotor-evoked potentials, associated with a significant decrease in the amplitude of sensory evoked potentials. The slowing was about equal in the distal (digital) and in the proximal segments of the same nerve. A mixed axonal degeneration and segmental demyelination is presumed to explain these findings. The CV measurements proved helpful for an early diagnosis of hepatic polyneuropathy showing subjective symptoms in the subclinical stage. Elektroneurographische Untersuchungen der peripheren Nerven bei 25 Patienten mit postviralen Leberzirrhosen ergaben folgendes: geringe Verminderung (P > 0.05) der motorischen Leitgeschwindigkeit (LG) und eine signifikant verlangsamte LG in sensiblen Fasern (P < 0.001), in beiden proximalen und distalen Fasern. Es wurde in den gemischten evozierten Potentialen eine Verlangsamung der LG festgestellt, zwischen den Werten der motorischen und sensiblen Fasern. Gleichzeitig wurde eine Minderung der Amplitude des NAP beobachtet. Diese Befunde sprechen für eine axonale Degeneration und eine Demyelinisierung in den meisten untersuchten peripheren Nerven. Elektroneurographische Untersuchungen erlaubten den funktionellen Zustand des peripheren Nervens abzuschätzen und bestimmte Veränderungen bereits im Initialstadium der Erkrankung aufzudecken, wenn der Patient noch keine klinischen Zeichen einer peripheren Neuropathie bietet.",
"title": ""
},
{
"docid": "f9be959b4c2392f7fc1dff2a1bde4dae",
"text": "This paper presents a new Web-based system, Mooshak, to handle programming contests. The system acts as a full contest manager as well as an automatic judge for programming contests. Mooshak innovates in a number of aspects: it has a scalable architecture that can be used from small single server contests to complex multi-site contests with simultaneous public online contests and redundancy; it has a robust data management system favoring simple procedures for storing, replicating, backing up data and failure recovery using persistent objects; it has automatic judging capabilities to assist human judges in the evaluation of programs; it has built-in safety measures to prevent users from interfering with the normal progress of contests. Mooshak is an open system implemented on the Linux operating system using the Apache HTTP server and the Tcl scripting language. This paper starts by describing the main features of the system and its architecture with reference to the automated judging, data management based on the replication of persistent objects over a network. Finally, we describe our experience using this system for managing two official programming contests. Copyright c © 2003 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "8106487f98bcc94c1310799e74e7a173",
"text": "We present a method to predict long-term motion of pedestrians, modeling their behavior as jump-Markov processes with their goal a hidden variable. Assuming approximately rational behavior, and incorporating environmental constraints and biases, including time-varying ones imposed by traffic lights, we model intent as a policy in a Markov decision process framework. We infer pedestrian state using a Rao-Blackwellized filter, and intent by planning according to a stochastic policy, reflecting individual preferences in aiming at the same goal.",
"title": ""
}
] |
scidocsrr
|
f616c0706ac0074e8238c7f33fa8dcef
|
Trajectory Tracking Control for a 3-DOF Parallel Manipulator Using Fractional-Order $\hbox{PI}^{\lambda}\hbox{D}^{\mu}$ Control
|
[
{
"docid": "55b3fe6f2b93fd958d0857b485927bc9",
"text": "In this paper, in order to satisfy multiple closed-loop performance specifications simultaneously while improving tracking accuracy during high-speed, high-acceleration tracking motions of a 3-degree-of-freedom (3-DOF) planar parallel manipulator, we propose a new control approach, termed convex synchronized (C-S) control. This control strategy is based on the so-called convex combination method, in which the synchronized control method is adopted. Through the adoption of a set of n synchronized controllers, each of which is tuned to satisfy at least one of a set of n closed-loop performance specifications, the resultant set of n closed-loop transfer functions are combined in a convex manner, from which a C-S controller is solved algebraically. Significantly, the resultant C-S controller simultaneously satisfies all n closed-loop performance specifications. Since each synchronized controller is only required to satisfy at least one of the n closed-loop performance specifications, the convex combination method is more efficient than trial-and-error methods, where the gains of a single controller are tuned to satisfy all n closed-loop performance specifications simultaneously. Furthermore, during the design of each synchronized controller, a feedback signal, termed the synchronization error, is employed. Different from the traditional tracking errors, this synchronization error represents the degree of coordination of the active joints in the parallel manipulator based on the manipulator kinematics. As a result, the trajectory tracking accuracy of each active joint and that of the manipulator end-effector is improved. Thus, possessing both the advantages of the convex combination method and synchronized control, the proposed C-S control method can satisfy multiple closed-loop performance specifications simultaneously while improving tracking accuracy. In addition, unavoidable dynamic modeling errors are addressed through the introduction of a robust performance specification, which ensures that all performance specifications are satisfied despite allowable variations in dynamic parameters, or modeling errors. Experiments conducted on a 3-DOF P-R-R-type planar parallel manipulator demonstrate the aforementioned claims.",
"title": ""
}
] |
[
{
"docid": "eea9332a263b7e703a60c781766620e5",
"text": "The use of topic models to analyze domainspecific texts often requires manual validation of the latent topics to ensure that they are meaningful. We introduce a framework to support such a large-scale assessment of topical relevance. We measure the correspondence between a set of latent topics and a set of reference concepts to quantify four types of topical misalignment: junk, fused, missing, and repeated topics. Our analysis compares 10,000 topic model variants to 200 expertprovided domain concepts, and demonstrates how our framework can inform choices of model parameters, inference algorithms, and intrinsic measures of topical quality.",
"title": ""
},
{
"docid": "9a87f11fed489f58b0cdd15b329e5245",
"text": "BACKGROUND\nBracing is an effective strategy for scoliosis treatment, but there is no consensus on the best type of brace, nor on the way in which it should act on the spine to achieve good correction. The aim of this paper is to present the family of SPoRT (Symmetric, Patient-oriented, Rigid, Three-dimensional, active) braces: Sforzesco (the first introduced), Sibilla and Lapadula.\n\n\nMETHODS\nThe Sforzesco brace was developed following specific principles of correction. Due to its overall symmetry, the brace provides space over pathological depressions and pushes over elevations. Correction is reached through construction of the envelope, pushes, escapes, stops, and drivers. The real novelty is the drivers, introduced for the first time with the Sforzesco brace; they allow to achieve the main action of the brace: a three-dimensional elongation pushing the spine in a down-up direction.Brace prescription is made plane by plane: frontal (on the \"slopes\", another novelty of this concept, i.e. the laterally flexed sections of the spine), horizontal, and sagittal. The brace is built modelling the trunk shape obtained either by a plaster cast mould or by CAD-CAM construction. Brace checking is essential, since SPoRT braces are adjustable and customisable according to each individual curve pattern.Treatment time and duration is individually tailored (18-23 hours per day until Risser 3, then gradual reduction). SEAS (Scientific Exercises Approach to Scoliosis) exercises are a key factor to achieve success.\n\n\nRESULTS\nThe Sforzesco brace has shown to be more effective than the Lyon brace (matched case/control), equally effective as the Risser plaster cast (prospective cohort with retrospective controls), more effective than the Risser cast + Lyon brace in treating curves over 45 degrees Cobb (prospective cohort), and is able to improve aesthetic appearance (prospective cohort).\n\n\nCONCLUSIONS\nThe SPoRT concept of bracing (three-dimensional elongation pushing in a down-up direction) is different from the other corrective systems: 3-point, traction, postural, and movement-based. The Sforzesco brace, being comparable to casting, may be the best brace for the worst cases.",
"title": ""
},
{
"docid": "a96d6649a2274a919fbeb5b2221d69c6",
"text": "In this paper, a novel center frequency and bandwidth tunable, cross-coupled waveguide resonator filter is presented. The coupling between adjacent resonators can be adjusted using non-resonating coupling resonators. The negative sign for the cross coupling, which is required to generate transmission zeros, is enforced by choosing an appropriate resonant frequency for the cross-coupling resonator. The coupling iris design itself is identical regardless of the sign of the coupling. The design equations for the novel coupling elements are given in this paper. A four pole filter breadboard with two transmission zeros (elliptic filter function) has been built up and measured at various bandwidth and center frequency settings. It operates at Ka-band frequencies and can be tuned to bandwidths from 36 to 72 MHz in the frequency range 19.7-20.2 GHz.",
"title": ""
},
{
"docid": "c1ba049befffa94e358555056df15cc2",
"text": "People design what they say specifically for their conversational partners, and they adapt to their partners over the course of a conversation. A comparison of keyboard conversations involving a simulated computer partner (as in a natural language interface) with those involving a human partner (as in teleconferencing) yielded striking differences and some equally striking similarities. For instance, there were significantly fewer acknowledgments in human/computer dialogue than in human/human. However, regardless of the conversational partner, people expected connectedness across conversational turns. In addition, the style of a partner's response shaped what people subsequently typed. These results suggest some issues that need to be addressed before a natural language computer interface will be able to hold up its end of a conversation.",
"title": ""
},
{
"docid": "7ce147a433a376dd1cc0f7f09576e1bd",
"text": "Introduction Dissolution testing is routinely carried out in the pharmaceutical industry to determine the rate of dissolution of solid dosage forms. In addition to being a regulatory requirement, in-vitro dissolution testing is used to assist with formulation design, process development, and the demonstration of batch-to-batch reproducibility in production. The most common of such dissolution test apparatuses is the USP Dissolution Test Apparatus II, consisting of an unbaffled vessel stirred by a paddle, whose dimensions, characteristics, and operating conditions are detailed by the USP (Cohen et al., 1990; The United States Pharmacopeia & The National Formulary, 2004).",
"title": ""
},
{
"docid": "2b1048b3bdb52c006437b18d7b458871",
"text": "A road interpretation module is presented! which is part of a real-time vehicle guidance system for autonomous driving. Based on bifocal computer vision, the complete system is able to drive a vehicle on marked or unmarked roads, to detect obstacles, and to react appropriately. The hardware is a network of 23 transputers, organized in modular clusters. Parallel modules performing image analysis, feature extraction, object modelling, sensor data integration and vehicle control, are organized in hierarchical levels. The road interpretation module is based on the principle of recursive state estimation by Kalman filter techniques. Internal 4-D models of the road, vehicle position, and orientation are updated using data produced by the image-processing module. The system has been implemented on two vehicles (VITA and VaMoRs) and demonstrated in the framework of PROMETHEUS, where the ability of autonomous driving through narrow curves and of lane changing were demonstrated. Meanwhile, the system has been tested on public roads in real traffic situations, including travel on a German Autobahn autonomously at speeds up to 85 km/h. Belcastro, C.M., Fischl, R., and M. Kam. “Fusion Techniques Using Distributed Kalman Filtering for Detecting Changes in Systems.” Proceedings of the 1991 American Control Conference. 26-28 June 1991: Boston, MA. American Autom. Control Council, 1991. Vol. 3: (2296-2298).",
"title": ""
},
{
"docid": "181530396a384e0e8c8ed00bcd195e81",
"text": "Numerous problems encountered in real life cannot be actually formulated as a single objective problem; hence the requirement of Multi-Objective Optimization (MOO) had arisen several years ago. Due to the complexities in such type of problems powerful heuristic techniques were needed, which has been strongly satisfied by Swarm Intelligence (SI) techniques. Particle Swarm Optimization (PSO) has been established in 1995 and became a very mature and most popular domain in SI. MultiObjective PSO (MOPSO) established in 1999, has become an emerging field for solving MOOs with a large number of extensive literature, software, variants, codes and applications. This paper reviews all the applications of MOPSO in miscellaneous areas followed by the study on MOPSO variants in our next publication. An introduction to the key concepts in MOO is followed by the main body of review containing survey of existing work, organized by application area along with their multiple objectives, variants and further categorized variants.",
"title": ""
},
{
"docid": "479b124662755d8b07f2f5f9baabef9a",
"text": "The ARINC 653 specification defines the functionality that an operating system (OS) must guarantee to enforce robust spatial and temporal partitioning as well as an avionics application programming interface for the system. The standard application interface - the ARINC 653 application executive (APEX) - is defined as a set of software services a compliant OS must provide to avionics application developers. The ARINC 653 specification defines the interfaces and the behavior of the APEX but leaves implementation details to OS vendors. This paper describes an OS independent design approach of a portable APEX interface. POSIX, as a programming interface available on a wide range of modern OS, will be used to implement the APEX layer. This way the standardization of the APEX is taken a step further: not only the definition of services is standardized but also its interface to the underlying OS. Therefore, the APEX operation does not depend on a particular OS but relies on a well defined set of standardized components.",
"title": ""
},
{
"docid": "4a4a868d64a653fac864b5a7a531f404",
"text": "Metropolitan areas have come under intense pressure to respond to federal mandates to link planning of land use, transportation, and environmental quality; and from citizen concerns about managing the side effects of growth such as sprawl, congestion, housing affordability, and loss of open space. The planning models used by Metropolitan Planning Organizations (MPOs) were generally not designed to address these questions, creating a gap in the ability of planners to systematically assess these issues. UrbanSim is a new model system that has been developed to respond to these emerging requirements, and has now been applied in three metropolitan areas. This paper describes the model system and its application to Eugene-Springfield, Oregon.",
"title": ""
},
{
"docid": "12b8dac3e97181eb8ca9c0406f2fa456",
"text": "INTRODUCTION\nThis paper discusses some of the issues and challenges of implementing appropriate and coordinated District Health Management Information System (DHMIS) in environments dependent on external support especially when insufficient attention has been given to the sustainability of systems. It also discusses fundamental issues which affect the usability of DHMIS to support District Health System (DHS), including meeting user needs and user education in the use of information for management; and the need for integration of data from all health-providing and related organizations in the district.\n\n\nMETHODS\nThis descriptive cross-sectional study was carried out in three DHSs in Kenya. Data was collected through use of questionnaires, focus group discussions and review of relevant literature, reports and operational manuals of the studied DHMISs.\n\n\nRESULTS\nKey personnel at the DHS level were not involved in the development and implementation of the established systems. The DHMISs were fragmented to the extent that their information products were bypassing the very levels they were created to serve. None of the DHMISs was computerized. Key resources for DHMIS operation were inadequate. The adequacy of personnel was 47%, working space 40%, storage space 34%, stationery 20%, 73% of DHMIS staff were not trained, management support was 13%. Information produced was 30% accurate, 19% complete, 26% timely, 72% relevant; the level of confidentiality and use of information at the point of collection stood at 32% and 22% respectively and information security at 48%. Basic DHMIS equipment for information processing was not available. This inhibited effective and efficient provision of information services.\n\n\nCONCLUSIONS\nAn effective DHMIS is essential for DHS planning, implementation, monitoring and evaluation activities. Without accurate, timely, relevant and complete information the existing information systems are not capable of facilitating the DHS managers in their day-today operational management. The existing DHMISs were found not supportive of the DHS managers' strategic and operational management functions. Consequently DHMISs were found to be plagued by numerous designs, operational, resources and managerial problems. There is an urgent need to explore the possibilities of computerizing the existing manual systems to take advantage of the potential uses of microcomputers for DHMIS operations within the DHS. Information system designers must also address issues of cooperative partnership in information activities, systems compatibility and sustainability.",
"title": ""
},
{
"docid": "5a583fe6fae9f0624bcde5043c56c566",
"text": "In this paper, a microstrip dipole antenna on a flexible organic substrate is proposed. The antenna arms are tilted to make different variations of the dipole with more compact size and almost same performance. The antennas are fed using a coplanar stripline (CPS) geometry (Simons, 2001). The antennas are then conformed over cylindrical surfaces and their performances are compared to their flat counterparts. Good performance is achieved for both the flat and conformal antennas.",
"title": ""
},
{
"docid": "a321a7709188c741b34824c8b9084d47",
"text": "We offer a fluctuation smoothing computational approach for unsupervised automatic short answer grading (ASAG) techniques in the educational ecosystem. A major drawback of the existing techniques is the significant effect that variations in model answers could have on their performances. The proposed fluctuation smoothing approach, based on classical sequential pattern mining, exploits lexical overlap in students’ answers to any typical question. We empirically demonstrate using multiple datasets that the proposed approach improves the overall performance and significantly reduces (up to 63%) variation in performance (standard deviation) of unsupervised ASAG techniques. We bring in additional benchmarks such as (a) paraphrasing of model answers and (b) using answers by k top performing students as model answers, to amplify the benefits of the proposed approach.",
"title": ""
},
{
"docid": "93464384fa3c20cec1bfae7b4dc7a216",
"text": "Among the various solutions for the series association of high power IGBTs, the active clamping circuit insures both protection and voltage balancing, within good reliability and compactness. Therefore, this structure has been chosen to be integrated closed to the IGBTs. The design of this circuit leads to the resolution of a compromise between a good balancing and limited additional losses. The aim of this paper is to optimise this circuit, in order to reduce the losses, in the IGBTs as well as in the active clamping circuit. This design has been validated in a 3 kV 400 A test bench, using three 1.7 kV components in series.",
"title": ""
},
{
"docid": "7c9d35fb9cec2affbe451aed78541cef",
"text": "Dental caries, also known as dental cavities, is the most widespread pathology in the world. Up to a very recent period, almost all individuals had the experience of this pathology at least once in their life. Early detection of dental caries can help in a sharp decrease in the dental disease rate. Thanks to the growing accessibility to medical imaging, the clinical applications now have better impact on patient care. Recently, there has been interest in the application of machine learning strategies for classification and analysis of image data. In this paper, we propose a new method to detect and identify dental caries using X-ray images as dataset and deep neural network as technique. This technique is based on stacked sparse auto-encoder and a softmax classifier. Those techniques, sparse auto-encoder and softmax, are used to train a deep neural network. The novelty here is to apply deep neural network to diagnosis of dental caries. This approach was tested on a real dataset and has demonstrated a good performance of detection. Keywords-dental X-ray; classification; Deep Neural Networks; Stacked sparse auto-encoder; Softmax.",
"title": ""
},
{
"docid": "dc8180cdc6344f1dc5bfa4dbf048912c",
"text": "Image analysis is a key area in the computer vision domain that has many applications. Genetic Programming (GP) has been successfully applied to this area extensively, with promising results. Highlevel features extracted from methods such as Speeded Up Robust Features (SURF) and Histogram of Oriented Gradients (HoG) are commonly used for object detection with machine learning techniques. However, GP techniques are not often used with these methods, despite being applied extensively to image analysis problems. Combining the training process of GP with the powerful features extracted by SURF or HoG has the potential to improve the performance by generating high-level, domaintailored features. This paper proposes a new GP method that automatically detects di↵erent regions of an image, extracts HoG features from those regions, and simultaneously evolves a classifier for image classification. By extending an existing GP region selection approach to incorporate the HoG algorithm, we present a novel way of using high-level features with GP for image classification. The ability of GP to explore a large search space in an e cient manner allows all stages of the new method to be optimised simultaneously, unlike in existing approaches. The new approach is applied across a range of datasets, with promising results when compared to a variety of well-known machine learning techniques. Some high-performing GP individuals are analysed to give insight into how GP can e↵ectively be used with high-level features for image classification.",
"title": ""
},
{
"docid": "78786193b4f7521b05f43997218f6778",
"text": "The design and fabrication of an Ultra broadband square quad-ridge polarizer is discussed here. The principal advantages of this topology relay on both the instantaneous bandwidth and the axial ratio improvement. Experimental measurements exhibit very good agreement with the predicted results given by Mode Matching techniques. The structure provides an extremely flat axial ratio (AR< 0.4dB) and good return losses >25dB at both square ports over the extended Ku band (= 60%). Moreover, yield analysis and scaling properties demonstrate the robustness of this design against fabrication tolerances.",
"title": ""
},
{
"docid": "081faf749f5e996c70f91a77ecae2a88",
"text": "Hyponatremia associated with diuretic use can be clinically difficult to differentiate from the syndrome of inappropriate antidiuretic hormone secretion (SIADH). We report a case of a 28-year-old man with HIV (human immunodeficiency virus) and Pneumocystis pneumonia who developed hyponatremia while receiving trimethoprim-sulfamethoxazole (TMP/SMX). Serum sodium level on admission was 135 mEq/L (with a history of hyponatremia) and decreased to 117 mEq/L by day 7 of TMP/SMX treatment. In the setting of suspected euvolemia and Pneumocystis pneumonia, he was treated initially for SIADH with fluid restriction and tolvaptan without improvement in serum sodium level. A diagnosis of hyponatremia secondary to the diuretic effect of TMP subsequently was confirmed, with clinical hypovolemia and high renin, aldosterone, and urinary sodium levels. Subsequent therapy with sodium chloride stabilized serum sodium levels in the 126- to 129-mEq/L range. After discontinuation of TMP/SMX treatment, serum sodium, renin, and aldosterone levels normalized. TMP/SMX-related hyponatremia likely is underdiagnosed and often mistaken for SIADH. It should be considered for patients on high-dose TMP/SMX treatment and can be differentiated from SIADH by clinical hypovolemia (confirmed by high renin and aldosterone levels). TMP-associated hyponatremia can be treated with sodium supplementation to offset ongoing urinary losses if the TMP/SMX therapy cannot be discontinued. In this Acid-Base and Electrolyte Teaching Case, a less common cause of hyponatremia is presented, and a stepwise approach to the diagnosis is illustrated.",
"title": ""
},
{
"docid": "6384c31adaf8b28ca7a6dd97d3eb571a",
"text": ".....................................................................................................3 Introduction...................................................................................................4 Chapter 1. History of Origami............................................................................. 5 Chapter 2. Evolution of Origami tessellations in 20-th century architecture........................7 Chapter 3. Kinetic system and Origami...................................................................9 3.1. Kinetic system................................................................................. 9 3.2. Geometric Origami............................................................................ 9 Chapter 4. Folding patterns................................................................................ 10 4.1. Yoshimura pattern (diamond pattern)........................................................ 11 4.2. Diagonal pattern..............................................................................11 4.3. Miura Ori pattern (herringbone pattern)...................................................11 Chapter 5. The origami house and impact on the furniture design.................................... 13 Conclusion.................................................................................................... 16 References...................................................................................................17 Annex 1....................................................................................................... 18 Annex 2...................................................................................................... 19",
"title": ""
},
{
"docid": "03dc5f33c4735680902c3cd190a07962",
"text": "Natural systems from snowflakes to mollusc shells show a great diversity of complex patterns. The origins of such complexity can be investigated through mathematical models termed ‘cellular automata’. Cellular automata consist of many identical components, each simple., but together capable of complex behaviour. They are analysed both as discrete dynamical systems, and as information-processing systems. Here some of their universal features are discussed, and some general principles are suggested.",
"title": ""
},
{
"docid": "293e2cd2647740bb65849fed003eb4ac",
"text": "In this paper we apply the Local Binary Pattern on Three Orthogonal Planes (LBP-TOP) descriptor to the field of human action recognition. A video sequence is described as a collection of spatial-temporal words after the detection of space-time interest points and the description of the area around them. Our contribution has been in the description part, showing LBP-TOP to be a promising descriptor for human action classification purposes. We have also developed several extensions to the descriptor to enhance its performance in human action recognition, showing the method to be computationally efficient.",
"title": ""
}
] |
scidocsrr
|
80d62639a51a73b85da658416fa9a31a
|
Noncooperative Differential Games. A Tutorial
|
[
{
"docid": "678ef706d4cb1c35f6b3d82bf25a4aa7",
"text": "This article is an extremely rapid survey of the modern theory of partial differential equations (PDEs). Sources of PDEs are legion: mathematical physics, geometry, probability theory, continuum mechanics, optimization theory, etc. Indeed, most of the fundamental laws of the physical sciences are partial differential equations and most papers published in applied math concern PDEs. The following discussion is consequently very broad, but also very shallow, and will certainly be inadequate for any given PDE the reader may care about. The goal is rather to highlight some of the many key insights and unifying principles across the entire subject.",
"title": ""
}
] |
[
{
"docid": "541ebcc2e081ea1a08bbaba2e9820510",
"text": "We present an analytic study on the language of news media in the context of political fact-checking and fake news detection. We compare the language of real news with that of satire, hoaxes, and propaganda to find linguistic characteristics of untrustworthy text. To probe the feasibility of automatic political fact-checking, we also present a case study based on PolitiFact.com using their factuality judgments on a 6-point scale. Experiments show that while media fact-checking remains to be an open research question, stylistic cues can help determine the truthfulness of text.",
"title": ""
},
{
"docid": "3baf0d5b71f44f5a1cbbd5d81ce7a15f",
"text": "We present a new approach to facilitate the application of the optimal transport metric to pattern recognition on image databases. The method is based on a linearized version of the optimal transport metric, which provides a linear embedding for the images. Hence, it enables shape and appearance modeling using linear geometric analysis techniques in the embedded space. In contrast to previous work, we use Monge's formulation of the optimal transport problem, which allows for reasonably fast computation of the linearized optimal transport embedding for large images. We demonstrate the application of the method to recover and visualize meaningful variations in a supervised-learning setting on several image datasets, including chromatin distribution in the nuclei of cells, galaxy morphologies, facial expressions, and bird species identification. We show that the new approach allows for high-resolution construction of modes of variations and discrimination and can enhance classification accuracy in a variety of image discrimination problems.",
"title": ""
},
{
"docid": "c1d95246f5d1b8c67f4ff4769bb6b9ce",
"text": "BACKGROUND\nA previous open-label study of melatonin, a key substance in the circadian system, has shown effects on migraine that warrant a placebo-controlled study.\n\n\nMETHOD\nA randomized, double-blind, placebo-controlled crossover study was carried out in 2 centers. Men and women, aged 18-65 years, with migraine but otherwise healthy, experiencing 2-7 attacks per month, were recruited from the general population. After a 4-week run-in phase, 48 subjects were randomized to receive either placebo or extended-release melatonin (Circadin®, Neurim Pharmaceuticals Ltd., Tel Aviv, Israel) at a dose of 2 mg 1 hour before bedtime for 8 weeks. After a 6-week washout treatment was switched. The primary outcome was migraine attack frequency (AF). A secondary endpoint was sleep quality assessed by the Pittsburgh Sleep Quality Index (PSQI).\n\n\nRESULTS\nForty-six subjects completed the study (96%). During the run-in phase, the average AF was 4.2 (±1.2) per month and during melatonin treatment the AF was 2.8 (±1.6). However, the reduction in AF during placebo was almost equal (p = 0.497). Absolute risk reduction was 3% (95% confidence interval -15 to 21, number needed to treat = 33). A highly significant time effect was found. The mean global PSQI score did not improve during treatment (p = 0.09).\n\n\nCONCLUSION\nThis study provides Class I evidence that prolonged-release melatonin (2 mg 1 hour before bedtime) does not provide any significant effect over placebo as migraine prophylaxis.\n\n\nCLASSIFICATION OF EVIDENCE\nThis study provides Class I evidence that 2 mg of prolonged release melatonin given 1 hour before bedtime for a duration of 8 weeks did not result in a reduction in migraine frequency compared with placebo (p = 0.497).",
"title": ""
},
{
"docid": "c35306b0ec722364308d332664c823f8",
"text": "The uniform asymmetrical microstrip parallel coupled line is used to design the multi-section unequal Wilkinson power divider with high dividing ratio. The main objective of the paper is to increase the trace widths in order to facilitate the construction of the power divider with the conventional photolithography method. The separated microstrip lines in the conventional Wilkinson power divider are replaced with the uniform asymmetrical parallel coupled lines. An even-odd mode analysis is used to calculate characteristic impedances and then the per-unit-length capacitance and inductance parameter matrix are used to calculate the physical dimension of the power divider. To clarify the advantages of this method, two three-section Wilkinson power divider with an unequal power-division ratio of 1 : 2.5 are designed and fabricated and measured, one in the proposed configuration and the other in the conventional configuration. The simulation and the measurement results show that not only the specified design goals are achieved, but also all the microstrip traces can be easily implemented in the proposed power divider.",
"title": ""
},
{
"docid": "1189c3648c2cce0c716ec7c0eca214d7",
"text": "This article considers the application of variational Bayesian methods to joint recursive estimation of the dynamic state and the time-varying measurement noise parameters in linear state space models. The proposed adaptive Kalman filtering method is based on forming a separable variational approximation to the joint posterior distribution of states and noise parameters on each time step separately. The result is a recursive algorithm, where on each step the state is estimated with Kalman filter and the sufficient statistics of the noise variances are estimated with a fixed-point iteration. The performance of the algorithm is demonstrated with simulated data.",
"title": ""
},
{
"docid": "e31a8952baf5b3099947d6b076f71dad",
"text": "Infrared images typically contain obvious strip noise. It is a challenging task to eliminate such noise without blurring fine image details in low-textured infrared images. In this paper, we introduce an effective single-image-based algorithm to accurately remove strip-type noise present in infrared images without causing blurring effects. First, a 1-D row guided filter is applied to perform edge-preserving image smoothing in the horizontal direction. The extracted high-frequency image part contains both strip noise and a significant amount of image details. Through a thermal calibration experiment, we discover that a local linear relationship exists between infrared data and strip noise of pixels within a column. Based on the derived strip noise behavioral model, strip noise components are accurately decomposed from the extracted high-frequency signals by applying a 1-D column guided filter. Finally, the estimated noise terms are subtracted from the raw infrared images to remove strips without blurring image details. The performance of the proposed technique is thoroughly investigated and is compared with the state-of-the-art 1-D and 2-D denoising algorithms using captured infrared images.",
"title": ""
},
{
"docid": "1b52822b76e7ace1f7e12a6f2c92b060",
"text": "We treated the mandibular retrusion of a 20-year-old man by distraction osteogenesis. Our aim was to avoid any visible discontinuities in the soft tissue profile that may result from conventional \"one-step\" genioplasty. The result was excellent. In addition to a good aesthetic outcome, there was increased bone formation not only between the two surfaces of the osteotomy but also adjacent to the distraction zone, resulting in improved coverage of the roots of the lower incisors. Only a few patients have been treated so far, but the method seems to hold promise for the treatment of extreme retrognathism, as these patients often have insufficient buccal bone coverage.",
"title": ""
},
{
"docid": "317860ca39eb412033a6c5e636285487",
"text": "While biometric systems aren't foolproof, the research community has made significant strides to identify vulnerabilities and develop measures to counter them.",
"title": ""
},
{
"docid": "472fa4ac09577955b2bc7f0674c37dfe",
"text": "BACKGROUND\n47 XXY/46 XX mosaicism with characteristics suggesting Klinefelter syndrome is very rare and at present, only seven cases have been reported in the literature.\n\n\nCASE PRESENTATION\nWe report an Indian boy diagnosed as variant of Klinefelter syndrome with 47 XXY/46 XX mosaicism at age 12 years. He was noted to have right cryptorchidism and chordae at birth, but did not have surgery for these until age 3 years. During surgery, the right gonad was atrophic and removed. Histology revealed atrophic ovarian tissue. Pelvic ultrasound showed no Mullerian structures. There was however no clinical follow up and he was raised as a boy. At 12 years old he was re-evaluated because of parental concern about his 'female' body habitus. He was slightly overweight, had eunuchoid body habitus with mild gynaecomastia. The right scrotal sac was empty and a 2mls testis was present in the left scrotum. Penile length was 5.2 cm and width 2.0 cm. There was absent pubic or axillary hair. Pronation and supination of his upper limbs were reduced and x-ray of both elbow joints revealed bilateral radioulnar synostosis. The baseline laboratory data were LH < 0.1 mIU/ml, FSH 1.4 mIU/ml, testosterone 0.6 nmol/L with raised estradiol, 96 pmol/L. HCG stimulation test showed poor Leydig cell response. The karyotype based on 76 cells was 47 XXY[9]/46 XX[67] with SRY positive. Laparoscopic examination revealed no Mullerian structures.\n\n\nCONCLUSION\nInsisting on an adequate number of cells (at least 50) to be examined during karyotyping is important so as not to miss diagnosing mosaicism.",
"title": ""
},
{
"docid": "5638ba62bcbfd1bd5e46b4e0dccf0d94",
"text": "Sentiment analysis aims to automatically uncover the underlying attitude that we hold towards an entity. The aggregation of these sentiment over a population represents opinion polling and has numerous applications. Current text-based sentiment analysis rely on the construction of dictionaries and machine learning models that learn sentiment from large text corpora. Sentiment analysis from text is currently widely used for customer satisfaction assessment and brand perception analysis, among others. With the proliferation of social media, multimodal sentiment analysis is set to bring new opportunities with the arrival of complementary data streams for improving and going beyond text-based sentiment analysis. Since sentiment can be detected through affective traces it leaves, such as facial and vocal displays, multimodal sentiment analysis offers promising avenues for analyzing facial and vocal expressions in addition to the transcript or textual content. These approaches leverage emotion recognition and context inference to determine the underlying polarity and scope of an individual’s sentiment. In this survey, we define sentiment and the problem of multimodal sentiment analysis and review recent developments in multimodal sentiment analysis in different domains, including spoken reviews, images, video blogs, human-machine and human-human interaction. Challenges and opportunities of this emerging field are also discussed leading to our thesis that multimodal sentiment analysis holds a significant untapped potential.",
"title": ""
},
{
"docid": "58e176bb818efed6de7224d7088f2487",
"text": "In the context of marketing, attribution is the process of quantifying the value of marketing activities relative to the final outcome. It is a topic rapidly growing in importance as acknowledged by the industry. However, despite numerous tools and techniques designed for its measurement, the absence of a comprehensive assessment and classification scheme persists. Thus, we aim to bridge this gap by providing an academic review to accumulate and comprehend current knowledge in attribution modeling, leading to a road map to guide future research, expediting new knowledge creation.",
"title": ""
},
{
"docid": "12f5447d9e83890c3e953e03a2e92c8f",
"text": "BACKGROUND\nLong-term continuous systolic blood pressure (SBP) and heart rate (HR) monitors are of tremendous value to medical (cardiovascular, circulatory and cerebrovascular management), wellness (emotional and stress tracking) and fitness (performance monitoring) applications, but face several major impediments, such as poor wearability, lack of widely accepted robust SBP models and insufficient proofing of the generalization ability of calibrated models.\n\n\nMETHODS\nThis paper proposes a wearable cuff-less electrocardiography (ECG) and photoplethysmogram (PPG)-based SBP and HR monitoring system and many efforts are made focusing on above challenges. Firstly, both ECG/PPG sensors are integrated into a single-arm band to provide a super wearability. A highly convenient but challenging single-lead configuration is proposed for weak single-arm-ECG acquisition, instead of placing the electrodes on the chest, or two wrists. Secondly, to identify heartbeats and estimate HR from the motion artifacts-sensitive weak arm-ECG, a machine learning-enabled framework is applied. Then ECG-PPG heartbeat pairs are determined for pulse transit time (PTT) measurement. Thirdly, a PTT&HR-SBP model is applied for SBP estimation, which is also compared with many PTT-SBP models to demonstrate the necessity to introduce HR information in model establishment. Fourthly, the fitted SBP models are further evaluated on the unseen data to illustrate the generalization ability. A customized hardware prototype was established and a dataset collected from ten volunteers was acquired to evaluate the proof-of-concept system.\n\n\nRESULTS\nThe semi-customized prototype successfully acquired from the left upper arm the PPG signal, and the weak ECG signal, the amplitude of which is only around 10% of that of the chest-ECG. The HR estimation has a mean absolute error (MAE) and a root mean square error (RMSE) of only 0.21 and 1.20 beats per min, respectively. Through the comparative analysis, the PTT&HR-SBP models significantly outperform the PTT-SBP models. The testing performance is 1.63 ± 4.44, 3.68, 4.71 mmHg in terms of mean error ± standard deviation, MAE and RMSE, respectively, indicating a good generalization ability on the unseen fresh data.\n\n\nCONCLUSIONS\nThe proposed proof-of-concept system is highly wearable, and its robustness is thoroughly evaluated on different modeling strategies and also the unseen data, which are expected to contribute to long-term pervasive hypertension, heart health and fitness management.",
"title": ""
},
{
"docid": "30e80cceb7e63f89c6ab0cd20988bedb",
"text": "This work is focused on the development of a new management system for building and home automation that provides a fully real time monitor of household appliances and home environmental parameters. The developed system consists of a smart sensing unit, wireless sensors and actuators and a Web-based interface for remote and mobile applications. The main advantages of the proposed solution rely on the reliability of the developed algorithmics, on modularity and open-system characteristics, on low power consumption and system cost efficiency.",
"title": ""
},
{
"docid": "98d8822a658dc7ecdfb7cb824c73e7a5",
"text": "We address the problem of generating query suggestions to support users in completing their underlying tasks (which motivated them to search in the first place). Given an initial query, these query suggestions should provide a coverage of possible subtasks the user might be looking for. We propose a probabilistic modeling framework that obtains keyphrases from multiple sources and generates query suggestions from these keyphrases. Using the test suites of the TREC Tasks track, we evaluate and analyze each component of our model.",
"title": ""
},
{
"docid": "d079bba6c4490bf00eb73541ebba8ace",
"text": "The literature on Design Science (or Design Research) has been mixed on the inclusion, form, and role of theory and theorising in Design Science. Some authors have explicitly excluded theory development and testing from Design Science, leaving them to the Natural and Social/Behavioural Sciences. Others propose including theory development and testing as part of Design Science. Others propose some ideas for the content of IS Design Theories, although more detailed and clear concepts would be helpful. This paper discusses the need and role for theory in Design Science. It further proposes some ideas for standards for the form and level of detail needed for theories in Design Science. Finally it develops a framework of activities for the interaction of Design Science with research in other scientific paradigms.",
"title": ""
},
{
"docid": "e472a8e75ddf72549aeb255aa3d6fb79",
"text": "In the presence of normal sensory and motor capacity, intelligent behavior is widely acknowledged to develop from the interaction of short-and long-term memory. While the behavioral, cellular, and molecular underpinnings of the long-term memory process have long been associated with the hippocampal formation, and this structure has become a major model system for the study of memory, the neural substrates of specific short-term memory functions have more and more become identified with prefrontal cortical areas (Goldman-Rakic, 1987; Fuster, 1989). The special nature of working memory was first identified in studies of human cognition and modern neuro-biological methods have identified a specific population of neurons, patterns of their intrinsic and extrinsic circuitry, and signaling molecules that are engaged in this process in animals. In this article, I will first define key features of working memory and then descdbe its biological basis in primates. Distinctive Features of a Working Memory System Working memory is the term applied to the type of memory that is active and relevant only for a short period of time, usually on the scale of seconds. A common example of working memory is keeping in mind a newly read phone number until it is dialed and then immediately forgotten. This process has been captu red by the analogy to a mental sketch pad (Baddeley, 1986) an~l is clearly different from the permanent inscription on neuronal circuitry due to learning. The criterion-useful or relevant only transiently distinguishes working memory from the processes that have been variously termed semantic (Tulving, 1972) or procedural (Squire and Cohen, 1984) memory, processes that can be considered associative in the traditional sense, i.e., information acquired by the repeated contiguity between stimuli and responses and/or consequences. If semantic and procedural memory are the processes by which stimuli and events acquire archival permanence , working memory is the process for the retrieval and proper utilization of this acquired knowledge. In this context, the contents of working memory are as much on the output side of long-term storage sites as they are an important source of input to those sites. Considerable evidence is now at hand to demonstrate that the brain obeys the distinction between working and other forms of memory , and that the prefrontal cortex has a preeminent role mainly in the former (Goldman.Rakic, 1987). However, memory-guided behavior obviously reflects the operation of a widely distributed system of brain structures and psychological functions, and understanding …",
"title": ""
},
{
"docid": "ce36cc78b512a2aafee8308a3f0ebd12",
"text": "BACKGROUND\nThe optimal ways of using aromatase inhibitors or tamoxifen as endocrine treatment for early breast cancer remains uncertain.\n\n\nMETHODS\nWe undertook meta-analyses of individual data on 31,920 postmenopausal women with oestrogen-receptor-positive early breast cancer in the randomised trials of 5 years of aromatase inhibitor versus 5 years of tamoxifen; of 5 years of aromatase inhibitor versus 2-3 years of tamoxifen then aromatase inhibitor to year 5; and of 2-3 years of tamoxifen then aromatase inhibitor to year 5 versus 5 years of tamoxifen. Primary outcomes were any recurrence of breast cancer, breast cancer mortality, death without recurrence, and all-cause mortality. Intention-to-treat log-rank analyses, stratified by age, nodal status, and trial, yielded aromatase inhibitor versus tamoxifen first-event rate ratios (RRs).\n\n\nFINDINGS\nIn the comparison of 5 years of aromatase inhibitor versus 5 years of tamoxifen, recurrence RRs favoured aromatase inhibitors significantly during years 0-1 (RR 0·64, 95% CI 0·52-0·78) and 2-4 (RR 0·80, 0·68-0·93), and non-significantly thereafter. 10-year breast cancer mortality was lower with aromatase inhibitors than tamoxifen (12·1% vs 14·2%; RR 0·85, 0·75-0·96; 2p=0·009). In the comparison of 5 years of aromatase inhibitor versus 2-3 years of tamoxifen then aromatase inhibitor to year 5, recurrence RRs favoured aromatase inhibitors significantly during years 0-1 (RR 0·74, 0·62-0·89) but not while both groups received aromatase inhibitors during years 2-4, or thereafter; overall in these trials, there were fewer recurrences with 5 years of aromatase inhibitors than with tamoxifen then aromatase inhibitors (RR 0·90, 0·81-0·99; 2p=0·045), though the breast cancer mortality reduction was not significant (RR 0·89, 0·78-1·03; 2p=0·11). In the comparison of 2-3 years of tamoxifen then aromatase inhibitor to year 5 versus 5 years of tamoxifen, recurrence RRs favoured aromatase inhibitors significantly during years 2-4 (RR 0·56, 0·46-0·67) but not subsequently, and 10-year breast cancer mortality was lower with switching to aromatase inhibitors than with remaining on tamoxifen (8·7% vs 10·1%; 2p=0·015). Aggregating all three types of comparison, recurrence RRs favoured aromatase inhibitors during periods when treatments differed (RR 0·70, 0·64-0·77), but not significantly thereafter (RR 0·93, 0·86-1·01; 2p=0·08). Breast cancer mortality was reduced both while treatments differed (RR 0·79, 0·67-0·92), and subsequently (RR 0·89, 0·81-0·99), and for all periods combined (RR 0·86, 0·80-0·94; 2p=0·0005). All-cause mortality was also reduced (RR 0·88, 0·82-0·94; 2p=0·0003). RRs differed little by age, body-mass index, stage, grade, progesterone receptor status, or HER2 status. There were fewer endometrial cancers with aromatase inhibitors than tamoxifen (10-year incidence 0·4% vs 1·2%; RR 0·33, 0·21-0·51) but more bone fractures (5-year risk 8·2% vs 5·5%; RR 1·42, 1·28-1·57); non-breast-cancer mortality was similar.\n\n\nINTERPRETATION\nAromatase inhibitors reduce recurrence rates by about 30% (proportionately) compared with tamoxifen while treatments differ, but not thereafter. 5 years of an aromatase inhibitor reduces 10-year breast cancer mortality rates by about 15% compared with 5 years of tamoxifen, hence by about 40% (proportionately) compared with no endocrine treatment.\n\n\nFUNDING\nCancer Research UK, Medical Research Council.",
"title": ""
},
{
"docid": "4e621b825deb27115cc9b98bec849b34",
"text": "veryone who ever taught project management seems to have a favorite disaster story, whether it's the new Denver airport baggage handling system, the London Stock Exchange, or the French Railways. Many of us would point to deficiencies in the software engineering activities that seemingly guarantee failure. It is indeed important that we understand how to engineer systems well, but we must also consider a wider viewpoint: success requires much more than good engineering. We must understand why we are engineering anything at all, and what the investment in money, time, and energy is all about. Who wants the software? In what context will it be applied? Who is paying for it and what do they hope to get from it? This special focus of IEEE Software takes that wider viewpoint and examines different views of how we can achieve success in software projects. The arguments and projects described here are not solely concerned with software but with the otherdeliverables that typically make for a required outcome: fully trained people; a new operational model for a business; the realization of an organizational strategy that depends on new information systems. What critical factors played a role in your last project success? Join us on this intellectual journey from soft ware engineering to business strategy.",
"title": ""
},
{
"docid": "1faaf86a7f43f6921d8c754fbc9ea0e1",
"text": "Department of Mechanical Engineering, Politécnica/COPPE, Federal University of Rio de Janeiro, UFRJ, Cid. Universitaria, Cx. Postal: 68503, Rio de Janeiro, RJ, 21941-972, Brazil, helcio@mecanica.ufrj.br, colaco@ufrj.br, wellingtonuff@yahoo.com.br, hmassardf@gmail.com Department of Mechanical and Materials Engineering, Florida International University, 10555 West Flagler Street, EC 3462, Miami, Florida 33174, U.S.A., dulikrav@fiu.edu Department of Subsea Technology, Petrobras Research and Development Center – CENPES, Av. Horácio Macedo, 950, Cidade Universitária, Ilha do Fundão, 21941-915, Rio de Janeiro, RJ, Brazil, fvianna@petrobras.com.br Université de Toulouse ; Mines Albi ; CNRS; Centre RAPSODEE, Campus Jarlard, F-81013 Albi cedex 09, France, olivier.fudym@enstimac.fr",
"title": ""
},
{
"docid": "13685fa8e74d57d05d5bce5b1d3d4c93",
"text": "Children left behind by parents who are overseas Filipino workers (OFW) benefit from parental migration because their financial status improves. However, OFW families might emphasize the economic benefits to compensate for their separation, which might lead to materialism among children left behind. Previous research indicates that materialism is associated with lower well-being. The theory is that materialism focuses attention on comparing one's possessions to others, making one constantly dissatisfied and wanting more. Research also suggests that gratitude mediates this link, with the focus on acquiring more possessions that make one less grateful for current possessions. This study explores the links between materialism, gratitude, and well-being among 129 adolescent children of OFWs. The participants completed measures of materialism, gratitude, and well-being (life satisfaction, self-esteem, positive and negative affect). Results showed that gratitude mediated the negative relationship between materialism and well-being (and its positive relationship with negative affect). Children of OFWs who have strong materialist orientation seek well-being from possessions they do not have and might find it difficult to be grateful of their situation, contributing to lower well-being. The findings provide further evidence for the mediated relationship between materialism and well-being in a population that has not been previously studied in the related literature. The findings also point to two possible targets for psychosocial interventions for families and children of OFWs.",
"title": ""
}
] |
scidocsrr
|
d7b060714639c5b7368add8acb3857d8
|
Javelin: Internet-based Parallel Computing using Java
|
[
{
"docid": "8affe5fdccc31b9b82b3173217715457",
"text": "The PVM system is a programming environment for the development and execution of large concurrent or parallel applications that consist of many interacting, but relatively independent, components. It is intended to operate on a collection of heterogeneous computing elements interconnected by one or more networks. The participating processors may be scalar machines, multiprocessors, or special-purpose computers, enabling application components to execute on the architecture most appropriate to the algorithm. PVM provides a straightforward and general interface that permits the description of various types of algorithms (and their interactions), while the underlying infrastructure permits the execution of applications on a virtual computing environment that supports multiple parallel computation models. PVM contains facilities for concurrent, sequential, or conditional execution of application components, is portable to a variety of architectures, and supports certain forms of error detection and recovery.",
"title": ""
}
] |
[
{
"docid": "460a296de1bd13378d71ce19ca5d807a",
"text": "Many books discuss applications of data mining. For financial data analysis and financial modeling, see Benninga and Czaczkes [BC00] and Higgins [Hig03]. For retail data mining and customer relationship management, see books by Berry and Linoff [BL04] and Berson, Smith, and Thearling [BST99], and the article by Kohavi [Koh01]. For telecommunication-related data mining, see the book by Mattison [Mat97]. Chen, Hsu, and Dayal [CHD00] reported their work on scalable telecommunication tandem traffic analysis under a data warehouse/OLAP framework. For bioinformatics and biological data analysis, there are a large number of introductory references and textbooks. An introductory overview of bioinformatics for computer scientists was presented by Cohen [Coh04]. Recent textbooks on bioinformatics include Krane and Raymer [KR03], Jones and Pevzner [JP04], Durbin, Eddy, Krogh and Mitchison [DEKM98], Setubal and Meidanis [SM97], Orengo, Jones, and Thornton [OJT03], and Pevzner [Pev03]. Summaries of biological data analysis methods and algorithms can also be found in many other books, such as Gusfield [Gus97], Waterman [Wat95], Baldi and Brunak [BB01], and Baxevanis and Ouellette [BO04]. There are many books on scientific data analysis, such as Grossman, Kamath, Kegelmeyer, et al. (eds.) [GKK01]. For geographic data mining, see the book edited by Miller and Han [MH01]. Valdes-Perez [VP99] discusses the principles of human-computer collaboration for knowledge discovery in science. For intrusion detection, see Barbará [Bar02] and Northcutt and Novak [NN02].",
"title": ""
},
{
"docid": "f845508acabb985dd80c31774776e86b",
"text": "In this paper, we introduce two input devices for wearable computers, called GestureWrist and GesturePad. Both devices allow users to interact with wearable or nearby computers by using gesture-based commands. Both are designed to be as unobtrusive as possible, so they can be used under various social contexts. The first device, called GestureWrist, is a wristband-type input device that recognizes hand gestures and forearm movements. Unlike DataGloves or other hand gesture-input devices, all sensing elements are embedded in a normal wristband. The second device, called GesturePad, is a sensing module that can be attached on the inside of clothes, and users can interact with this module from the outside. It transforms conventional clothes into an interactive device without changing their appearance.",
"title": ""
},
{
"docid": "71e786ccfc57ad62e90dd4a7b85cbedd",
"text": "Studies addressing behavioral functions of dopamine (DA) in the nucleus accumbens septi (NAS) are reviewed. A role of NAS DA in reward has long been suggested. However, some investigators have questioned the role of NAS DA in rewarding effects because of its role in aversive contexts. As findings supporting the role of NAS DA in mediating aversively motivated behaviors accumulate, it is necessary to accommodate such data for understanding the role of NAS DA in behavior. The aim of the present paper is to provide a unifying interpretation that can account for the functions of NAS DA in a variety of behavioral contexts: (1) its role in appetitive behavioral arousal, (2) its role as a facilitator as well as an inducer of reward processes, and (3) its presently undefined role in aversive contexts. The present analysis suggests that NAS DA plays an important role in sensorimotor integrations that facilitate flexible approach responses. Flexible approach responses are contrasted with fixed instrumental approach responses (habits), which may involve the nigro-striatal DA system more than the meso-accumbens DA system. Functional properties of NAS DA transmission are considered in two stages: unconditioned behavioral invigoration effects and incentive learning effects. (1) When organisms are presented with salient stimuli (e.g., novel stimuli and incentive stimuli), NAS DA is released and invigorates flexible approach responses (invigoration effects). (2) When proximal exteroceptive receptors are stimulated by unconditioned stimuli, NAS DA is released and enables stimulus representations to acquire incentive properties within specific environmental context. It is important to make a distinction that NAS DA is a critical component for the conditional formation of incentive representations but not the retrieval of incentive stimuli or behavioral expressions based on over-learned incentive responses (i.e., habits). Nor is NAS DA essential for the cognitive perception of environmental stimuli. Therefore, even without normal NAS DA transmission, the habit response system still allows animals to perform instrumental responses given that the tasks take place in fixed environment. Such a role of NAS DA as an incentive-property constructor is not limited to appetitive contexts but also aversive contexts. This dual action of NAS DA in invigoration and incentive learning may explain the rewarding effects of NAS DA as well as other effects of NAS DA in a variety of contexts including avoidance and unconditioned/conditioned increases in open-field locomotor activity. Particularly, the present hypothesis offers the following interpretation for the finding that both conditioned and unconditioned aversive stimuli stimulate DA release in the NAS: NAS DA invigorates approach responses toward 'safety'. Moreover, NAS DA modulates incentive properties of the environment so that organisms emit approach responses toward 'safety' (i.e., avoidance responses) when animals later encounter similar environmental contexts. There may be no obligatory relationship between NAS DA release and positive subjective effects, even though these systems probably interact with other brain systems which can mediate such effects. The present conceptual framework may be valuable in understanding the dynamic interplay of NAS DA neurochemistry and behavior, both normal and pathophysiological.",
"title": ""
},
{
"docid": "6300f94dbfa58583e15741e5c86aa372",
"text": "In this paper, we study the problem of retrieving a ranked list of top-N items to a target user in recommender systems. We first develop a novel preference model by distinguishing different rating patterns of users, and then apply it to existing collaborative filtering (CF) algorithms. Our preference model, which is inspired by a voting method, is well-suited for representing qualitative user preferences. In particular, it can be easily implemented with less than 100 lines of codes on top of existing CF algorithms such as user-based, item-based, and matrix-factorizationbased algorithms. When our preference model is combined to three kinds of CF algorithms, experimental results demonstrate that the preference model can improve the accuracy of all existing CF algorithms such as ATOP and NDCG@25 by 3%–24% and 6%–98%, respectively.",
"title": ""
},
{
"docid": "8fcc8c61dd99281cfda27bbad4b7623a",
"text": "Modern data centers are massive, and support a range of distributed applications across potentially hundreds of server racks. As their utilization and bandwidth needs continue to grow, traditional methods of augmenting bandwidth have proven complex and costly in time and resources. Recent measurements show that data center traffic is often limited by congestion loss caused by short traffic bursts. Thus an attractive alternative to adding physical bandwidth is to augment wired links with wireless links in the 60 GHz band.\n We address two limitations with current 60 GHz wireless proposals. First, 60 GHz wireless links are limited by line-of-sight, and can be blocked by even small obstacles. Second, even beamforming links leak power, and potential interference will severely limit concurrent transmissions in dense data centers. We propose and evaluate a new wireless primitive for data centers, 3D beamforming, where 60 GHz signals bounce off data center ceilings, thus establishing indirect line-of-sight between any two racks in a data center. We build a small 3D beamforming testbed to demonstrate its ability to address both link blockage and link interference, thus improving link range and number of concurrent transmissions in the data center. In addition, we propose a simple link scheduler and use traffic simulations to show that these 3D links significantly expand wireless capacity compared to their 2D counterparts.",
"title": ""
},
{
"docid": "09fa74b0a83e040beb5612e6eeb4089c",
"text": "Mapping word embeddings of different languages into a single space has multiple applications. In order to map from a source space into a target space, a common approach is to learn a linear mapping that minimizes the distances between equivalences listed in a bilingual dictionary. In this paper, we propose a framework that generalizes previous work, provides an efficient exact method to learn the optimal linear transformation and yields the best bilingual results in translation induction while preserving monolingual performance in an analogy task.",
"title": ""
},
{
"docid": "eb350f3e61333f7bcd6f9bade1151f4b",
"text": "This Internet research project examined the relationship between consumption of muscle and fitness magazines and/or various indices of pornography and body satisfaction in gay and heterosexual men. Participants (N = 101) were asked to complete body satisfaction questionnaires that addressed maladaptive eating attitudes, the drive for muscularity, and social physique anxiety. Participants also completed scales measuring self-esteem, depression, and socially desirable responding. Finally, respondents were asked about their consumption of muscle and fitness magazines and pornography. Results indicated that viewing and purchasing of muscle and fitness magazines correlated positively with levels of body dissatisfaction for both gay and heterosexual men. Pornography exposure was positively correlated with social physique anxiety for gay men. The limitations of this study and directions for future research are outlined.",
"title": ""
},
{
"docid": "6e1013e84468c3809742bbe826598f21",
"text": "Many-light rendering methods replace multi-bounce light transport with direct lighting from many virtual point light sources to allow for simple and efficient computation of global illumination. Lightcuts build a hierarchy over virtual lights, so that surface points can be shaded with a sublinear number of lights while minimizing error. However, the original algorithm needs to run on every shading point of the rendered image. It is well known that the performance of Lightcuts can be improved by exploiting the coherence between individual cuts. We propose a novel approach where we invest into the initial lightcut creation at representative cache records, and then directly interpolate the input lightcuts themselves as well as per-cluster visibility for neighboring shading points. This allows us to improve upon the performance of the original Lightcuts algorithm by a factor of 4−8 compared to an optimized GPU-implementation of Lightcuts, while introducing only a small additional approximation error. The GPU-implementation of our technique enables us to create previews of Lightcuts-based global illumination renderings.",
"title": ""
},
{
"docid": "8e7c2943eb6df575bf847cd67b6424dc",
"text": "Today, money laundering poses a serious threat not only to financial institutions but also to the nation. This criminal activity is becoming more and more sophisticated and seems to have moved from the cliché, of drug trafficking to financing terrorism and surely not forgetting personal gain. Most international financial institutions have been implementing anti-money laundering solutions to fight investment fraud. However, traditional investigative techniques consume numerous man-hours. Recently, data mining approaches have been developed and are considered as well-suited techniques for detecting money laundering activities. Within the scope of a collaboration project for the purpose of developing a new solution for the anti-money laundering Units in an international investment bank, we proposed a simple and efficient data mining-based solution for anti-money laundering. In this paper, we present this solution developed as a tool and show some preliminary experiment results with real transaction datasets.",
"title": ""
},
{
"docid": "b6ee2327d8e7de5ede72540a378e69a0",
"text": "Heads of Government from Asia and the Pacific have committed to a malaria-free region by 2030. In 2015, the total number of confirmed cases reported to the World Health Organization by 22 Asia Pacific countries was 2,461,025. However, this was likely a gross underestimate due in part to incidence data not being available from the wide variety of known sources. There is a recognized need for an accurate picture of malaria over time and space to support the goal of elimination. A survey was conducted to gain a deeper understanding of the collection of malaria incidence data for surveillance by National Malaria Control Programmes in 22 countries identified by the Asia Pacific Leaders Malaria Alliance. In 2015–2016, a short questionnaire on malaria surveillance was distributed to 22 country National Malaria Control Programmes (NMCP) in the Asia Pacific. It collected country-specific information about the extent of inclusion of the range of possible sources of malaria incidence data and the role of the private sector in malaria treatment. The findings were used to produce recommendations for the regional heads of government on improving malaria surveillance to inform regional efforts towards malaria elimination. A survey response was received from all 22 target countries. Most of the malaria incidence data collected by NMCPs originated from government health facilities, while many did not collect comprehensive data from mobile and migrant populations, the private sector or the military. All data from village health workers were included by 10/20 countries and some by 5/20. Other sources of data included by some countries were plantations, police and other security forces, sentinel surveillance sites, research or academic institutions, private laboratories and other government ministries. Malaria was treated in private health facilities in 19/21 countries, while anti-malarials were available in private pharmacies in 16/21 and private shops in 6/21. Most countries use primarily paper-based reporting. Most collected malaria incidence data in the Asia Pacific is from government health facilities while data from a wide variety of other known sources are often not included in national surveillance databases. In particular, there needs to be a concerted regional effort to support inclusion of data on mobile and migrant populations and the private sector. There should also be an emphasis on electronic reporting and data harmonization across organizations. This will provide a more accurate and up to date picture of the true burden and distribution of malaria and will be of great assistance in helping realize the goal of malaria elimination in the Asia Pacific by 2030.",
"title": ""
},
{
"docid": "f39866785eb7c140ba421233a410ad0a",
"text": "Lane detection is a fundamental aspect of most current advanced driver assistance systems U+0028 ADASs U+0029. A large number of existing results focus on the study of vision-based lane detection methods due to the extensive knowledge background and the low-cost of camera devices. In this paper, previous vision-based lane detection studies are reviewed in terms of three aspects, which are lane detection algorithms, integration, and evaluation methods. Next, considering the inevitable limitations that exist in the camera-based lane detection system, the system integration methodologies for constructing more robust detection systems are reviewed and analyzed. The integration methods are further divided into three levels, namely, algorithm, system, and sensor. Algorithm level combines different lane detection algorithms while system level integrates other object detection systems to comprehensively detect lane positions. Sensor level uses multi-modal sensors to build a robust lane recognition system. In view of the complexity of evaluating the detection system, and the lack of common evaluation procedure and uniform metrics in past studies, the existing evaluation methods and metrics are analyzed and classified to propose a better evaluation of the lane detection system. Next, a comparison of representative studies is performed. Finally, a discussion on the limitations of current lane detection systems and the future developing trends toward an Artificial Society, Computational experiment-based parallel lane detection framework is proposed.",
"title": ""
},
{
"docid": "d84477c849e1ff45a405d38f9d5662f2",
"text": "We analyze the localized setting of learning kernels also known as localized multiple kernel learning. This problem has been addressed in the past using rather heuristic approaches based on approximately optimizing non-convex problem formulations, of which up to now no theoretical learning bounds are known. In this paper, we show generalization error bounds for learning localized kernel classes where the localities are coupled using graph-based regularization. We propose a novel learning localized kernels algorithm based on this hypothesis class that is formulated as a convex optimization problem using a pre-obtained cluster structure of the data. We derive dual representations using Fenchel conjugation theory, based on which we give a simple yet efficient wrapper-based optimization algorithm. We apply the method to problems involving multiple heterogeneous data sources, taken from domains of computational biology and computer vision. The results show that the proposed convex approach to learning localized kernels can achieve higher prediction accuracies than its global and non-convex local counterparts.",
"title": ""
},
{
"docid": "f8f00576f55e24a06b6c930c0cc39a85",
"text": "An integrated navigation information system must know continuously the current position with a good precision. The required performance of the positioning module is achieved by using a cluster of heterogeneous sensors whose measurements are fused. The most popular data fusion method for positioning problems is the extended Kalman filter. The extended Kalman filter is a variation of the Kalman filter used to solve non-linear problems. Recently, an improvement to the extended Kalman filter has been proposed, the unscented Kalman filter. This paper describes an empirical analysis evaluating the performances of the unscented Kalman filter and comparing them with the extended Kalman filter's performances.",
"title": ""
},
{
"docid": "39fc05dfc0faeb47728b31b6053c040a",
"text": "Attempted and completed self-enucleation, or removal of one's own eyes, is a rare but devastating form of self-mutilation behavior. It is often associated with psychiatric disorders, particularly schizophrenia, substance induced psychosis, and bipolar disorder. We report a case of a patient with a history of bipolar disorder who gouged his eyes bilaterally as an attempt to self-enucleate himself. On presentation, the patient was manic with both psychotic features of hyperreligous delusions and command auditory hallucinations of God telling him to take his eyes out. On presentation, the patient had no light perception vision in both eyes and his exam displayed severe proptosis, extensive conjunctival lacerations, and visibly avulsed extraocular muscles on the right side. An emergency computed tomography scan of the orbits revealed small and irregular globes, air within the orbits, and intraocular hemorrhage. He was taken to the operating room for surgical repair of his injuries. Attempted and completed self-enucleation is most commonly associated with schizophrenia and substance induced psychosis, but can also present in patients with bipolar disorder. Other less commonly associated disorders include obsessive-compulsive disorder, depression, mental retardation, neurosyphilis, Lesch-Nyhan syndrome, and structural brain lesions.",
"title": ""
},
{
"docid": "353d6ed75f2a4bca5befb5fdbcea2bcc",
"text": "BACKGROUND\nThe number of mental health apps (MHapps) developed and now available to smartphone users has increased in recent years. MHapps and other technology-based solutions have the potential to play an important part in the future of mental health care; however, there is no single guide for the development of evidence-based MHapps. Many currently available MHapps lack features that would greatly improve their functionality, or include features that are not optimized. Furthermore, MHapp developers rarely conduct or publish trial-based experimental validation of their apps. Indeed, a previous systematic review revealed a complete lack of trial-based evidence for many of the hundreds of MHapps available.\n\n\nOBJECTIVE\nTo guide future MHapp development, a set of clear, practical, evidence-based recommendations is presented for MHapp developers to create better, more rigorous apps.\n\n\nMETHODS\nA literature review was conducted, scrutinizing research across diverse fields, including mental health interventions, preventative health, mobile health, and mobile app design.\n\n\nRESULTS\nSixteen recommendations were formulated. Evidence for each recommendation is discussed, and guidance on how these recommendations might be integrated into the overall design of an MHapp is offered. Each recommendation is rated on the basis of the strength of associated evidence. It is important to design an MHapp using a behavioral plan and interactive framework that encourages the user to engage with the app; thus, it may not be possible to incorporate all 16 recommendations into a single MHapp.\n\n\nCONCLUSIONS\nRandomized controlled trials are required to validate future MHapps and the principles upon which they are designed, and to further investigate the recommendations presented in this review. Effective MHapps are required to help prevent mental health problems and to ease the burden on health systems.",
"title": ""
},
{
"docid": "70f35b19ba583de3b9942d88c94b9148",
"text": "ARCHEOGUIDE (Augmented Reality-based Cultural Heritage On-site GUIDE) is an IST project, funded by the EU, aiming at providing a personalized Virtual Reality guide and tour assistant to archaeological site visitors and a multimedia repository and information system for archaeologists and site curators. The system provides monument reconstructions, ancient life simulation, and database tools for creating and archiving archaeological multimedia material.",
"title": ""
},
{
"docid": "d5d2e1feeb2d0bf2af49e1d044c9e26a",
"text": "ISSN: 2167-0811 (Print) 2167-082X (Online) Journal homepage: http://www.tandfonline.com/loi/rdij20 Algorithmic Transparency in the News Media Nicholas Diakopoulos & Michael Koliska To cite this article: Nicholas Diakopoulos & Michael Koliska (2016): Algorithmic Transparency in the News Media, Digital Journalism, DOI: 10.1080/21670811.2016.1208053 To link to this article: http://dx.doi.org/10.1080/21670811.2016.1208053",
"title": ""
},
{
"docid": "9f76ca13fd4e61905f82a1009982adb9",
"text": "Image segmentation is an important processing step in many image, video and computer vision applications. Extensive research has been done in creating many different approaches and algorithms for image segmentation, but it is still difficult to assess whether one algorithm produces more accurate segmentations than another, whether it be for a particular image or set of images, or more generally, for a whole class of images. To date, the most common method for evaluating the effectiveness of a segmentation method is subjective evaluation, in which a human visually compares the image segmentation results for separate segmentation algorithms, which is a tedious process and inherently limits the depth of evaluation to a relatively small number of segmentation comparisons over a predetermined set of images. Another common evaluation alternative is supervised evaluation, in which a segmented image is compared against a manuallysegmented or pre-processed reference image. Evaluation methods that require user assistance, such as subjective evaluation and supervised evaluation, are infeasible in many vision applications, so unsupervised methods are necessary. Unsupervised evaluation enables the objective comparison of both different segmentation methods and different parameterizations of a single method, without requiring human visual comparisons or comparison with a manually-segmented or pre-processed reference image. Additionally, unsupervised methods generate results for individual images and images whose characteristics may not be known until evaluation time. Unsupervised methods are crucial to real-time segmentation evaluation, and can furthermore enable self-tuning of algorithm parameters based on evaluation results. In this paper, we examine the unsupervised objective evaluation methods that have been proposed in the literature. An extensive evaluation of these methods are presented. The advantages and shortcomings of the underlying design mechanisms in these methods are discussed and analyzed through analytical evaluation and empirical evaluation. Finally, possible future directions for research in unsupervised evaluation are proposed. 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "27dc2972f39f613b08217c6b2486220b",
"text": "Handwritten character recognition is always an interesting area of pattern recognition for research in the field of image processing. Many researchers have presented their work in this area and still research is undergoing to achieve high accuracy. This paper is mainly concerned for the people who are working on the character recognition and review of work to recognize handwritten character for various Indian languages. The objective of this paper is to describe the set of preprocessing, segmentation, feature extraction and classification techniques.",
"title": ""
},
{
"docid": "3b75d996f21af68a0cd4d49ef7d4e10e",
"text": "Observational studies suggest that including men in reproductive health interventions can enhance positive health outcomes. A randomized controlled trial was designed to test the impact of involving male partners in antenatal health education on maternal health care utilization and birth preparedness in urban Nepal. In total, 442 women seeking antenatal services during second trimester of pregnancy were randomized into three groups: women who received education with their husbands, women who received education alone and women who received no education. The education intervention consisted of two 35-min health education sessions. Women were followed until after delivery. Women who received education with husbands were more likely to attend a post-partum visit than women who received education alone [RR = 1.25, 95% CI = (1.01, 1.54)] or no education [RR = 1.29, 95% CI = (1.04, 1.60)]. Women who received education with their husbands were also nearly twice as likely as control group women to report making >3 birth preparations [RR = 1.99, 95% CI = (1.10, 3.59)]. Study groups were similar with respect to attending the recommended number of antenatal care checkups, delivering in a health institution or having a skilled provider at birth. These data provide evidence that educating pregnant women and their male partners yields a greater net impact on maternal health behaviors compared with educating women alone.",
"title": ""
}
] |
scidocsrr
|
9bb02d8f26d1a73a2e11ef6a8c6fe2b9
|
A CPPS Architecture approach for Industry 4.0
|
[
{
"docid": "13c0f622205a67e2d026e9eb097df0e3",
"text": "This paper presents an approach to how existing production systems that are not Industry 4.0-ready can be expanded to participate in an Industry 4.0 factory. Within this paper, a concept is presented how production systems can be discovered and included into an Industry 4.0 (I4.0) environment, even though they did not have I4.0interfaces when they have been manufactured. The concept is based on a communication gateway and an information server. Besides the concept itself, this paper presents a validation that demonstrates applicability of the developed concept.",
"title": ""
}
] |
[
{
"docid": "45a92ab90fabd875a50229921e99dfac",
"text": "This paper describes an empirical study of the problems encountered by 32 blind users on the Web. Task-based user evaluations were undertaken on 16 websites, yielding 1383 instances of user problems. The results showed that only 50.4% of the problems encountered by users were covered by Success Criteria in the Web Content Accessibility Guidelines 2.0 (WCAG 2.0). For user problems that were covered by WCAG 2.0, 16.7% of websites implemented techniques recommended in WCAG 2.0 but the techniques did not solve the problems. These results show that few developers are implementing the current version of WCAG, and even when the guidelines are implemented on websites there is little indication that people with disabilities will encounter fewer problems. The paper closes by discussing the implications of this study for future research and practice. In particular, it discusses the need to move away from a problem-based approach towards a design principle approach for web accessibility.",
"title": ""
},
{
"docid": "59db435e906db2c198afdc5cc7c7de2c",
"text": "Although the recent advances in the sparse representations of images have achieved outstanding denosing results, removing real, structured noise in digital videos remains a challenging problem. We show the utility of reliable motion estimation to establish temporal correspondence across frames in order to achieve high-quality video denoising. In this paper, we propose an adaptive video denosing framework that integrates robust optical flow into a non-local means (NLM) framework with noise level estimation. The spatial regularization in optical flow is the key to ensure temporal coherence in removing structured noise. Furthermore, we introduce approximate K-nearest neighbor matching to significantly reduce the complexity of classical NLM methods. Experimental results show that our system is comparable with the state of the art in removing AWGN, and significantly outperforms the state of the art in removing real, structured noise.",
"title": ""
},
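The denoising passage above hinges on non-local means averaging. Below is a minimal sketch of that core weighting step on a 1-D signal; the patch size, search window and filtering strength are illustrative assumptions, and the optical-flow and noise-estimation parts of the paper's framework are not reproduced here.

```python
# Hedged sketch: 1-D non-local means averaging with Gaussian patch weights.
import numpy as np

def nlm_1d(signal, patch=3, search=10, h=0.3):
    half = patch // 2
    padded = np.pad(signal, half, mode="reflect")
    out = np.zeros_like(signal, dtype=float)
    for i in range(len(signal)):
        ref = padded[i:i + patch]                      # reference patch around sample i
        lo, hi = max(0, i - search), min(len(signal), i + search + 1)
        weights, values = [], []
        for j in range(lo, hi):
            cand = padded[j:j + patch]
            d2 = np.mean((ref - cand) ** 2)            # patch similarity
            weights.append(np.exp(-d2 / (h ** 2)))
            values.append(signal[j])
        w = np.array(weights)
        out[i] = np.dot(w, values) / w.sum()
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.sin(np.linspace(0, 4 * np.pi, 200))
    noisy = clean + rng.normal(0, 0.2, size=200)
    # denoised MSE should come out lower than the noisy MSE
    print(np.mean((nlm_1d(noisy) - clean) ** 2), np.mean((noisy - clean) ** 2))
```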
{
"docid": "1b29aa20e82dba0992634d3a178ad0c5",
"text": "This paper presents the approach developed for the partial MASPS level document DO-344 “Operational and Functional Requirements and Safety Objectives” for the UAS standards. Previous RTCA1 work led to the production of an Operational Services Environment Description document, from which operational requirements were extracted and refined. Following the principles described in the Department of Defense Architecture Framework, the overall UAS architecture and major interfaces were defined. Interacting elements included the unmanned aircraft (airborne component), the ground control station (ground component), the Air Traffic Control (ATC), the Air Traffic Service besides ATC, other traffic in the NAS, and the UAS ground support. Furthering the level of details, a functional decomposition was produced prior to the allocation onto the UAS architecture. These functions cover domains including communication, control, navigation, surveillance, and health monitoring. The communication function addressed all elements in the UAS connected with external interfaces: the airborne component, the ground component, the ATC, the other traffic and the ground support. The control function addressed the interface between the ground control station and the unmanned aircraft for the purpose of flying in the NAS. The navigation function covered the capability to determine and fly a trajectory using conventional and satellite based navigation means. The surveillance function addressed the capability to detect and avoid collisions with hazards, including other traffic, terrain and obstacles, and weather. Finally, the health monitoring function addressed the capability to oversee UAS systems, probe for their status and feedback issues related to degradation or loss of performance. An additional function denoted `manage' was added to the functional decomposition to complement the heath monitoring coverage and included manual modes for the operation of the UAS.",
"title": ""
},
{
"docid": "f8c6906f4d0deb812e42aaaff457a6d9",
"text": "By the early 1900s, Euro-Americans had extirpated gray wolves (Canis lupus) from most of the contiguous United States. Yellowstone National Park was not immune to wolf persecution and by the mid-1920s they were gone. After seven decades of absence in the park, gray wolves were reintroduced in 1995–1996, again completing the large predator guild (Smith et al. 2003). Yellowstone’s ‘‘experiment in time’’ thus provides a rare opportunity for studying potential cascading effects associated with the extirpation and subsequent reintroduction of an apex predator. Wolves represent a particularly important predator of large mammalian prey in northern hemisphere ecosystems by virtue of their group hunting and year-round activity (Peterson et al. 2003) and can have broad top-down effects on the structure and functioning of these systems (Miller et al. 2001, Soulé et al. 2003, Ray et al. 2005). If a tri-trophic cascade involving wolves–elk (Cervus elaphus)–plants is again underway in northern Yellowstone, theory would suggest two primary mechanisms: (1) density mediation through prey mortality and (2) trait mediation involving changes in prey vigilance, habitat use, and other behaviors (Brown et al. 1999, Berger 2010). Both predator-caused reductions in prey numbers and fear responses they elicit in prey can lead to cascading trophic-level effects across a wide range of biomes (Beschta and Ripple 2009, Laundré et al. 2010, Terborgh and Estes 2010). Thus, the occurrence of a trophic cascade could have important implications not only to the future structure and functioning of northern Yellowstone’s ecosystems but also for other portions of the western United States where wolves have been reintroduced, are expanding their range, or remain absent. However, attempting to identify the occurrence of a trophic cascade in systems with large mammalian predators, as well as the relative importance of density and behavioral mediation, represents a continuing scientific challenge. In Yellowstone today, there is an ongoing effort by various researchers to evaluate ecosystem processes in the park’s two northern ungulate winter ranges: (1) the ‘‘Northern Range’’ along the northern edge of the park (NRC 2002, Barmore 2003) and (2) the ‘‘Upper Gallatin Winter Range’’ along the northwestern corner of the park (Ripple and Beschta 2004b). Previous studies in northern Yellowstone have generally found that elk, in the absence of wolves, caused a decrease in aspen (Populus tremuloides) recruitment (i.e., the growth of seedlings or root sprouts above the browse level of elk). Within this context, Kauffman et al. (2010) initiated a study to provide additional understanding of factors such as elk density, elk behavior, and climate upon historical and contemporary patterns of aspen recruitment in the park’s Northern Range. Like previous studies, Kauffman et al. (2010) concluded that, irrespective of historical climatic conditions, elk have had a major impact on long-term aspen communities after the extirpation of wolves. But, unlike other studies that have seen improvement in the growth or recruitment of young aspen and other browse species in recent years, Kauffman et al. (2010) concluded in their Abstract: ‘‘. . . 
our estimates of relative survivorship of young browsable aspen indicate that aspen are not currently recovering in Yellowstone, even in the presence of a large wolf population.’’ In the interest of clarifying the potential role of wolves on woody plant community dynamics in Yellowstone’s northern winter ranges, we offer several counterpoints to the conclusions of Kauffman et al. (2010). We do so by readdressing several tasks identified in their Introduction (p. 2744): (1) the history of aspen recruitment failure, (2) contemporary aspen recruitment, and (3) aspen recruitment and predation risk. Task 1 covers the period when wolves were absent from Yellowstone and tasks 2 and 3 focus on the period when wolves were again present. We also include some closing comments regarding trophic cascades and ecosystem recovery. 1. History of aspen recruitment failure.—Although records of wolf and elk populations in northern Yellowstone are fragmentary for the early 1900s, the Northern Range elk population averaged ~10,900 animals (7.3 elk/km²; Fig. 1A) as the last wolves were being removed in the mid 1920s. Soon thereafter increased browsing by elk of aspen and other woody species was noted in northern Yellowstone’s winter ranges (e.g., Rush 1932, Lovaas 1970). In an attempt to reduce the effects this large herbivore was having on vegetation, soils, and wildlife habitat in the Northern Range ...",
"title": ""
},
{
"docid": "2d822e022363b371f62a803d79029f09",
"text": "AIM\nTo explore the relationship between sources of stress and psychological burn-out and to consider the moderating and mediating role played sources of stress and different coping resources on burn-out.\n\n\nBACKGROUND\nMost research exploring sources of stress and coping in nursing students construes stress as psychological distress. Little research has considered those sources of stress likely to enhance well-being and, by implication, learning.\n\n\nMETHOD\nA questionnaire was administered to 171 final year nursing students. Questions were asked which measured sources of stress when rated as likely to contribute to distress (a hassle) and rated as likely to help one achieve (an uplift). Support, control, self-efficacy and coping style were also measured, along with their potential moderating and mediating effect on burn-out.\n\n\nFINDINGS\nThe sources of stress likely to lead to distress were more often predictors of well-being than sources of stress likely to lead to positive, eustress states. However, placement experience was an important source of stress likely to lead to eustress. Self-efficacy, dispositional control and support were other important predictors. Avoidance coping was the strongest predictor of burn-out and, even if used only occasionally, it can have an adverse effect on burn-out. Initiatives to promote support and self-efficacy are likely to have the more immediate benefits in enhancing student well-being.\n\n\nCONCLUSION\nNurse educators need to consider how course experiences contribute not just to potential distress but to eustress. How educators interact with their students and how they give feedback offers important opportunities to promote self-efficacy and provide valuable support. Peer support is a critical coping resource and can be bolstered through induction and through learning and teaching initiatives.",
"title": ""
},
{
"docid": "14b7c4f8a3fa7089247f1d4a26186c5d",
"text": "System Dynamics is often used for dealing with dynamically complex issues that are also uncertain. This paper reviews how uncertainty is dealt with in System Dynamics modeling, where uncertainties are located in models, which types of uncertainties are dealt with, and which levels of uncertainty could be handled. Shortcomings of System Dynamics and its practice in dealing with uncertainty are distilled from this review and reframed as opportunities. Potential opportunities for dealing with uncertainty in System Dynamics that are discussed here include (i) dealing explicitly with difficult sorts of uncertainties, (ii) using multi-model approaches for dealing with alternative assumptions and multiple perspectives, (iii) clearly distinguishing sensitivity analysis from uncertainty analysis and using them for different purposes, (iv) moving beyond invariant model boundaries, (v) using multi-method approaches, advanced techniques and new tools, and (vi) further developing and using System Dynamics strands for dealing with deep uncertainty.",
"title": ""
},
{
"docid": "8582c4a040e4dec8fd141b00eaa45898",
"text": "Emerging airborne networks require domainspecific routing protocols to cope with the challenges faced by the highly-dynamic aeronautical environment. We present an ns-3 based performance comparison of the AeroRP protocol with conventional MANET routing protocols. To simulate a highly-dynamic airborne network, accurate mobility models are needed for the physical movement of nodes. The fundamental problem with many synthetic mobility models is their random, memoryless behavior. Airborne ad hoc networks require a flexible memory-based 3-dimensional mobility model. Therefore, we have implemented a 3-dimensional Gauss-Markov mobility model in ns-3 that appears to be more realistic than memoryless models such as random waypoint and random walk. Using this model, we are able to simulate the airborne networking environment with greater realism than was previously possible and show that AeroRP has several advantages over other MANET routing protocols.",
"title": ""
},
{
"docid": "dc2c952b5864a167c19b34be6db52389",
"text": "Data mining is popularly used to combat frauds because of its effectiveness. It is a well-defined procedure that takes data as input and produces models or patterns as output. Neural network, a data mining technique was used in this study. The design of the neural network (NN) architecture for the credit card detection system was based on unsupervised method, which was applied to the transactions data to generate four clusters of low, high, risky and high-risk clusters. The self-organizing map neural network (SOMNN) technique was used for solving the problem of carrying out optimal classification of each transaction into its associated group, since a prior output is unknown. The receiver-operating curve (ROC) for credit card fraud (CCF) detection watch detected over 95% of fraud cases without causing false alarms unlike other statistical models and the two-stage clusters. This shows that the performance of CCF detection watch is in agreement with other detection software, but performs better.",
"title": ""
},
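The fraud-detection passage above relies on a self-organizing map to place transactions into four clusters without labels. Below is a minimal sketch of that unsupervised step, assuming a toy two-feature transaction set, a 1-D map of four units, and training parameters chosen for illustration only; none of this comes from the paper.

```python
# Hedged sketch: a tiny self-organizing map (SOM) grouping transaction vectors into 4 clusters.
import numpy as np

def train_som(data, n_units=4, epochs=200, lr0=0.5, seed=0):
    rng = np.random.default_rng(seed)
    weights = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                              # decaying learning rate
        sigma = max(n_units / 2.0 * (1.0 - epoch / epochs), 0.5)       # shrinking neighbourhood
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))       # best-matching unit
            dist = np.abs(np.arange(n_units) - bmu)                    # 1-D grid distance
            h = np.exp(-(dist ** 2) / (2 * sigma ** 2))                # neighbourhood function
            weights += lr * h[:, None] * (x - weights)
    return weights

def assign(data, weights):
    return np.argmin(np.linalg.norm(data[:, None, :] - weights[None], axis=2), axis=1)

if __name__ == "__main__":
    # purely synthetic transactions: (amount, frequency)
    rng = np.random.default_rng(1)
    txns = np.vstack([rng.normal(loc, 0.3, size=(50, 2))
                      for loc in ([1, 1], [3, 1], [1, 4], [4, 4])])
    w = train_som(txns)
    print(assign(txns, w)[:10])   # cluster index per transaction
```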
{
"docid": "aaf1aac789547c1bf2f918368b43c955",
"text": "Music is full of structure, including sections, sequences of distinct musical textures, and the repetition of phrases or entire sections. The analysis of music audio relies upon feature vectors that convey information about music texture or pitch content. Texture generally refers to the average spectral shape and statistical fluctuation, often reflecting the set of sounding instruments, e.g. strings, vocal, or drums. Pitch content reflects melody and harmony, which is often independent of texture. Structure is found in several ways. Segment boundaries can be detected by observing marked changes in locally averaged texture. Similar sections of music can be detected by clustering segments with similar average textures. The repetition of a sequence of music often marks a logical segment. Repeated phrases and hierarchical structures can be discovered by finding similar sequences of feature vectors within a piece of music. Structure analysis can be used to construct music summaries and to assist music browsing. Introduction Probably everyone would agree that music has structure, but most of the interesting musical information that we perceive lies hidden below the complex surface of the audio signal. From this signal, human listeners perceive vocal and instrumental lines, orchestration, rhythm, harmony, bass lines, and other features. Unfortunately, music audio signals have resisted our attempts to extract this kind of information. Researchers are making progress, but so far, computers have not come near to human levels of performance in detecting notes, processing rhythms, or identifying instruments in a typical (polyphonic) music audio texture. On a longer time scale, listeners can hear structure including the chorus and verse in songs, sections in other types of music, repetition, and other patterns. One might think that without the reliable detection and identification of short-term features such as notes and their sources, that it would be impossible to deduce any information whatsoever about even higher levels of abstraction. Surprisingly, it is possible to automatically detect a great deal of information concerning music structure. For example, it is possible to label the structure of a song as AABA, meaning that opening material (the “A” part) is repeated once, then contrasting material (the “B” part) is played, and then the opening material is played again at the end. This structural description may be deduced from low-level audio signals. Consequently, a computer might locate the “chorus” of a song without having any representation of the melody or rhythm that characterizes the chorus. Underlying almost all work in this area is the concept that structure is induced by the repetition of similar material. This is in contrast to, say, speech recognition, where there is a common understanding of words, their structure, and their meaning. A string of unique words can be understood using prior knowledge of the language. Music, however, has no language or dictionary (although there are certainly known forms and conventions). In general, structure can only arise in music through repetition or systematic transformations of some kind. Repetition implies there is some notion of similarity. Similarity can exist between two points in time (or at least two very short time intervals), similarity can exist between two sequences over longer time intervals, and similarity can exist between the longer-term statistical behaviors of acoustical features. 
Different approaches to similarity will be described. Similarity can be used to segment music: contiguous regions of similar music can be grouped together into segments. Segments can then be grouped into clusters. The segmentation of a musical work and the grouping of these segments into clusters is a form of analysis or “explanation” of the music. R. Dannenberg and M. Goto Music Structure 16 April 2005 2 Features and Similarity Measures A variety of approaches are used to measure similarity, but it should be clear that a direct comparison of the waveform data or individual samples will not be useful. Large differences in waveforms can be imperceptible, so we need to derive features of waveform data that are more perceptually meaningful and compare these features with an appropriate measure of similarity. Feature Vectors for Spectrum, Texture, and Pitch Different features emphasize different aspects of the music. For example, mel-frequency cepstral coefficients (MFCCs) seem to work well when the general shape of the spectrum but not necessarily pitch information is important. MFCCs generally capture overall “texture” or timbral information (what instruments are playing in what general pitch range), but some pitch information is captured, and results depend upon the number of coefficients used as well as the underlying musical signal. When pitch is important, e.g. when searching for similar harmonic sequences, the chromagram is effective. The chromagram is based on the idea that tones separated by octaves have the same perceived value of chroma (Shepard 1964). Just as we can describe the chroma aspect of pitch, the short term frequency spectrum can be restructured into the chroma spectrum by combining energy at different octaves into just one octave. The chroma vector is a discretized version of the chroma spectrum where energy is summed into 12 log-spaced divisions of the octave corresponding to pitch classes (C, C#, D, ... B). By analogy to the spectrogram, the discrete chromagram is a sequence of chroma vectors. It should be noted that there are several variations of the chromagram. The computation typically begins with a short-term Fourier transform (STFT) which is used to compute the magnitude spectrum. There are different ways to “project” this onto the 12-element chroma vector. Each STFT bin can be mapped directly to the most appropriate chroma vector element (Bartsch and Wakefield 2001), or the STFT bin data can be interpolated or windowed to divide the bin value among two neighboring vector elements (Goto 2003a). Log magnitude values can be used to emphasize the presence of low-energy harmonics. Values can also be averaged, summed, or the vector can be computed to conserve the total energy. The chromagram can also be computed by using the Wavelet transform. Regardless of the exact details, the primary attraction of the chroma vector is that, by ignoring octaves, the vector is relatively insensitive to overall spectral energy distribution and thus to timbral variations. However, since fundamental frequencies and lower harmonics of tones feature prominently in the calculation of the chroma vector, it is quite sensitive to pitch class content, making it ideal for the detection of similar harmonic sequences in music. While MFCCs and chroma vectors can be calculated from a single short term Fourier transform, features can also be obtained from longer sequences of spectral frames. Tzanetakis and Cook (1999) use means and variances of a variety of features in a one second window. 
The features include the spectral centroid, spectral rolloff, spectral flux, and RMS energy. Peeters, La Burthe, and Rodet (2002) describe “dynamic” features, which model the variation of the short term spectrum over windows of about one second. In this approach, the audio signal is passed through a bank of Mel filters. The time-varying magnitudes of these filter outputs are each analyzed by a short term Fourier transform. The resulting set of features, the Fourier coefficients from each Mel filter output, is large, so a supervised learning scheme is used to find features that maximize the mutual information between feature values and hand-labeled music structures. Measures of Similarity Given a feature vector such as the MFCC or chroma vector, some measure of similarity is needed. One possibility is to compute the (dis)similarity using the Euclidean distance between feature vectors. Euclidean distance will be dependent upon feature magnitude, which is often a measure of the overall R. Dannenberg and M. Goto Music Structure 16 April 2005 3 music signal energy. To avoid giving more weight to the louder moments of music, feature vectors can be normalized, for example, to a mean of zero and a standard deviation of one or to a maximum element of one. Alternatively, similarity can be measured using the scalar (dot) product of the feature vectors. This measure will be larger when feature vectors have a similar direction. As with Euclidean distance, the scalar product will also vary as a function of the overall magnitude of the feature vectors. If the dot product is normalized by the feature vector magnitudes, the result is equal to the cosine of the angle between the vectors. If the feature vectors are first normalized to have a mean of zero, the cosine angle is equivalent to the correlation, another measure that has been used with success. Lu, Wang, and Zhang (Lu, Wang, and Zhang 2004) use a constant-Q transform (CQT), and found that CQT outperforms chroma and MFCC features using a cosine distance measure. They also introduce a “structure-based” distance measure that takes into account the harmonic structure of spectra to emphasize pitch similarity over timbral similarity, resulting in additional improvement in a music structure analysis task. Similarity can be calculated between individual feature vectors, as suggested above, but similarity can also be computed over a window of feature vectors. The measure suggested by Foote (1999) is vector correlation:",
"title": ""
},
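The music-structure passage above describes folding an STFT magnitude spectrum into a 12-element chroma vector and comparing feature frames with measures such as the cosine of the angle between vectors. Below is a minimal sketch of both steps; the sample rate, FFT size and A4 reference pitch are illustrative assumptions, not values taken from the text.

```python
# Hedged sketch: chroma vector from an STFT frame, plus cosine similarity between frames.
import numpy as np

def chroma_vector(frame, sr=22050, n_fft=4096, fmin=55.0):
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)), n_fft))
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
    chroma = np.zeros(12)
    for f, mag in zip(freqs, spec):
        if f < fmin:
            continue
        pitch_class = int(round(12 * np.log2(f / 440.0))) % 12   # fold octaves into 12 classes
        chroma[pitch_class] += mag
    return chroma

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

if __name__ == "__main__":
    sr = 22050
    t = np.arange(4096) / sr
    a4 = np.sin(2 * np.pi * 440 * t)       # A4 tone
    a5 = np.sin(2 * np.pi * 880 * t)       # same pitch class, one octave up
    print(cosine_similarity(chroma_vector(a4), chroma_vector(a5)))  # close to 1.0
```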
{
"docid": "fe012505cc7a2ea36de01fc92924a01a",
"text": "The wide usage of Machine Learning (ML) has lead to research on the attack vectors and vulnerability of these systems. The defenses in this area are however still an open problem, and often lead to an arms race. We define a naive, secure classifier at test time and show that a Gaussian Process (GP) is an instance of this classifier given two assumptions: one concerns the distances in the training data, the other rejection at test time. Using these assumptions, we are able to show that a classifier is either secure, or generalizes and thus learns. Our analysis also points towards another factor influencing robustness, the curvature of the classifier. This connection is not unknown for linear models, but GP offer an ideal framework to study this relationship for nonlinear classifiers. We evaluate on five security and two computer vision datasets applying test and training time attacks and membership inference. We show that we only change which attacks are needed to succeed, instead of alleviating the threat. Only for membership inference, there is a setting in which attacks are unsuccessful (< 10% increase in accuracy over random guess). Given these results, we define a classification scheme based on voting, ParGP. This allows us to decide how many points vote and how large the agreement on a class has to be. This ensures a classification output only in cases when there is evidence for a decision, where evidence is parametrized. We evaluate this scheme and obtain promising results.",
"title": ""
},
{
"docid": "1fa6ee7cf37d60c182aa7281bd333649",
"text": "To cope with the explosion of information in mathematics and physics, we need a unified mathematical language to integrate ideas and results from diverse fields. Clifford Algebra provides the key to a unifled Geometric Calculus for expressing, developing, integrating and applying the large body of geometrical ideas running through mathematics and physics.",
"title": ""
},
{
"docid": "1b4019d0f2eb9e392b5dfeea8370b625",
"text": "Intellectual capital is becoming the preeminent resource for creating economic wealth. Tangible assets such as property, plant, and equipment continue to be important factors in the production of both goods and services. However, their relative importance has decreased through time as the importance of intangible, knowledge-based assets has increased. This shift in importance has raised a number of accounting questions critical for managing assets such as brand names, trade secrets, production processes, distribution channels, and work-related competencies. This paper develops a working definition of intellectual capital and a framework for identifying and classifying the various components of intellectual capital. In addition, methods of measuring intellectual capital at both the individual-component and organization levels are presented. This provides an exploratory foundation for accounting systems and processes useful for meaningful management of intellectual assets. INTELLECTUAL CAPITAL AND ITS MEASUREMENT",
"title": ""
},
{
"docid": "fcd30a667cb2f4e89d9174cc37ac698c",
"text": "v TABLE OF CONTENTS vii",
"title": ""
},
{
"docid": "4d91ac570bec700f78521754c7e5d0ce",
"text": "Computer-aided diagnosis (CAD) has become one of the major research subjects in medical imaging and diagnostic radiology. The basic concept of CAD is to provide a computer output as a second opinion to assist radiologists' image interpretation by improving the accuracy and consistency of radiological diagnosis and also by reducing the image reading time. In this article, a number of CAD schemes are presented, with emphasis on potential clinical applications. These schemes include: (1) detection and classification of lung nodules on digital chest radiographs; (2) detection of nodules in low dose CT; (3) distinction between benign and malignant nodules on high resolution CT; (4) usefulness of similar images for distinction between benign and malignant lesions; (5) quantitative analysis of diffuse lung diseases on high resolution CT; and (6) detection of intracranial aneurysms in magnetic resonance angiography. Because CAD can be applied to all imaging modalities, all body parts and all kinds of examinations, it is likely that CAD will have a major impact on medical imaging and diagnostic radiology in the 21st century.",
"title": ""
},
{
"docid": "6d882c210047b3851cb0514083cf448e",
"text": "Child sexual abuse is a serious global problem and has gained public attention in recent years. Due to the popularity of digital cameras, many perpetrators take images of their sexual activities with child victims. Traditionally, it was difficult to use cutaneous vascular patterns for forensic identification, because they were nearly invisible in color images. Recently, this limitation was overcome using a computational method based on an optical model to uncover vein patterns from color images for forensic verification. This optical-based vein uncovering (OBVU) method is sensitive to the power of the illuminant and does not utilize skin color in images to obtain training parameters to optimize the vein uncovering performance. Prior publications have not included an automatic vein matching algorithm for forensic identification. As a result, the OBVU method only supported manual verification. In this paper, we propose two new schemes to overcome limitations in the OBVU method. Specifically, a color optimization scheme is used to derive the range of biophysical parameters to obtain training parameters and an automatic intensity adjustment scheme is used to enhance the robustness of the vein uncovering algorithm. We also developed an automatic matching algorithm for vein identification. This algorithm can handle rigid and non-rigid deformations and has an explicit pruning function to remove outliers in vein patterns. The proposed algorithms were examined on a database with 300 pairs of color and near infrared (NIR) images collected from the forearms of 150 subjects. The experimental results are encouraging and indicate that the proposed vein uncovering algorithm performs better than the OBVU method and that the uncovered patterns can potentially be used for automatic criminal and victim identification.",
"title": ""
},
{
"docid": "8f7d2c365f6272a7e681a48b500299c7",
"text": "In today's world, opinions and reviews accessible to us are one of the most critical factors in formulating our views and influencing the success of a brand, product or service. With the advent and growth of social media in the world, stakeholders often take to expressing their opinions on popular social media, namely Twitter. While Twitter data is extremely informative, it presents a challenge for analysis because of its humongous and disorganized nature. This paper is a thorough effort to dive into the novel domain of performing sentiment analysis of people's opinions regarding top colleges in India. Besides taking additional preprocessing measures like the expansion of net lingo and removal of duplicate tweets, a probabilistic model based on Bayes' theorem was used for spelling correction, which is overlooked in other research studies. This paper also highlights a comparison between the results obtained by exploiting the following machine learning algorithms: Naïve Bayes and Support Vector Machine and an Artificial Neural Network model: Multilayer Perceptron. Furthermore, a contrast has been presented between four different kernels of SVM: RBF, linear, polynomial and sigmoid.",
"title": ""
},
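The passage above mentions a Bayes-theorem-based probabilistic model for spelling correction as part of the tweet preprocessing. Below is a minimal Norvig-style sketch of that idea (pick the candidate maximizing prior word probability under a uniform error model over edit-distance-1 candidates); the tiny word-frequency table and the uniform error model are stated assumptions, not the paper's setup.

```python
# Hedged sketch: Bayes-rule spelling correction with edit-distance-1 candidates.
from collections import Counter
import string

WORD_FREQ = Counter({"college": 120, "placement": 80, "course": 60, "campus": 50})  # toy prior

def edits1(word):
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    return set(deletes + replaces + inserts + transposes)

def correct(word):
    if word in WORD_FREQ:
        return word
    candidates = [w for w in edits1(word) if w in WORD_FREQ] or [word]
    return max(candidates, key=lambda w: WORD_FREQ[w])   # uniform error model assumed

if __name__ == "__main__":
    print(correct("colege"), correct("campas"))   # -> college campus
```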
{
"docid": "98ca1c0100115646bb14a00f19c611a5",
"text": "The interconnected nature of graphs often results in difficult to interpret clutter. Typically techniques focus on either decluttering by clustering nodes with similar properties or grouping edges with similar relationship. We propose using mapper, a powerful topological data analysis tool, to summarize the structure of a graph in a way that both clusters data with similar properties and preserves relationships. Typically, mapper operates on a given data by utilizing a scalar function defined on every point in the data and a cover for scalar function codomain. The output of mapper is a graph that summarize the shape of the space. In this paper, we outline how to use this mapper construction on an input graphs, outline three filter functions that capture important structures of the input graph, and provide an interface for interactively modifying the cover. To validate our approach, we conduct several case studies on synthetic and real world data sets and demonstrate how our method can give meaningful summaries for graphs with various",
"title": ""
},
{
"docid": "8410b8b76ab690ed4389efae15608d13",
"text": "The most natural way to speed-up the training of large networks is to use dataparallelism on multiple GPUs. To scale Stochastic Gradient (SG) based methods to more processors, one need to increase the batch size to make full use of the computational power of each GPU. However, keeping the accuracy of network with increase of batch size is not trivial. Currently, the state-of-the art method is to increase Learning Rate (LR) proportional to the batch size, and use special learning rate with \"warm-up\" policy to overcome initial optimization difficulty. By controlling the LR during the training process, one can efficiently use largebatch in ImageNet training. For example, Batch-1024 for AlexNet and Batch-8192 for ResNet-50 are successful applications. However, for ImageNet-1k training, state-of-the-art AlexNet only scales the batch size to 1024 and ResNet50 only scales it to 8192. The reason is that we can not scale the learning rate to a large value. To enable large-batch training to general networks or datasets, we propose Layer-wise Adaptive Rate Scaling (LARS). LARS LR uses different LRs for different layers based on the norm of the weights (||w||) and the norm of the gradients (||∇w||). By using LARS algoirithm, we can scale the batch size to 32768 for ResNet50 and 8192 for AlexNet. Large batch can make full use of the system’s computational power. For example, batch-4096 can achieve 3× speedup over batch-512 for ImageNet training by AlexNet model on a DGX-1 station (8 P100 GPUs).",
"title": ""
},
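The LARS passage above derives a layer-wise learning rate from the ratio ||w|| / ||∇w||. Below is a minimal sketch of one such update; the trust coefficient, weight decay and plain SGD wrapper are illustrative assumptions rather than the paper's exact recipe.

```python
# Hedged sketch: one layer-wise adaptive rate scaling (LARS) style update.
import numpy as np

def lars_update(weights, grads, base_lr=0.1, trust_coeff=0.001, weight_decay=1e-4, eps=1e-9):
    """Apply one LARS-style step in place; weights/grads are lists of arrays, one per layer."""
    for w, g in zip(weights, grads):
        g_eff = g + weight_decay * w
        w_norm = np.linalg.norm(w)
        g_norm = np.linalg.norm(g_eff)
        local_lr = trust_coeff * w_norm / (g_norm + eps)   # layer-wise adaptive rate ||w||/||grad||
        w -= base_lr * local_lr * g_eff

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = [rng.normal(size=(4, 3)), rng.normal(size=(3,))]
    grads = [rng.normal(size=(4, 3)), rng.normal(size=(3,))]
    lars_update(weights, grads)
    print([float(np.linalg.norm(w)) for w in weights])
```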
{
"docid": "fc12ac921348a77714bff6ec39b0e052",
"text": "For decades, nurses (RNs) have identified barriers to providing the optimal pain management that children deserve; yet no studies were found in the literature that assessed these barriers over time or across multiple pediatric hospitals. The purpose of this study was to reassess barriers that pediatric RNs perceive, and how they describe optimal pain management, 3 years after our initial assessment, collect quantitative data regarding barriers identified through comments during our initial assessment, and describe any changes over time. The Modified Barriers to Optimal Pain Management survey was used to measure barriers in both studies. RNs were invited via e-mail to complete an electronic survey. Descriptive and inferential statistics were used to compare results over time. Four hundred forty-two RNs responded, representing a 38% response rate. RNs continue to describe optimal pain management most often in terms of patient comfort and level of functioning. While small changes were seen for several of the barriers, the most significant barriers continued to involve delays in the availability of medications, insufficient physician medication orders, and insufficient orders and time allowed to pre-medicate patients before procedures. To our knowledge, this is the first study to reassess RNs' perceptions of barriers to pediatric pain management over time. While little change was seen in RNs' descriptions of optimal pain management or in RNs' perceptions of barriers, no single item was rated as more than a moderate barrier to pain management. The implications of these findings are discussed in the context of improvement strategies.",
"title": ""
}
] |
scidocsrr
|
9518633a2bcfadd2191f5cde5e44c86b
|
Detecting Nastiness in Social Media
|
[
{
"docid": "e6cae5bec5bb4b82794caca85d3412a2",
"text": "Detection of abusive language in user generated online content has become an issue of increasing importance in recent years. Most current commercial methods make use of blacklists and regular expressions, however these measures fall short when contending with more subtle, less ham-fisted examples of hate speech. In this work, we develop a machine learning based method to detect hate speech on online user comments from two domains which outperforms a state-ofthe-art deep learning approach. We also develop a corpus of user comments annotated for abusive language, the first of its kind. Finally, we use our detection tool to analyze abusive language over time and in different settings to further enhance our knowledge of this behavior.",
"title": ""
},
{
"docid": "f6df133663ab4342222d95a20cd09996",
"text": "Web 2.0 has led to the development and evolution of web-based communities and applications. These communities provide places for information sharing and collaboration. They also open the door for inappropriate online activities, such as harassment, in which some users post messages in a virtual community that are intentionally offensive to other members of the community. It is a new and challenging task to detect online harassment; currently few systems attempt to solve this problem. In this paper, we use a supervised learning approach for detecting harassment. Our technique employs content features, sentiment features, and contextual features of documents. The experimental results described herein show that our method achieves significant improvements over several baselines, including Term FrequencyInverse Document Frequency (TFIDF) approaches. Identification of online harassment is feasible when TFIDF is supplemented with sentiment and contextual feature attributes.",
"title": ""
}
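The harassment-detection passage above combines TF-IDF content features with sentiment and contextual features in a supervised learner. Below is a minimal sketch of that kind of pipeline, assuming a toy corpus, a lexicon-based sentiment score and logistic regression; none of these choices are claimed to match the paper's actual features or classifier.

```python
# Hedged sketch: TF-IDF plus a simple sentiment feature, fed to a linear classifier.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

NEGATIVE_WORDS = {"stupid", "idiot", "hate", "ugly"}   # toy sentiment lexicon

def sentiment_score(text):
    tokens = text.lower().split()
    return sum(tok in NEGATIVE_WORDS for tok in tokens) / max(len(tokens), 1)

docs = ["you are a stupid idiot", "great point, thanks for sharing",
        "i hate you, go away", "interesting thread, well argued"]
labels = [1, 0, 1, 0]   # 1 = harassing, 0 = benign (toy labels)

tfidf = TfidfVectorizer()
X_text = tfidf.fit_transform(docs)
X_sent = csr_matrix(np.array([[sentiment_score(d)] for d in docs]))
X = hstack([X_text, X_sent])          # content + sentiment feature columns

clf = LogisticRegression().fit(X, labels)
query = "you idiot"
print(clf.predict(hstack([tfidf.transform([query]),
                          csr_matrix([[sentiment_score(query)]])])))
```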
] |
[
{
"docid": "23ff4a40f9a62c8a26f3cc3f8025113d",
"text": "In the early ages of implantable devices, radio frequency (RF) technologies were not commonplace due to the challenges stemming from the inherent nature of biological tissue boundaries. As technology improved and our understanding matured, the benefit of RF in biomedical applications surpassed the implementation challenges and is thus becoming more widespread. The fundamental challenge is due to the significant electromagnetic (EM) effects of the body at high frequencies. The EM absorption and impedance boundaries of biological tissue result in significant reduction of power and signal integrity for transcutaneous propagation of RF fields. Furthermore, the dielectric properties of the body tissue surrounding the implant must be accounted for in the design of its RF components, such as antennas and inductors, and the tissue is often heterogeneous and the properties are highly variable. Additional challenges for implantable applications include the need for miniaturization, power minimization, and often accounting for a conductive casing due to biocompatibility and hermeticity requirements [1]?[3]. Today, wireless technologies are essentially a must have in most electrical implants due to the need to communicate with the device and even transfer usable energy to the implant [4], [5]. Low-frequency wireless technologies face fewer challenges in this implantable setting than its higher frequency, or RF, counterpart, but are limited to much lower communication speeds and typically have a very limited operating distance. The benefits of high-speed communication and much greater communication distances in biomedical applications have spawned numerous wireless standards committees, and the U.S. Federal Communications Commission (FCC) has allocated numerous frequency bands for medical telemetry as well as those to specifically target implantable applications. The development of analytical models, advanced EM simulation software, and representative RF human phantom recipes has significantly facilitated design and optimization of RF components for implantable applications.",
"title": ""
},
{
"docid": "990fb61d1135b05f88ae02eb71a6983f",
"text": "Previous efforts in recommendation of candidates for talent search followed the general pattern of receiving an initial search criteria and generating a set of candidates utilizing a pre-trained model. Traditionally, the generated recommendations are final, that is, the list of potential candidates is not modified unless the user explicitly changes his/her search criteria. In this paper, we are proposing a candidate recommendation model which takes into account the immediate feedback of the user, and updates the candidate recommendations at each step. This setting also allows for very uninformative initial search queries, since we pinpoint the user's intent due to the feedback during the search session. To achieve our goal, we employ an intent clustering method based on topic modeling which separates the candidate space into meaningful, possibly overlapping, subsets (which we call intent clusters) for each position. On top of the candidate segments, we apply a multi-armed bandit approach to choose which intent cluster is more appropriate for the current session. We also present an online learning scheme which updates the intent clusters within the session, due to user feedback, to achieve further personalization. Our offline experiments as well as the results from the online deployment of our solution demonstrate the benefits of our proposed methodology.",
"title": ""
},
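The talent-search passage above treats each intent cluster as an arm of a multi-armed bandit driven by immediate user feedback within a session. Below is a minimal epsilon-greedy sketch of that loop, with simulated click feedback standing in for real user actions; the epsilon schedule and reward simulation are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch: epsilon-greedy bandit over intent clusters with running-mean feedback.
import random

class EpsilonGreedyBandit:
    def __init__(self, n_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms     # pulls per intent cluster
        self.values = [0.0] * n_arms   # running mean feedback per cluster

    def select(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))                        # explore
        return max(range(len(self.counts)), key=lambda a: self.values[a])    # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

if __name__ == "__main__":
    random.seed(0)
    true_click_rates = [0.2, 0.5, 0.8]      # hidden quality of three intent clusters
    bandit = EpsilonGreedyBandit(n_arms=3)
    for _ in range(2000):
        arm = bandit.select()
        reward = 1.0 if random.random() < true_click_rates[arm] else 0.0
        bandit.update(arm, reward)
    print(bandit.values)   # should roughly recover the click rates
```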
{
"docid": "880d6636a2939ee232da5c293f29ae44",
"text": "BACKGROUND\nMicrocannulas with blunt tips for filler injections have recently been developed for use with dermal fillers. Their utility, ease of use, cosmetic outcomes, perceived pain, and satisfaction ratings amongst patients in terms of comfort and aesthetic outcomes when compared to sharp hypodermic needles has not previously been investigated.\n\n\nOBJECTIVE\nTo compare injections of filler with microcannulas versus hypodermic needles in terms of ease of use, amount of filler required to achieve desired aesthetic outcome, perceived pain by patient, adverse events such as bleeding and bruising and to demonstrate the advantages of single-port injection technique with the blunt-tip microcannula.\n\n\nMATERIALS AND METHODS\nNinety-five patients aged 30 to 76 years with a desire to augment facial, décolleté, and hand features were enrolled in the study. Subjects were recruited in a consecutive manner from patients interested in receiving dermal filler augmentation. Each site was cleaned with alcohol before injection. Anesthesia was obtained with a topical anesthesia peel off mask of lidocaine/tetracaine. Cross-linked hyaluronic acid (20 mg to 28 mg per mL) was injected into the mid-dermis. The microcannula or a hypodermic needle was inserted the entire length of the fold, depression or lip and the filler was injected in a linear retrograde fashion. The volume injected was variable, depending on the depth and the extent of the defect. The injecting physician assessed the ease of injection. Subjects used the Visual Analog Scale (0-10) for pain assessment. Clinical efficacy was assessed by the patients and the investigators immediately after injection, and at one and six months after injection using the Global Aesthetic Improvement Scale (GAIS) and digital photography.\n\n\nRESULTS\nOverall, the Global Aesthetic Improvements Scale (GAIS) results were excellent (55%), moderate (35%), and somewhat improved (10%) one month after the procedure, decreasing to 23%, 44%, and 33%, respectively, at the six month evaluation. There was no significant differences in the GAIS score between the microcannula and the hypodermic needle. However, the Visual Analog Scale for pain assessment during the injections was quite different. The pain was described as 3 (mild) for injections with the microcannula, increasing to 6 (moderate) for injections with the hypodermic needle. Bruising and ecchymosis was more marked following use of the hypodermic needle.\n\n\nCONCLUSION\nUsing the blunt-tip microcannula as an alternative to the hypodermic needles has simplified filler injections and produced less bruising, echymosis, and pain with faster recovery.",
"title": ""
},
{
"docid": "bfc85b95287e4abc2308849294384d1e",
"text": "& 10 0 YE A RS A G O 50 YEARS AGO A Congress was held in Singapore during December 2–9 to celebrate “the Centenary of the formulation of the theory of Evolution by Charles Darwin and Alfred Russel Wallace and the Bicentenary of the publication of the tenth edition of the ‘Systema Naturae’ by Linnaeus”. It was particularly fitting that this Congress should have been held in Singapore for ... it directed special attention to the work of Wallace, who was one of the greatest biologists ever to have worked in south-east Asia ... Prof. Haldane then delivered his presidential address ... The president emphasised the stimuli gained by Linnaeus, Darwin and Wallace through working in peripheral areas where lack of knowledge was a challenge. He suggested that the next major biological advance may well come for similar reasons from peripheral places such as Singapore, or Calcutta, where this challenge still remains and where the lack of complex scientific apparatus drives biologists into different and long-neglected fields of research. From Nature 14 March 1959.",
"title": ""
},
{
"docid": "e3e75689d9425ea04db2de83bbfc9102",
"text": "Recently, with the advent of location-based social networking services (LBSNs), travel planning and location-aware information recommendation based on LBSNs have attracted much research attention. In this paper, we study the impact of social relations hidden in LBSNs, i.e., The social influence of friends. We propose a new social influence-based user recommender framework (SIR) to discover the potential value from reliable users (i.e., Close friends and travel experts). Explicitly, our SIR framework is able to infer influential users from an LBSN. We claim to capture the interactions among virtual communities, physical mobility activities and time effects to infer the social influence between user pairs. Furthermore, we intend to model the propagation of influence using diffusion-based mechanism. Moreover, we have designed a dynamic fusion framework to integrate the features mined into a united follow probability score. Finally, our SIR framework provides personalized top-k user recommendations for individuals. To evaluate the recommendation results, we have conducted extensive experiments on real datasets (i.e., The Go Walla dataset). The experimental results show that the performance of our SIR framework is better than the state-of the-art user recommendation mechanisms in terms of accuracy and reliability.",
"title": ""
},
{
"docid": "2af36afd2440a4940873fef1703aab3f",
"text": "In recent years researchers have found that alternations in arterial or venular tree of the retinal vasculature are associated with several public health problems such as diabetic retinopathy which is also the leading cause of blindness in the world. A prerequisite for automated assessment of subtle changes in arteries and veins, is to accurately separate those vessels from each other. This is a difficult task due to high similarity between arteries and veins in addition to variation of color and non-uniform illumination inter and intra retinal images. In this paper a novel structural and automated method is presented for artery/vein classification of blood vessels in retinal images. The proposed method consists of three main steps. In the first step, several image enhancement techniques are employed to improve the images. Then a specific feature extraction process is applied to separate major arteries from veins. Indeed, vessels are divided to smaller segments and feature extraction and vessel classification are applied to each small vessel segment instead of each vessel point. Finally, a post processing step is added to improve the results obtained from the previous step using structural characteristics of the retinal vascular network. In the last stage, vessel features at intersection and bifurcation points are processed for detection of arterial and venular sub trees. Ultimately vessel labels are revised by publishing the dominant label through each identified connected tree of arteries or veins. Evaluation of the proposed approach against two different datasets of retinal images including DRIVE database demonstrates the good performance and robustness of the method. The proposed method may be used for determination of arteriolar to venular diameter ratio in retinal images. Also the proposed method potentially allows for further investigation of labels of thinner arteries and veins which might be found by tracing them back to the major vessels.",
"title": ""
},
{
"docid": "b9e765f42f3cf099ff3de0c7c00bddb4",
"text": "In general, meta-parameters in a reinforcement learning system, such as a learning rate and a discount rate, are empirically determined and fixed during learning. When an external environment is therefore changed, the sytem cannot adapt itself to the variation. Meanwhile, it is suggested that the biological brain might conduct reinforcement learning and adapt itself to the external environment by controlling neuromodulators corresponding to the meta-parameters. In the present paper, based on the above suggestion, a method to adjust metaparameters using a temporal difference (TD) error is proposed. Through various computer simulations using a maze search problem and an inverted pendulum control problem, it is verified that the proposed method could appropriately adjust meta-parameters according to the variation of the external environment.",
"title": ""
},
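The passage above proposes adjusting meta-parameters such as the learning rate from the temporal-difference error. Below is a minimal tabular TD(0) sketch in which the learning rate is nudged according to the magnitude of recent TD errors; the 1-D chain task and the specific adjustment rule are illustrative assumptions, not the method proposed in the paper.

```python
# Hedged sketch: TD(0) value learning with a TD-error-driven learning-rate adjustment.
import numpy as np

def td_learning_with_adaptive_alpha(n_states=10, episodes=200, gamma=0.95,
                                    alpha=0.5, meta_rate=0.01, seed=0):
    rng = np.random.default_rng(seed)
    V = np.zeros(n_states)
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            s_next = s + 1 if rng.random() < 0.9 else max(s - 1, 0)   # noisy right-drifting chain
            r = 1.0 if s_next == n_states - 1 else 0.0
            td_error = r + gamma * V[s_next] - V[s]
            V[s] += alpha * td_error
            # meta-adjustment: shrink alpha while TD errors stay large, grow it when they are small
            alpha = float(np.clip(alpha + meta_rate * (0.1 - abs(td_error)), 0.01, 1.0))
            s = s_next
    return V, alpha

if __name__ == "__main__":
    values, final_alpha = td_learning_with_adaptive_alpha()
    print(np.round(values, 2), round(final_alpha, 3))
```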
{
"docid": "8801d5a28a098e1879d60838c1c9f108",
"text": "On-line photo sharing services allow users to share their touristic experiences. Tourists can publish photos of interesting locations or monuments visited, and they can also share comments, annotations, and even the GPS traces of their visits. By analyzing such data, it is possible to turn colorful photos into metadata-rich trajectories through the points of interest present in a city. In this paper we propose a novel algorithm for the interactive generation of personalized recommendations of touristic places of interest based on the knowledge mined from photo albums and Wikipedia. The distinguishing features of our approach are multiple. First, the underlying recommendation model is built fully automatically in an unsupervised way and it can be easily extended with heterogeneous sources of information. Moreover, recommendations are personalized according to the places previously visited by the user. Finally, such personalized recommendations can be generated very efficiently even on-line from a mobile device.",
"title": ""
},
{
"docid": "a4a15096e116a6afc2730d1693b1c34f",
"text": "The present study reports on the construction of a dimensional measure of gender identity (gender dysphoria) for adolescents and adults. The 27-item gender identity/gender dysphoria questionnaire for adolescents and adults (GIDYQ-AA) was administered to 389 university students (heterosexual and nonheterosexual) and 73 clinic-referred patients with gender identity disorder. Principal axis factor analysis indicated that a one-factor solution, accounting for 61.3% of the total variance, best fits the data. Factor loadings were all >or= .30 (median, .82; range, .34-.96). A mean total score (Cronbach's alpha, .97) was computed, which showed strong evidence for discriminant validity in that the gender identity patients had significantly more gender dysphoria than both the heterosexual and nonheterosexual university students. Using a cut-point of 3.00, we found the sensitivity was 90.4% for the gender identity patients and specificity was 99.7% for the controls. The utility of the GIDYQ-AA is discussed.",
"title": ""
},
{
"docid": "71f388d3a2b50856c5529667df39602c",
"text": "Retrieving the stylus of a pen-based device takes time and requires a second hand. Especially for short intermittent interactions many users therefore choose to use their bare fingers. Although convenient, this increases targeting times and error rates. We argue that the main reasons are the occlusion of the target by the user's finger and ambiguity about which part of the finger defines the selection point. We propose a pointing technique we call Shift that is designed to address these issues. When the user touches the screen, Shift creates a callout showing a copy of the occluded screen area and places it in a non-occluded location. The callout also shows a pointer representing the selection point of the finger. Using this visual feedback, users guide the pointer into the target by moving their finger on the screen surface and commit the target acquisition by lifting the finger. Unlike existing techniques, Shift is only invoked when necessary--over large targets no callout is created and users enjoy the full performance of an unaltered touch screen. We report the results of a user study showing that with Shift participants can select small targets with much lower error rates than an unaided touch screen and that Shift is faster than Offset Cursor for larger targets.",
"title": ""
},
{
"docid": "8e23ef656b501814fc44c609feebe823",
"text": "This paper proposes an approach for segmentation and semantic labeling of RGBD data based on the joint usage of geometrical clues and deep learning techniques. An initial oversegmentation is performed using spectral clustering and a set of NURBS surfaces is then fitted on the extracted segments. The input data are then fed to a Convolutional Neural Network (CNN) together with surface fitting parameters. The network is made of nine convolutional stages followed by a softmax classifier and produces a per-pixel descriptor vector for each sample. An iterative merging procedure is then used to recombine the segments into the regions corresponding to the various objects and surfaces. The couples of adjacent segments with higher similarity according to the CNN features are considered for merging and the NURBS surface fitting accuracy is used in order to understand if the selected couples correspond to a single surface. By combining the obtained segmentation with the descriptors from the CNN a set of labeled segments is obtained. The comparison with state-of-the-art methods shows how the proposed method provides an accurate and reliable scene segmentation and labeling.",
"title": ""
},
{
"docid": "a83931702879dc41a3d7007ac4c32716",
"text": "We propose a query-based generative model for solving both tasks of question generation (QG) and question answering (QA). The model follows the classic encoderdecoder framework. The encoder takes a passage and a query as input then performs query understanding by matching the query with the passage from multiple perspectives. The decoder is an attention-based Long Short Term Memory (LSTM) model with copy and coverage mechanisms. In the QG task, a question is generated from the system given the passage and the target answer, whereas in the QA task, the answer is generated given the question and the passage. During the training stage, we leverage a policy-gradient reinforcement learning algorithm to overcome exposure bias, a major problem resulted from sequence learning with cross-entropy loss. For the QG task, our experiments show higher performances than the state-of-the-art results. When used as additional training data, the automatically generated questions even improve the performance of a strong extractive QA system. In addition, our model shows better performance than the state-of-the-art baselines of the generative QA task.",
"title": ""
},
{
"docid": "af740d54f1b6d168500934a089a1adc8",
"text": "Abstract In this paper, unsteady laminar flow around a circular cylinder has been studied. Navier-stokes equations solved by Simple C algorithm exerted to specified structured and unstructured grids. Equations solved by staggered method and discretization of those done by upwind method. The mean drag coefficient, lift coefficient and strouhal number are compared from current work at three different Reynolds numbers with experimental and numerical values.",
"title": ""
},
{
"docid": "41de353ad7e48d5f354893c6045394e2",
"text": "This paper proposes a long short-term memory recurrent neural network (LSTM-RNN) for extracting melody and simultaneously detecting regions of melody from polyphonic audio using the proposed harmonic sum loss. The previous state-of-the-art algorithms have not been based on machine learning techniques and certainly not on deep architectures. The harmonics structure in melody is incorporated in the loss function to attain robustness against both octave mismatch and interference from background music. Experimental results show that the performance of the proposed method is better than or comparable to other state-of-the-art algorithms.",
"title": ""
},
{
"docid": "0943667f7424875ea7a42dc7d0e422b4",
"text": "This paper introduces a novel concept of an air bearing test bench for CubeSat ground testing together with the corresponding dynamic parameter identification method. Contrary to existing air bearing test benches, the proposed concept allows three degree-of-freedom unlimited rotations and minimizes the influence of the test bench on the tested CubeSat. These advantages are made possible by the use of a robotic wrist which rotates air bearings in order to make them follow the CubeSat motion. Another keystone of the test bench is an accurate balancing of the tested CubeSat. Indeed, disturbing factors acting on the satellite shall be minimized, the most significant one being the gravity torque. An efficient balancing requires the CubeSat center of mass position to be accurately known. Usual techniques of dynamic parameter identification cannot be directly applied because of the frictionless suspension of the CubeSat in the test bench and, accordingly, due to the lack of external actuation. In this paper, a new identification method is proposed. This method does not require any external actuation and is based on the sampling of free oscillating motions of the CubeSat mounted on the test bench.",
"title": ""
},
{
"docid": "2ae96a524ba3b6c43ea6bfa112f71a30",
"text": "Accurate quantification of gluconeogenic flux following alcohol ingestion in overnight-fasted humans has yet to be reported. [2-13C1]glycerol, [U-13C6]glucose, [1-2H1]galactose, and acetaminophen were infused in normal men before and after the consumption of 48 g alcohol or a placebo to quantify gluconeogenesis, glycogenolysis, hepatic glucose production, and intrahepatic gluconeogenic precursor availability. Gluconeogenesis decreased 45% vs. the placebo (0.56 ± 0.05 to 0.44 ± 0.04 mg ⋅ kg-1 ⋅ min-1vs. 0.44 ± 0.05 to 0.63 ± 0.09 mg ⋅ kg-1 ⋅ min-1, respectively, P < 0.05) in the 5 h after alcohol ingestion, and total gluconeogenic flux was lower after alcohol compared with placebo. Glycogenolysis fell over time after both the alcohol and placebo cocktails, from 1.46-1.47 mg ⋅ kg-1 ⋅ min-1to 1.35 ± 0.17 mg ⋅ kg-1 ⋅ min-1(alcohol) and 1.26 ± 0.20 mg ⋅ kg-1 ⋅ min-1, respectively (placebo, P < 0.05 vs. baseline). Hepatic glucose output decreased 12% after alcohol consumption, from 2.03 ± 0.21 to 1.79 ± 0.21 mg ⋅ kg-1 ⋅ min-1( P < 0.05 vs. baseline), but did not change following the placebo. Estimated intrahepatic gluconeogenic precursor availability decreased 61% following alcohol consumption ( P < 0.05 vs. baseline) but was unchanged after the placebo ( P < 0.05 between treatments). We conclude from these results that gluconeogenesis is inhibited after alcohol consumption in overnight-fasted men, with a somewhat larger decrease in availability of gluconeogenic precursors but a smaller effect on glucose production and no effect on plasma glucose concentrations. Thus inhibition of flux into the gluconeogenic precursor pool is compensated by changes in glycogenolysis, the fate of triose-phosphates, and peripheral tissue utilization of plasma glucose.",
"title": ""
},
{
"docid": "1c4942dc3cccf7dc2424be450d9be143",
"text": "PURPOSE\nTo perform a large-scale systematic comparison of the accuracy of all commonly used perfusion computed tomography (CT) data postprocessing methods in the definition of infarct core and penumbra in acute stroke.\n\n\nMATERIALS AND METHODS\nThe collection of data for this study was approved by the institutional ethics committee, and all patients gave informed consent. Three hundred fourteen patients with hemispheric ischemia underwent perfusion CT within 6 hours of stroke symptom onset and magnetic resonance (MR) imaging at 24 hours. CT perfusion maps were generated by using six different postprocessing methods. Pixel-based analysis was used to calculate sensitivity and specificity of different perfusion CT thresholds for the penumbra and infarct core with each postprocessing method, and receiver operator characteristic (ROC) curves were plotted. Area under the ROC curve (AUC) analysis was used to define the optimum threshold.\n\n\nRESULTS\nDelay-corrected singular value deconvolution (SVD) with a delay time of more than 2 seconds most accurately defined the penumbra (AUC = 0.86, P = .046, mean volume difference between acute perfusion CT and 24-hour diffusion-weighted MR imaging = 1.7 mL). A double core threshold with a delay time of more than 2 seconds and cerebral blood flow less than 40% provided the most accurate definition of the infarct core (AUC = 0.86, P = .038). The other SVD measures (block circulant, nondelay corrected) were more accurate than non-SVD methods.\n\n\nCONCLUSION\nThis study has shown that there is marked variability in penumbra and infarct prediction among various deconvolution techniques and highlights the need for standardization of perfusion CT in stroke.\n\n\nSUPPLEMENTAL MATERIAL\nhttp://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.12120971/-/DC1.",
"title": ""
}
] |
scidocsrr
|
28ce6219b0284ea5fe22f5219f92a165
|
Competitive Data Trading in Wireless-Powered Internet of Things (IoT) Crowdsensing Systems with Blockchain
|
[
{
"docid": "87b7b05c6af2fddb00f7b1d3a60413c1",
"text": "Mobile crowdsensing (MCS) is a human-driven Internet of Things service empowering citizens to observe the phenomena of individual, community, or even societal value by sharing sensor data about their environment while on the move. Typical MCS service implementations utilize cloud-based centralized architectures, which consume a lot of computational resources and generate significant network traffic, both in mobile networks and toward cloud-based MCS services. Mobile edge computing (MEC) is a natural choice to distribute MCS solutions by moving computation to network edge, since an MEC-based architecture enables significant performance improvements due to the partitioning of problem space based on location, where real-time data processing and aggregation is performed close to data sources. This in turn reduces the associated traffic in mobile core and will facilitate MCS deployments of massive scale. This paper proposes an edge computing architecture adequate for massive scale MCS services by placing key MCS features within the reference MEC architecture. In addition to improved performance, the proposed architecture decreases privacy threats and permits citizens to control the flow of contributed sensor data. It is adequate for both data analytics and real-time MCS scenarios, in line with the 5G vision to integrate a huge number of devices and enable innovative applications requiring low network latency. Our analysis of service overhead introduced by distributed architecture and service reconfiguration at network edge performed on real user traces shows that this overhead is controllable and small compared with the aforementioned benefits. When enhanced by interoperability concepts, the proposed architecture creates an environment for the establishment of an MCS marketplace for bartering and trading of both raw sensor data and aggregated/processed information.",
"title": ""
},
{
"docid": "2c226c7be6acf725190c72a64bfcdf91",
"text": "The past decade has witnessed the rapid evolution in blockchain technologies, which has attracted tremendous interests from both the research communities and industries. The blockchain network was originated from the Internet financial sector as a decentralized, immutable ledger system for transactional data ordering. Nowadays, it is envisioned as a powerful backbone/framework for decentralized data processing and datadriven self-organization in flat, open-access networks. In particular, the plausible characteristics of decentralization, immutability and self-organization are primarily owing to the unique decentralized consensus mechanisms introduced by blockchain networks. This survey is motivated by the lack of a comprehensive literature review on the development of decentralized consensus mechanisms in blockchain networks. In this survey, we provide a systematic vision of the organization of blockchain networks. By emphasizing the unique characteristics of incentivized consensus in blockchain networks, our in-depth review of the state-ofthe-art consensus protocols is focused on both the perspective of distributed consensus system design and the perspective of incentive mechanism design. From a game-theoretic point of view, we also provide a thorough review on the strategy adoption for self-organization by the individual nodes in the blockchain backbone networks. Consequently, we provide a comprehensive survey on the emerging applications of the blockchain networks in a wide range of areas. We highlight our special interest in how the consensus mechanisms impact these applications. Finally, we discuss several open issues in the protocol design for blockchain consensus and the related potential research directions.",
"title": ""
}
] |
[
{
"docid": "bd44d77e255837497d5026e87a46548d",
"text": "Social media technologies let people connect by creating and sharing content. We examine the use of Twitter by famous people to conceptualize celebrity as a practice. On Twitter, celebrity is practiced through the appearance and performance of ‘backstage’ access. Celebrity practitioners reveal what appears to be personal information to create a sense of intimacy between participant and follower, publicly acknowledge fans, and use language and cultural references to create affiliations with followers. Interactions with other celebrity practitioners and personalities give the impression of candid, uncensored looks at the people behind the personas. But the indeterminate ‘authenticity’ of these performances appeals to some audiences, who enjoy the game playing intrinsic to gossip consumption. While celebrity practice is theoretically open to all, it is not an equalizer or democratizing discourse. Indeed, in order to successfully practice celebrity, fans must recognize the power differentials intrinsic to the relationship.",
"title": ""
},
{
"docid": "36a538b833de4415d12cd3aa5103cf9b",
"text": "Big data is an opportunity in the emergence of novel business applications such as “Big Data Analytics” (BDA). However, these data with non-traditional volumes create a real problem given the capacity constraints of traditional systems. The aim of this paper is to deal with the impact of big data in a decision-support environment and more particularly in the data integration phase. In this context, we developed a platform, called P-ETL (Parallel-ETL) for extracting (E), transforming (T) and loading (L) very large data in a data warehouse (DW). To cope with very large data, ETL processes under our P-ETL platform run on a cluster of computers in parallel way with MapReduce paradigm. The conducted experiment shows mainly that increasing tasks dealing with large data speeds-up the ETL process.",
"title": ""
},
{
"docid": "51c82ab631167a61e553e1ab8e34a385",
"text": "The social and political context of sexual identity development in the United States has changed dramatically since the mid twentieth century. Same-sex attracted individuals have long needed to reconcile their desire with policies of exclusion, ranging from explicit outlaws on same-sex activity to exclusion from major social institutions such as marriage. This paper focuses on the implications of political exclusion for the life course of individuals with same-sex desire through the analytic lens of narrative. Using illustrative evidence from a study of autobiographies of gay men spanning a 60-year period and a study of the life stories of contemporary same-sex attracted youth, we detail the implications of historic silence, exclusion, and subordination for the life course.",
"title": ""
},
{
"docid": "ee9730fa0fde945d70130bcf33960608",
"text": "An operational definition offered in this paper posits learning as a multi-dimensional and multi-phase phenomenon occurring when individuals attempt to solve what they view as a problem. To model someone’s learning accordingly to the definition, it suffices to characterize a particular sequence of that person’s disequilibrium–equilibrium phases in terms of products of a particular mental act, the characteristics of the mental act inferred from the products, and intellectual and psychological needs that instigate or result from these phases. The definition is illustrated by analysis of change occurring in three thinking-aloud interviews with one middle-school teacher. The interviews were about the same task: “Make up a word problem whose solution may be found by computing 4/5 divided by 2/3.” © 2010 Elsevier Inc. All rights reserved. An operational definition is a showing of something—such as a variable, term, or object—in terms of the specific process or set of validation tests used to determine its presence and quantity. Properties described in this manner must be publicly accessible so that persons other than the definer can independently measure or test for them at will. An operational definition is generally designed to model a conceptual definition (Wikipedia)",
"title": ""
},
{
"docid": "e3b7d2c4cd3e3d860db8d4751c9eed25",
"text": "While recommender systems tell users what items they might like, explanations of recommendations reveal why they might like them. Explanations provide many benefits, from improving user satisfaction to helping users make better decisions. This paper introduces tagsplanations, which are explanations based on community tags. Tagsplanations have two key components: tag relevance, the degree to which a tag describes an item, and tag preference, the user's sentiment toward a tag. We develop novel algorithms for estimating tag relevance and tag preference, and we conduct a user study exploring the roles of tag relevance and tag preference in promoting effective tagsplanations. We also examine which types of tags are most useful for tagsplanations.",
"title": ""
},
{
"docid": "5fe43f0b23b0cfd82b414608e60db211",
"text": "The Distress Analysis Interview Corpus (DAIC) contains clinical interviews designed to support the diagnosis of psychological distress conditions such as anxiety, depression, and post traumatic stress disorder. The interviews are conducted by humans, human controlled agents and autonomous agents, and the participants include both distressed and non-distressed individuals. Data collected include audio and video recordings and extensive questionnaire responses; parts of the corpus have been transcribed and annotated for a variety of verbal and non-verbal features. The corpus has been used to support the creation of an automated interviewer agent, and for research on the automatic identification of psychological distress.",
"title": ""
},
{
"docid": "7252835fc4cc75ed0dd74a6b12da822a",
"text": "Mammalian physiology and behavior are regulated by an internal time-keeping system, referred to as circadian rhythm. The circadian timing system has a hierarchical organization composed of the master clock in the suprachiasmatic nucleus (SCN) and local clocks in extra-SCN brain regions and peripheral organs. The circadian clock molecular mechanism involves a network of transcription-translation feedback loops. In addition to the clinical association between circadian rhythm disruption and mood disorders, recent studies have suggested a molecular link between mood regulation and circadian rhythm. Specifically, genetic deletion of the circadian nuclear receptor Rev-erbα induces mania-like behavior caused by increased midbrain dopaminergic (DAergic) tone at dusk. The association between circadian rhythm and emotion-related behaviors can be applied to pathological conditions, including neurodegenerative diseases. In Parkinson's disease (PD), DAergic neurons in the substantia nigra pars compacta progressively degenerate leading to motor dysfunction. Patients with PD also exhibit non-motor symptoms, including sleep disorder and neuropsychiatric disorders. Thus, it is important to understand the mechanisms that link the molecular circadian clock and brain machinery in the regulation of emotional behaviors and related midbrain DAergic neuronal circuits in healthy and pathological states. This review summarizes the current literature regarding the association between circadian rhythm and mood regulation from a chronobiological perspective, and may provide insight into therapeutic approaches to target psychiatric symptoms in neurodegenerative diseases involving circadian rhythm dysfunction.",
"title": ""
},
{
"docid": "112026af056b3350eceed0c6d0035260",
"text": "This paper presents a short-baseline real-time stereo vision system that is capable of the simultaneous and robust estimation of the ego-motion and of the 3D structure and the independent motion of thousands of points of the environment. Kalman filters estimate the position and velocity of world points in 3D Euclidean space. The six degrees of freedom of the ego-motion are obtained by minimizing the projection error of the current and previous clouds of static points. Experimental results with real data in indoor and outdoor environments demonstrate the robustness, accuracy and efficiency of our approach. Since the baseline is as short as 13cm, the device is head-mountable, and can be used by a visually impaired person. Our proposed system can be used to augment the perception of the user in complex dynamic environments.",
"title": ""
},
{
"docid": "4e3f56861c288cca8191a11d2125ede0",
"text": "A top-hat monopole Yagi antenna is presented to produce an end-fire radiation beam. The antenna has an extremely low profile and wide operating bandwidth. It consists of a folded top-hat monopole as the driven element and four short-circuited top-hat monopoles as parasitic elements. A broad bandwidth can be achieved by adjusting the different resonances introduced by the driven and parasitic elements. A prototype operating at the UHF band (f0 = 550 MHz) is fabricated and tested. Measured results show that a fractional bandwidth (|S11| <; -10 dB) of 20.5% is obtained while the antenna height is only λ0/28 at the center frequency.",
"title": ""
},
{
"docid": "70fea2037a5ca55718512c2f2243d387",
"text": "Malicious modification of hardware during design or fabrication has emerged as a major security concern. Such tampering (also referred to as Hardware Trojan) causes an integrated circuit (IC) to have altered functional behavior, potentially with disastrous consequences in safety-critical applications. Conventional design-time verification and post-manufacturing testing cannot be readily extended to detect hardware Trojans due to their stealthy nature, inordinately large number of possible instances and large variety in structure and operating mode. In this paper, we analyze the threat posed by hardware Trojans and the methods of deterring them. We present a Trojan taxonomy, models of Trojan operations and a review of the state-of-the-art Trojan prevention and detection techniques. Next, we discuss the major challenges associated with this security concern and future research needs to address them.",
"title": ""
},
{
"docid": "1db450f3e28907d6940c87d828fc1566",
"text": "The task of colorizing black and white images has previously been explored for natural images. In this paper we look at the task of colorization on a different domain: webtoons. To our knowledge this type of dataset hasn't been used before. Webtoons are usually produced in color thus they make a good dataset for analyzing different colorization models. Comics like webtoons also present some additional challenges over natural images, such as occlusion by speech bubbles and text. First we look at some of the previously introduced models' performance on this task and suggest modifications to address their problems. We propose a new model composed of two networks; one network generates sparse color information and a second network uses this generated color information as input to apply color to the whole image. These two networks are trained end-to-end. Our proposed model solves some of the problems observed with other architectures, resulting in better colorizations.",
"title": ""
},
{
"docid": "f1755e987da9d915eb9969e7b1eeb8dc",
"text": "Recent advances in distant-talking ASR research have confirmed that speech enhancement is an essential technique for improving the ASR performance, especially in the multichannel scenario. However, speech enhancement inevitably distorts speech signals, which can cause significant degradation when enhanced signals are used as training data. Thus, distant-talking ASR systems often resort to using the original noisy signals as training data and the enhanced signals only at test time, and give up on taking advantage of enhancement techniques in the training stage. This paper proposes to make use of enhanced features in the student-teacher learning paradigm. The enhanced features are used as input to a teacher network to obtain soft targets, while a student network tries to mimic the teacher network's outputs using the original noisy features as input, so that speech enhancement is implicitly performed within the student network. Compared with conventional student-teacher learning, which uses a better network as teacher, the proposed self-supervised method uses better (enhanced) inputs to a teacher. This setup matches the above scenario of making use of enhanced features in network training. Experiments with the CHiME-4 challenge real dataset show significant ASR improvements with an error reduction rate of 12% in the single-channel track and 15% in the 2-channel track, respectively, by using 6-channel beamformed features for the teacher model.",
"title": ""
},
{
"docid": "cebdedb344f2ba7efb95c2933470e738",
"text": "To address this shortcoming, we propose a method for training binary neural networks with a mixture of bits, yielding effectively fractional bitwidths. We demonstrate that our method is not only effective in allowing finer tuning of the speed to accuracy trade-off, but also has inherent representational advantages. Middle-Out Algorithm Heterogeneous Bitwidth Binarization in Convolutional Neural Networks",
"title": ""
},
{
"docid": "141ecc1fe0c33bfd647e4d62956f0212",
"text": "a Emerging Markets Research Centre (EMaRC), School of Management, Swansea University Bay Campus, Fabian Way, Swansea SA1 8EN, Wales, UK b Section of Information & Communication Technology, Faculty of Technology, Policy, and Management, Delft University of Technology, The Netherlands c Nottingham Business School, Nottingham Trent University, UK d School of Management, Swansea University Bay Campus, Fabian Way, Swansea SA1 8EN, Wales, UK e School of Management, Swansea University Bay Campus, Fabian Way, Crymlyn Burrows, Swansea, SA1 8EN, Wales, UK",
"title": ""
},
{
"docid": "dc7262a2e046bd5f633e9f5fbb5f1830",
"text": "We investigate a dual-annular-ring CMUT array configuration for forward-looking intravascular ultrasound (FL-IVUS) imaging. The array consists of separate, concentric transmit and receive ring arrays built on the same silicon substrate. This configuration has the potential for independent optimization of each array and uses the silicon area more effectively without any particular drawback. We designed and fabricated a 1 mm diameter test array which consists of 24 transmit and 32 receive elements. We investigated synthetic phased array beamforming with a non-redundant subset of transmit-receive element pairs of the dual-annular-ring array. For imaging experiments, we designed and constructed a programmable FPGA-based data acquisition and phased array beamforming system. Pulse-echo measurements along with imaging simulations suggest that dual-ring-annular array should provide performance suitable for real-time FL-IVUS applications",
"title": ""
},
{
"docid": "39539ad490065e2a81b6c07dd11643e5",
"text": "Stock prices are formed based on short and/or long-term commercial and trading activities that reflect different frequencies of trading patterns. However, these patterns are often elusive as they are affected by many uncertain political-economic factors in the real world, such as corporate performances, government policies, and even breaking news circulated across markets. Moreover, time series of stock prices are non-stationary and non-linear, making the prediction of future price trends much challenging. To address them, we propose a novel State Frequency Memory (SFM) recurrent network to capture the multi-frequency trading patterns from past market data to make long and short term predictions over time. Inspired by Discrete Fourier Transform (DFT), the SFM decomposes the hidden states of memory cells into multiple frequency components, each of which models a particular frequency of latent trading pattern underlying the fluctuation of stock price. Then the future stock prices are predicted as a nonlinear mapping of the combination of these components in an Inverse Fourier Transform (IFT) fashion. Modeling multi-frequency trading patterns can enable more accurate predictions for various time ranges: while a short-term prediction usually depends on high frequency trading patterns, a long-term prediction should focus more on the low frequency trading patterns targeting at long-term return. Unfortunately, no existing model explicitly distinguishes between various frequencies of trading patterns to make dynamic predictions in literature. The experiments on the real market data also demonstrate more competitive performance by the SFM as compared with the state-of-the-art methods.",
"title": ""
},
{
"docid": "7e846a58cbf49231c41789d1190bce67",
"text": "We study the problem of zero-shot classification in which we don't have labeled data in target domain. Existing approaches learn a model from source domain and apply it without adaptation to target domain, which is prone to domain shift problem. To solve the problem, we propose a novel Learning Discriminative Instance Attribute(LDIA) method. Specifically, we learn a projection matrix for both the source and target domain jointly and also use prototype in the attribute space to regularise the learned projection matrix. Therefore, the information of the source domain can be effectively transferred to the target domain. The experimental results on two benchmark datasets demonstrate that the proposed LDIA method exceeds competitive approaches for zero-shot classification task.",
"title": ""
},
{
"docid": "a2f062482157efb491ca841cc68b7fd3",
"text": "Coping with malware is getting more and more challenging, given their relentless growth in complexity and volume. One of the most common approaches in literature is using machine learning techniques, to automatically learn models and patterns behind such complexity, and to develop technologies to keep pace with malware evolution. This survey aims at providing an overview on the way machine learning has been used so far in the context of malware analysis in Windows environments, i.e. for the analysis of Portable Executables. We systematize surveyed papers according to their objectives (i.e., the expected output), what information about malware they specifically use (i.e., the features), and what machine learning techniques they employ (i.e., what algorithm is used to process the input and produce the output). We also outline a number of issues and challenges, including those concerning the used datasets, and identify the main current topical trends and how to possibly advance them. In particular, we introduce the novel concept of malware analysis economics, regarding the study of existing trade-offs among key metrics, such as analysis accuracy and economical costs.",
"title": ""
},
{
"docid": "1e0eade3cc92eb79160aeac35a3a26d1",
"text": "Global environmental concerns and the escalating demand for energy, coupled with steady progress in renewable energy technologies, are opening up new opportunities for utilization of renewable energy vailable online 12 January 2011",
"title": ""
},
{
"docid": "9b575699e010919b334ac3c6bc429264",
"text": "Over the last decade, keyword search over relational data has attracted considerable attention. A possible approach to face this issue is to transform keyword queries into one or more SQL queries to be executed by the relational DBMS. Finding these queries is a challenging task since the information they represent may be modeled across different elements where the data of interest is stored, but also to find out how these elements are interconnected. All the approaches that have been proposed so far provide a monolithic solution. In this work, we, instead, divide the problem into three steps: the first one, driven by the user's point of view, takes into account what the user has in mind when formulating keyword queries, the second one, driven by the database perspective, considers how the data is represented in the database schema. Finally, the third step combines these two processes. We present the theory behind our approach, and its implementation into a system called QUEST (QUEry generator for STructured sources), which has been deeply tested to show the efficiency and effectiveness of our approach. Furthermore, we report on the outcomes of a number of experimental results that we",
"title": ""
}
] |
scidocsrr
|
12a0d321afbdbe6c5dac5f676d9ea587
|
Multi-objective Architecture Search for CNNs
|
[
{
"docid": "af25bc1266003202d3448c098628aee8",
"text": "Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks. However, due to the model capacity required to capture such representations, they are often susceptible to overfitting and therefore require proper regularization in order to generalize well. In this paper, we show that the simple regularization technique of randomly masking out square regions of input during training, which we call cutout, can be used to improve the robustness and overall performance of convolutional neural networks. Not only is this method extremely easy to implement, but we also demonstrate that it can be used in conjunction with existing forms of data augmentation and other regularizers to further improve model performance. We evaluate this method by applying it to current state-of-the-art architectures on the CIFAR10, CIFAR-100, and SVHN datasets, yielding new state-ofthe-art results of 2.56%, 15.20%, and 1.30% test error respectively. Code available at https://github.com/ uoguelph-mlrg/Cutout.",
"title": ""
}
] |
[
{
"docid": "0b56f9c9ec0ce1db8dcbfd2830b2536b",
"text": "In many statistical problems, a more coarse-grained model may be suitable for population-level behaviour, whereas a more detailed model is appropriate for accurate modelling of individual behaviour. This raises the question of how to integrate both types of models. Methods such as posterior regularization follow the idea of generalized moment matching, in that they allow matching expectations between two models, but sometimes both models are most conveniently expressed as latent variable models. We propose latent Bayesian melding, which is motivated by averaging the distributions over populations statistics of both the individual-level and the population-level models under a logarithmic opinion pool framework. In a case study on electricity disaggregation, which is a type of singlechannel blind source separation problem, we show that latent Bayesian melding leads to significantly more accurate predictions than an approach based solely on generalized moment matching.",
"title": ""
},
{
"docid": "2ec0db3840965993e857b75bd87a43b7",
"text": "Light field cameras capture full spatio-angular information of the light field, and enable many novel photographic and scientific applications. It is often stated that there is a fundamental trade-off between spatial and angular resolution, but there has been limited understanding of this trade-off theoretically or numerically. Moreover, it is very difficult to evaluate the design of a light field camera because a new design is usually reported with its prototype and rendering algorithm, both of which affect resolution.\n In this article, we develop a light transport framework for understanding the fundamental limits of light field camera resolution. We first derive the prefiltering model of lenslet-based light field cameras. The main novelty of our model is in considering the full space-angle sensitivity profile of the photosensor—in particular, real pixels have nonuniform angular sensitivity, responding more to light along the optical axis rather than at grazing angles. We show that the full sensor profile plays an important role in defining the performance of a light field camera. The proposed method can model all existing lenslet-based light field cameras and allows to compare them in a unified way in simulation, independent of the practical differences between particular prototypes. We further extend our framework to analyze the performance of two rendering methods: the simple projection-based method and the inverse light transport process. We validate our framework with both flatland simulation and real data from the Lytro light field camera.",
"title": ""
},
{
"docid": "3c14ce0d697c69f554a842c1dc997d66",
"text": "We propose a novel segmentation approach based on deep convolutional encoder networks and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network that has both convolutional and deconvolutional layers, and combines feature extraction and segmentation prediction in a single model. The joint training of the feature extraction and prediction layers allows the model to automatically learn features that are optimized for accuracy for any given combination of image types. In contrast to existing automatic feature learning approaches, which are typically patch-based, our model learns features from entire images, which eliminates patch selection and redundant calculations at the overlap of neighboring patches and thereby speeds up the training. Our network also uses a novel objective function that works well for segmenting underrepresented classes, such as MS lesions. We have evaluated our method on the publicly available labeled cases from the MS lesion segmentation challenge 2008 data set, showing that our method performs comparably to the state-of-theart. In addition, we have evaluated our method on the images of 500 subjects from an MS clinical trial and varied the number of training samples from 5 to 250 to show that the segmentation performance can be greatly improved by having a representative data set.",
"title": ""
},
{
"docid": "6ffbb212bec4c90c6b37a9fde3fd0b4c",
"text": "In this paper, we address a new research problem on active learning from data streams where data volumes grow continuously and labeling all data is considered expensive and impractical. The objective is to label a small portion of stream data from which a model is derived to predict newly arrived instances as accurate as possible. In order to tackle the challenges raised by data streams' dynamic nature, we propose a classifier ensembling based active learning framework which selectively labels instances from data streams to build an accurate classifier. A minimal variance principle is introduced to guide instance labeling from data streams. In addition, a weight updating rule is derived to ensure that our instance labeling process can adaptively adjust to dynamic drifting concepts in the data. Experimental results on synthetic and real-world data demonstrate the performances of the proposed efforts in comparison with other simple approaches.",
"title": ""
},
{
"docid": "cb25c3d33e6a4544ec1e938919566caa",
"text": "Context: Systematic Review (SR) is a methodology used to find and aggregate relevant existing evidence about a specific research topic of interest. It can be very time-consuming depending on the number of gathered studies that need to be analyzed by researchers. One of the relevant tools found in the literature and preliminarily evaluated by researchers of SRs is StArt, which supports the whole SR process. It has been downloaded by users from more than twenty countries. Objective: To present new features available in StArt to support SR activities. Method: Based on users' feedback and the literature, new features were implemented and are available in the tool, like the SCAS strategy, snowballing techniques, the frequency of keywords and a word cloud for search string refining, collaboration among reviewers, and the StArt online community. Results: The new features, according to users' positive feedback, make the tool more robust to support the conduct of SRs. Conclusion: StArt is a tool that has been continuously developed such that new features are often available to improve the support for the SR process. The StArt online community can improve the interaction among users, facilitating the identification of improvements and new useful features.",
"title": ""
},
{
"docid": "50840b0308e1f884b61c9f824b1bf17f",
"text": "The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general purpose multi-core architectures. This model allows programmers to specify the structure of a program as a set of filters that act upon data, and a set of communication channels between them. The StreamIt graphs describe task, data and pipeline parallelism which can be exploited on modern Graphics Processing Units (GPUs), as they support abundant parallelism in hardware. In this paper, we describe the challenges in mapping StreamIt to GPUs and propose an efficient technique to software pipeline the execution of stream programs on GPUs. We formulate this problem --- both scheduling and assignment of filters to processors --- as an efficient Integer Linear Program (ILP), which is then solved using ILP solvers. We also describe a novel buffer layout technique for GPUs which facilitates exploiting the high memory bandwidth available in GPUs. The proposed scheduling utilizes both the scalar units in GPU, to exploit data parallelism, and multiprocessors, to exploit task and pipeline parallelism. Further it takes into consideration the synchronization and bandwidth limitations of GPUs, and yields speedups between 1.87X and 36.83X over a single threaded CPU.",
"title": ""
},
{
"docid": "3ae8865602c53847a0eec298c698a743",
"text": "BACKGROUND\nA low ratio of utilization of healthcare services in postpartum women may contribute to maternal deaths during the postpartum period. The maternal mortality ratio is high in the Philippines. The aim of this study was to examine the current utilization of healthcare services and the effects on the health of women in the Philippines who delivered at home.\n\n\nMETHODS\nThis was a cross-sectional analytical study, based on a self-administrated questionnaire, conducted from March 2015 to February 2016 in Muntinlupa, Philippines. Sixty-three postpartum women who delivered at home or at a facility were enrolled for this study. A questionnaire containing questions regarding characteristics, utilization of healthcare services, and abnormal symptoms during postpartum period was administered. To analyze the questionnaire data, the sample was divided into delivery at home and delivery at a facility. Chi-square test, Fisher's exact test, and Mann-Whitney U test were used.\n\n\nRESULTS\nThere were significant differences in the type of birth attendant, area of residence, monthly income, and maternal and child health book usage between women who delivered at home and those who delivered at a facility (P<0.01). There was significant difference in the utilization of antenatal checkup (P<0.01) during pregnancy, whilst there was no significant difference in utilization of healthcare services during the postpartum period. Women who delivered at home were more likely to experience feeling of irritated eyes and headaches, and continuous abdominal pain (P<0.05).\n\n\nCONCLUSION\nFinancial and environmental barriers might hinder the utilization of healthcare services by women who deliver at home in the Philippines. Low utilization of healthcare services in women who deliver at home might result in more frequent abnormal symptoms during postpartum.",
"title": ""
},
{
"docid": "1e8e4364427d18406594af9ad3a73a28",
"text": "The Internet Addiction Scale (IAS) is a self-report instrument based on the 7 Diagnostic and Statistical Manual of Mental Disorders (4th ed.; American Psychiatric Association, 1994) substance dependence criteria and 2 additional criteria recommended by Griffiths (1998). The IAS was administered to 233 undergraduates along with 4 measures pertaining to loneliness and boredom proneness. An item reliability analysis reduced the initial scale from 36 to 31 items (with a Cronbach's alpha of .95). A principal-components analysis indicated that the IAS consisted mainly of one factor. Multiple regression analyses revealed that Family and Social Loneliness and Boredom Proneness were significantly correlated with the IAS; Family and Social Loneliness uniquely predicted IAS scores. No evidence for widespread Internet addiction was found.",
"title": ""
},
{
"docid": "5093e3d152d053a9f3322b34096d3e4e",
"text": "To create conversational systems working in actual situations, it is crucial to assume that they interact with multiple agents. In this work, we tackle addressee and response selection for multi-party conversation, in which systems are expected to select whom they address as well as what they say. The key challenge of this task is to jointly model who is talking about what in a previous context. For the joint modeling, we propose two modeling frameworks: 1) static modeling and 2) dynamic modeling. To show benchmark results of our frameworks, we created a multi-party conversation corpus. Our experiments on the dataset show that the recurrent neural network based models of our frameworks robustly predict addressees and responses in conversations with a large number of agents.",
"title": ""
},
{
"docid": "18b7c2a57ab593810574a6975d6dc72e",
"text": "Explored the factors that influence knowledge and attitudes toward anemia in pregnancy (AIP) in southeastern Nigeria. We surveyed 1500 randomly selected women who delivered babies within 6 months of the survey using a questionnaire. Twelve focus group discussions were held with the grandmothers and fathers of the new babies, respectively. Six in-depth interviews were held with health workers in the study communities. Awareness of AIP was high. Knowledge of its prevention and management was poor with a median score of 10 points on a 50-point scale. Living close to a health facility (p = 0.031), having post-secondary education (p <0.001), being in paid employment (p = 0.017) and being older (p = 0.027) influenced knowledge of AIP. Practices for the prevention and management of AIP were affected by a high level of education (p = 0.034) and having good knowledge of AIP issues (p <0.001). The qualitative data revealed that unorthodox means were employed in response to anemia in pregnancy. This is often delayed until complications set in. Many viewed anemia as a normal phenomenon among pregnant women. AIP awareness is high among the populations. However, management is poor because of poor knowledge of signs and timely appropriate treatment. Prompt and appropriate management of AIP is germane for positive pregnancy outcomes. Anemia-related public education is an urgent need in Southeast Nigeria. Extra consideration of the diverse social development levels of the populations should be taken into account when designing new and improving current prevention and management programs for anemia in pregnancy.",
"title": ""
},
{
"docid": "ec4dcce4f53e38909be438beeb62b1df",
"text": " A very efficient protocol for plant regeneration from two commercial Humulus lupulus L. (hop) cultivars, Brewers Gold and Nugget has been established, and the morphogenetic potential of explants cultured on Adams modified medium supplemented with several concentrations of cytokinins and auxins studied. Zeatin at 4.56 μm produced direct caulogenesis and caulogenic calli in both cultivars. Subculture of these calli on Adams modified medium supplemented with benzylaminopurine (4.4 μm) and indolebutyric acid (0.49 μm) promoted shoot regeneration which gradually increased up to the third subculture. Regeneration rates of 60 and 29% were achieved for Nugget and Brewers Gold, respectively. By selection of callus lines, it has been possible to maintain caulogenic potential for 14 months. Regenerated plants were successfully transferred to field conditions.",
"title": ""
},
{
"docid": "4d5d43c8f8d9bc5753f39e7978b23a0b",
"text": "The future of high-performance computing is likely to rely on the ability to efficiently exploit huge amounts of parallelism. One way of taking advantage of this parallelism is to formulate problems as \"embarrassingly parallel\" Monte-Carlo simulations, which allow applications to achieve a linear speedup over multiple computational nodes, without requiring a super-linear increase in inter-node communication. However, such applications are reliant on a cheap supply of high quality random numbers, particularly for the three main maximum entropy distributions: uniform, used as a general source of randomness; Gaussian, for discrete-time simulations; and exponential, for discrete-event simulations. In this paper we look at four different types of platform: conventional multi-core CPUs (Intel Core2); GPUs (NVidia GTX 200); FPGAs (Xilinx Virtex-5); and Massively Parallel Processor Arrays (Ambric AM2000). For each platform we determine the most appropriate algorithm for generating each type of number, then calculate the peak generation rate and estimated power efficiency for each device.",
"title": ""
},
{
"docid": "ca2cc9e21fd1aacc345238c1d609bedf",
"text": "The aim of the present study was to evaluate the long-term effect of implants installed in different dental areas in adolescents. The sample consisted of 18 subjects with missing teeth (congenital absence or trauma). The patients were of different chronological ages (between 13 and 17 years) and of different skeletal maturation. In all subjects, the existing permanent teeth were fully erupted. In 15 patients, 29 single implants (using the Brånemark technique) were installed to replace premolars, canines, and upper incisors. In three patients with extensive aplasia, 18 implants were placed in various regions. The patients were followed during a 10-year period, the first four years annually and then every second year. Photographs, study casts, peri-apical radiographs, lateral cephalograms, and body height measurements were recorded at each control. The results show that dental implants are a good treatment option for replacing missing teeth in adolescents, provided that the subject's dental and skeletal development is complete. However, different problems are related to the premolar and the incisor regions, which have to be considered in the total treatment planning. Disadvantages may be related to the upper incisor region, especially for lateral incisors, due to slight continuous eruption of adjacent teeth and craniofacial changes post-adolescence. Periodontal problems may arise, with marginal bone loss around the adjacent teeth and bone loss buccally to the implants. The shorter the distance between the implant and the adjacent teeth, the larger the reduction of marginal bone level. Before placement of the implant sufficient space must be gained in the implant area, and the adjacent teeth uprighted and paralleled, even in the apical area, using non-intrusive movements. In the premolar area, excess space is needed, not only in the mesio-distal, but above all in the bucco-lingual direction. Thus, an infraoccluded lower deciduous molar should be extracted shortly before placement of the implant to avoid reduction of the bucco-lingual bone volume. Oral rehabilitation with implant-supported prosthetic constructions seems to be a good alternative in adolescents with extensive aplasia, provided that craniofacial growth has ceased or is almost complete.",
"title": ""
},
{
"docid": "d9df73b22013f7055fe8ff28f3590daa",
"text": "The iterations of many sparse estimation algorithms are comprised of a fixed linear filter cascaded with a thresholding nonlinearity, which collectively resemble a typical neural network layer. Consequently, a lengthy sequence of algorithm iterations can be viewed as a deep network with shared, hand-crafted layer weights. It is therefore quite natural to examine the degree to which a learned network model might act as a viable surrogate for traditional sparse estimation in domains where ample training data is available. While the possibility of a reduced computational budget is readily apparent when a ceiling is imposed on the number of layers, our work primarily focuses on estimation accuracy. In particular, it is well-known that when a signal dictionary has coherent columns, as quantified by a large RIP constant, then most tractable iterative algorithms are unable to find maximally sparse representations. In contrast, we demonstrate both theoretically and empirically the potential for a trained deep network to recover minimal `0-norm representations in regimes where existing methods fail. The resulting system is deployed on a practical photometric stereo estimation problem, where the goal is to remove sparse outliers that can disrupt the estimation of surface normals from a 3D scene.",
"title": ""
},
{
"docid": "2c5e8e4025572925e72e9f51db2b3d95",
"text": "This article reveals our work on refactoring plug-ins for Eclipse's C++ Development Tooling (CDT).\n With CDT a reliable open source IDE exists for C/C++ developers. Unfortunately it has been lacking of overarching refactoring support. There used to be just one single refactoring - Rename. But our plug-in provides several new refactorings which support a C++ developer in his everyday work.",
"title": ""
},
{
"docid": "f9b110890c90d48b6d2f84aa419c1598",
"text": "Surprise describes a range of phenomena from unexpected events to behavioral responses. We propose a novel measure of surprise and use it for surprise-driven learning. Our surprise measure takes into account data likelihood as well as the degree of commitment to a belief via the entropy of the belief distribution. We find that surprise-minimizing learning dynamically adjusts the balance between new and old information without the need of knowledge about the temporal statistics of the environment. We apply our framework to a dynamic decision-making task and a maze exploration task. Our surprise-minimizing framework is suitable for learning in complex environments, even if the environment undergoes gradual or sudden changes, and it could eventually provide a framework to study the behavior of humans and animals as they encounter surprising events.",
"title": ""
},
{
"docid": "e43cc845368e69ef1278e7109d4d8d6f",
"text": "Estimating six degrees of freedom poses of a planar object from images is an important problem with numerous applications ranging from robotics to augmented reality. While the state-of-the-art Perspective-n-Point algorithms perform well in pose estimation, the success hinges on whether feature points can be extracted and matched correctly on target objects with rich texture. In this work, we propose a two-step robust direct method for six-dimensional pose estimation that performs accurately on both textured and textureless planar target objects. First, the pose of a planar target object with respect to a calibrated camera is approximately estimated by posing it as a template matching problem. Second, each object pose is refined and disambiguated using a dense alignment scheme. Extensive experiments on both synthetic and real datasets demonstrate that the proposed direct pose estimation algorithm performs favorably against state-of-the-art feature-based approaches in terms of robustness and accuracy under varying conditions. Furthermore, we show that the proposed dense alignment scheme can also be used for accurate pose tracking in video sequences.",
"title": ""
},
{
"docid": "27f1f3791b7a381f92833d4983620b7e",
"text": "Current action recognition methods heavily rely on trimmed videos for model training. However, it is expensive and time-consuming to acquire a large-scale trimmed video dataset. This paper presents a new weakly supervised architecture, called UntrimmedNet, which is able to directly learn action recognition models from untrimmed videos without the requirement of temporal annotations of action instances. Our UntrimmedNet couples two important components, the classification module and the selection module, to learn the action models and reason about the temporal duration of action instances, respectively. These two components are implemented with feed-forward networks, and UntrimmedNet is therefore an end-to-end trainable architecture. We exploit the learned models for action recognition (WSR) and detection (WSD) on the untrimmed video datasets of THUMOS14 and ActivityNet. Although our UntrimmedNet only employs weak supervision, our method achieves performance superior or comparable to that of those strongly supervised approaches on these two datasets.",
"title": ""
},
{
"docid": "a29d666fe1135bb60a75f1cecf85e31c",
"text": "Approximate computing aims for efficient execution of workflows where an approximate output is sufficient instead of the exact output. The idea behind approximate computing is to compute over a representative sample instead of the entire input dataset. Thus, approximate computing — based on the chosen sample size — can make a systematic trade-off between the output accuracy and computation efficiency. Unfortunately, the state-of-the-art systems for approximate computing primarily target batch analytics, where the input data remains unchanged during the course of sampling. Thus, they are not well-suited for stream analytics. This motivated the design of StreamApprox— a stream analytics system for approximate computing. To realize this idea, we designed an online stratified reservoir sampling algorithm to produce approximate outputwith rigorous error bounds. Importantly, our proposed algorithm is generic and can be applied to two prominent types of stream processing systems: (1) batched stream processing such asApache Spark Streaming, and (2) pipelined stream processing such as Apache Flink. To showcase the effectiveness of our algorithm,we implemented StreamApprox as a fully functional prototype based on Apache Spark Streaming and Apache Flink. We evaluated StreamApprox using a set of microbenchmarks and real-world case studies. Our results show that Sparkand Flink-based StreamApprox systems achieve a speedup of 1.15×—3× compared to the respective native Spark Streaming and Flink executions, with varying sampling fraction of 80% to 10%. Furthermore, we have also implemented an improved baseline in addition to the native execution baseline — a Spark-based approximate computing system leveraging the existing sampling modules in Apache Spark. Compared to the improved baseline, our results show that StreamApprox achieves a speedup 1.1×—2.4× while maintaining the same accuracy level. This technical report is an extended version of our conference publication [39].",
"title": ""
},
{
"docid": "9bc1d596de6471e23bd678febe7d962d",
"text": "Identifying paraphrase in Malayalam language is difficult task because it is a highly agglutinative language and the linguistic structure in Malayalam language is complex compared to other languages. Here we use individual words synonyms to find the similarity between two sentences. In this paper, cosine similarity method is used to find the paraphrases in Malayalam language. In this paper we present the observations on sentence similarity between two Malayalam sentences using cosine similarity method, we used test data of 900 and 1400 sentence pairs of FIRE 2016 Malayalam corpus that used in two iterations to present and obtained an accuracy of 0.8 and 0.59.",
"title": ""
}
] |
scidocsrr
|
b2507ae1279099755dca255a3b9efc76
|
Large Margin Neural Language Models
|
[
{
"docid": "755f7e93dbe43a0ed12eb90b1d320cb2",
"text": "This paper presents a deep architecture for learning a similarity metric on variablelength character sequences. The model combines a stack of character-level bidirectional LSTM’s with a Siamese architecture. It learns to project variablelength strings into a fixed-dimensional embedding space by using only information about the similarity between pairs of strings. This model is applied to the task of job title normalization based on a manually annotated taxonomy. A small data set is incrementally expanded and augmented with new sources of variance. The model learns a representation that is selective to differences in the input that reflect semantic differences (e.g., “Java developer” vs. “HR manager”) but also invariant to nonsemantic string differences (e.g., “Java developer” vs. “Java programmer”).",
"title": ""
}
] |
[
{
"docid": "01ebd4b68fb94fc5defaff25c2d294b0",
"text": "High data rate E-band (71 GHz- 76 GHz, 81 GHz - 86 GHz, 92 GHz - 95 GHz) communication systems will benefit from power amplifiers that are more than twice as powerful than commercially available GaAs pHEMT MMICs. We report development of three stage GaN MMIC power amplifiers for E-band radio applications that produce 500 mW of saturated output power in CW mode and have > 12 dB of associated power gain. The output power density from 300 mum output gate width GaN MMICs is seven times higher than the power density of commercially available GaAs pHEMT MMICs in this frequency range.",
"title": ""
},
{
"docid": "b3fb796dc943121e4a8114f8ba5e8d97",
"text": "HyperLogLog Counting is widely used in cardinality estimation. It is the foundation of many algorithms in data analysis, commodity recommendation and database optimization. Facing the large scale internet business like electronic commerce, internet companies have an urgent requirement of distributed real-time cardinality estimation with high accuracy and low time cost. In this paper, we propose a distributed real-time cardinality estimation algorithm named Hermes. Hermes adjusts the estimated cardinality dynamically according to the result of HyperLogLog Counting and also optimizes the data distribution strategy of existing distributed cardinality estimation algorithms. Experiments have been carried out and the results show that Hermes has lower estimation error and time cost compared with existing algorithms.",
"title": ""
},
{
"docid": "cd0c68845416f111307ae7e14bfb7491",
"text": "Traditionally, static units of analysis such as administrative units are used when studying obesity. However, using these fixed contextual units ignores environmental influences experienced by individuals in areas beyond their residential neighborhood and may render the results unreliable. This problem has been articulated as the uncertain geographic context problem (UGCoP). This study investigates the UGCoP through exploring the relationships between the built environment and obesity based on individuals' activity space. First, a survey was conducted to collect individuals' daily activity and weight information in Guangzhou in January 2016. Then, the data were used to calculate and compare the values of several built environment variables based on seven activity space delineations, including home buffers, workplace buffers (WPB), fitness place buffers (FPB), the standard deviational ellipse at two standard deviations (SDE2), the weighted standard deviational ellipse at two standard deviations (WSDE2), the minimum convex polygon (MCP), and road network buffers (RNB). Lastly, we conducted comparative analysis and regression analysis based on different activity space measures. The results indicate that significant differences exist between variables obtained with different activity space delineations. Further, regression analyses show that the activity space delineations used in the analysis have a significant influence on the results concerning the relationships between the built environment and obesity. The study sheds light on the UGCoP in analyzing the relationships between obesity and the built environment.",
"title": ""
},
{
"docid": "8cfe884fdc795b8af361c538ab7795ea",
"text": "It sounds good when knowing the the mechanics of earthquakes and faulting 2nd edition in this website. This is one of the books that many people looking for. In the past, many people ask about this book as their favourite book to read and collect. And now, we present hat you need quickly. It seems to be so happy to offer you this famous book. It will not become a unity of the way for you to get amazing benefits at all. But, it will serve something that will let you get the best time and moment to spend for reading the book.",
"title": ""
},
{
"docid": "abaa0bb3d5e60dce3bee4cffad64bc7c",
"text": "We argue that the novel combination of type classes and existential types in a single language yields significant expressive power. We explore this combination in the context of higher-order functional languages with static typing, parametric polymorphism, algebraic data types, and Hindley-Milner type inference. Adding existential types to an existing functional language that already features type classes requires only a minor syntactic extension. We first demonstrate how to provide existential quantification over type classes by extending the syntax of algebraic data type definitions and give examples of possible uses. We then develop a type system and a type inference algorithm for the resulting language. Finally, we present a formal semantics by translation to an implicitly-typed second-order λ-calculus and show that the type system is semantically sound. Our extension has been implemented in the Chalmers Haskell B. system, and all examples from this paper have been developed using this system.",
"title": ""
},
{
"docid": "fed9defe1a4705390d72661f96b38519",
"text": "Multivariate resultants generalize the Sylvester resultant of two polynomials and characterize the solvability of a polynomial system. They also reduce the computation of all common roots to a problem in linear algebra. We propose a determinantal formula for the sparse resultant of an arbitrary system of n + 1 polynomials in n variables. This resultant generalizes the classical one and has significantly lower degree for polynomials that are sparse in the sense that their mixed volume is lower than their Bézout number. Our algorithm uses a mixed polyhedral subdivision of the Minkowski sum of the Newton polytopes in order to construct a Newton matrix. Its determinant is a nonzero multiple of the sparse resultant and the latter equals the GCD of at most n + 1 such determinants. This construction implies a restricted version of an effective sparse Nullstellensatz. For an arbitrary specialization of the coefficients, there are two methods that use one extra variable and yield the sparse resultant. This is the first algorithm to handle the general case with complexity polynomial in the resultant degree and simply exponential in n. We conjecture its extension to producing an exact rational expression for the sparse resultant.",
"title": ""
},
{
"docid": "e1366b0128c4d76addd57bb2b02a19b5",
"text": "OBJECTIVE\nThe present study examined the association between child sexual abuse (CSA) and sexual health outcomes in young adult women. Maladaptive coping strategies and optimism were investigated as possible mediators and moderators of this relationship.\n\n\nMETHOD\nData regarding sexual abuse, coping, optimism and various sexual health outcomes were collected using self-report and computerized questionnaires with a sample of 889 young adult women from the province of Quebec aged 20-23 years old.\n\n\nRESULTS\nA total of 31% of adult women reported a history of CSA. Women reporting a severe CSA were more likely to report more adverse sexual health outcomes including suffering from sexual problems and engaging in more high-risk sexual behaviors. CSA survivors involving touching only were at greater risk of reporting more negative sexual self-concept such as experiencing negative feelings during sex than were non-abused participants. Results indicated that emotion-oriented coping mediated outcomes related to negative sexual self-concept while optimism mediated outcomes related to both, negative sexual self-concept and high-risk sexual behaviors. No support was found for any of the proposed moderation models.\n\n\nCONCLUSIONS\nSurvivors of more severe CSA are more likely to engage in high-risk sexual behaviors that are potentially harmful to their health as well as to experience more sexual problems than women without a history of sexual victimization. Personal factors, namely emotion-oriented coping and optimism, mediated some sexual health outcomes in sexually abused women. The results suggest that maladaptive coping strategies and optimism regarding the future may be important targets for interventions optimizing sexual health and sexual well-being in CSA survivors.",
"title": ""
},
{
"docid": "228308cc4358b1723161ca8ae70e344c",
"text": "The Internet protocol suite, TCP/IP, was first proposed fifteen years ago. It was developed by the Defense Advanced Research Projects Agency (DARPA), and has been used widely in military and commercial systems. While there have been papers and specifications that describe how the protocols work, it is sometimes difficult to deduce from these why the protocol is as it is. For example, the Internet protocol is based on a connectionless or datagram mode of service. The motivation for this has been greatly misunderstood. This paper attempts to capture some of the early reasoning which shaped the Internet protocols.",
"title": ""
},
{
"docid": "db01e0c7c959e2f279afc5d78240ffca",
"text": "The implementation of an enterprise-wide Service Oriented Architecture (SOA) is a complex task. In most cases, evolutional approaches are used to handle this complexity. Maturity models are a possibility to plan and control such an evolution as they allow evaluating the current maturity and identifying current shortcomings. In order to support an SOA implementation, maturity models should also support in the selection of the most adequate maturity level and the deduction of a roadmap to this level. Existing SOA maturity models provide only weak assistance with the selection of an adequate maturity level. Most of them are developed by vendors of SOA products and often used to promote their products. In this paper, we introduce our independent SOA Maturity Model (iSOAMM), which is independent of the used technologies and products. In addition to the impacts on IT systems, it reflects the implications on organizational structures and governance. Furthermore, the iSOAMM lists the challenges, benefits and risks associated with each maturity level. This enables enterprises to select the most adequate maturity level for them, which is not necessarily the highest one.",
"title": ""
},
{
"docid": "01457bfad1b14fdf3702a0ff798faf9e",
"text": "Jubjitt P, Tingsabhat J, Chaiwatcharaporn C. New PositionSpecific Movement Ability Test (PoSMAT) Protocol Suite and Norms for Talent Identification, Selection, and Personalized Training for Soccer Players. JEPonline 2017;20(1):59-82. The purpose of this study was to develop a soccer position-specific movement ability test (PoSMAT) Protocol Suite and establish their norms. Subjects consisted of six different position soccer players per team from six Thai Premier League 2013 teams. The first step was to identify position-specific high speed running/sprint speed with corresponding distances covered by TRAK PERFORMANCE software. The second step was to develop the PoSMAT Protocol Suite by incorporating position-specific movement patterns and speed-distance analyses from the first step into three test protocols for ATTK Attacker, CMCD Central Midfielder and Central Defender, and WMFB Wide Midfielder and Full Back with respect to the soccer players’ abilities in speed, agility, and cardiovascular endurance. The findings indicate that the PoSMAT Protocol Suite was statistically valid, objective, reliable, and discriminating. Also, PoSMAT norms from 360 Thai elite soccer players were established. Thus, the PoSMAT Protocol Suite and norms can be used for position-specific talent identification, selection for proper playing position placement, and individualized training to enhance the players’ soccer career.",
"title": ""
},
{
"docid": "24339633dd4292d41bb4b9493322c521",
"text": "In this paper we extend neural Turing machine (NTM) into a dynamic neural Turing machine (D-NTM) by introducing a trainable memory addressing scheme. This scheme maintains for each memory cell two separate vectors, content and address vectors. This allows the D-NTM to learn a wide variety of location-based addressing strategies including both linear and nonlinear ones. We implement the D-NTM with both soft, differentiable and hard, non-differentiable read/write mechanisms. We investigate the mechanisms and effects for learning to read and write to a memory through experiments on Facebook bAbI tasks using both a feedforward and GRU-controller. The D-NTM is evaluated on a set of the Facebook bAbI tasks and shown to outperform NTM and LSTM baselines.",
"title": ""
},
{
"docid": "1f3f352c7584fb6ec1924ca3621fb1fb",
"text": "The National Firearms Forensic Intelligence Database (NFFID (c) Crown Copyright 2003-2008) was developed by The Forensic Science Service (FSS) as an investigative tool for collating and comparing information from items submitted to the FSS to provide intelligence reports for the police and relevant government agencies. The purpose of these intelligence reports was to highlight current firearm and ammunition trends and their distribution within the country. This study reviews all the trends that have been highlighted by NFFID between September 2003 and September 2008. A total of 8887 guns of all types have been submitted to the FSS over the last 5 years, where an average of 21% of annual submissions are converted weapons. The makes, models, and modes of conversion of these weapons are described in detail. The number of trends identified by NFFID shows that this has been a valuable tool in the analysis of firearms-related crime.",
"title": ""
},
{
"docid": "2bbcdf5f3182262d3fcd6addc1e3f835",
"text": "Online handwritten Chinese text recognition (OHCTR) is a challenging problem as it involves a large-scale character set, ambiguous segmentation, and variable-length input sequences. In this paper, we exploit the outstanding capability of path signature to translate online pen-tip trajectories into informative signature feature maps, successfully capturing the analytic and geometric properties of pen strokes with strong local invariance and robustness. A multi-spatial-context fully convolutional recurrent network (MC-FCRN) is proposed to exploit the multiple spatial contexts from the signature feature maps and generate a prediction sequence while completely avoiding the difficult segmentation problem. Furthermore, an implicit language model is developed to make predictions based on semantic context within a predicting feature sequence, providing a new perspective for incorporating lexicon constraints and prior knowledge about a certain language in the recognition procedure. Experiments on two standard benchmarks, Dataset-CASIA and Dataset-ICDAR, yielded outstanding results, with correct rates of 97.50 and 96.58 percent, respectively, which are significantly better than the best result reported thus far in the literature.",
"title": ""
},
{
"docid": "4168faccc342463efff783608daa2190",
"text": "Making machines more intelligent can potentially make human life easier. A lot of research has gone into the field of artificial intelligence (AI) since the creation of first computers. However, today’s systems still lag behind humans’ general ability to think and learn. Reinforcement Learning (RL) is a framework where software agents learn by interaction with an environment. We investigate possibilities to use observations about human intelligence to make RL agents smarter. In particular, we tried several methods: 1) To use “Tagger” an unsupervised deep learning framework for perceptual grouping, to learn more usable abstract relationships between objects. 2) Make one RL algorithm (A3C) more data efficient to learn faster. 3) To conduct these experiments, we built a web based RL dashboard based on visualization tool visdom. Finally, we provide some concrete challenges to work on in the future.",
"title": ""
},
{
"docid": "a3157e48d7ef6ba77a66d0e1a5025efe",
"text": "Pattern recognition and Gesture recognition are the growing fields of research. Being a significant part in non verbal communication hand gestures are playing vital role in our daily life. Hand Gesture recognition system provides us an innovative, natural, user friendly way of interaction with the computer which is more familiar to the human beings. Gesture Recognition has a wide area of application including human machine interaction, sign language, immersive game technology etc. By keeping in mind the similarities of human hand shape with four fingers and one thumb, this paper aims to present a real time system for hand gesture recognition on the basis of detection of some meaningful shape based features like orientation, centre of mass (centroid), status of fingers, thumb in terms of raised or folded fingers of hand and their respective location in image. The approach introduced in this paper is totally depending on the shape parameters of the hand gesture. It does not consider any other mean of hand gesture recognition like skin color, texture because these image based features are extremely variant to different light conditions and other influences. To implement this approach we have utilized a simple web cam which is working on 20 fps with 7 mega pixel intensity. On having the input sequence of images through web cam it uses some pre-processing steps for removal of background noise and employs K-means clustering for segmenting the hand object from rest of the background, so that only segmented significant cluster or hand object is to be processed in order to calculate shape based features. This simple shape based approach to hand gesture recognition can identify around 45 different gestures on the bases of 5 bit binary string resulted as the output of this algorithm. This proposed implemented algorithm has been tested over 450 images and it gives approximate recognition rate of 94%.",
"title": ""
},
{
"docid": "3e5005ac4d28ce7e1360bb012199ce42",
"text": "Relying on social connections, online recommendation engines and other enabling technologies, consumers have constantly been increasing expectations and seek experiential value in online shopping. Since customers have more places and ways to shop than ever before, retailers – in order to be successful – must find ways to make online shopping pleasant and enjoyable. They have begun to enhance the online customer experience by incorporating game elements into their business processes, making online shopping not just attractive with innovative products and low prices, but also fun. This concept is known as gamification – a trending topic in both academia and business – and generally defined as the use of game thinking and elements in non-game contexts. In our study, we used a state-of-the-art framework (Octalysis) to analyze a sample of retailers from different industries operating on the European market. Based on an octagonal shape, Octalysis comprises 8 core drives that seek to explain the influence of certain gamification techniques on consumer motivation. Our study focused on determining (a) each retailer’s position in the octagon and (b) whether retailers in the same sector target the same core drives. Further, we suggest guidelines for academics and practitioners seeking to convert results into more and better ideas for online shopping.",
"title": ""
},
{
"docid": "b0ea0b7e3900b440cb4e1d5162c6830b",
"text": "Product Lifecycle Management (PLM) solutions have been serving as the basis for collaborative product definition, manufacturing, and service management in many industries. They capture and provide access to product and process information and preserve integrity of information throughout the lifecycle of a product. Efficient growth in the role of Building Information Modeling (BIM) can benefit vastly from unifying solutions to acquire, manage and make use of information and processes from various project and enterprise level systems, selectively adapting functionality from PLM systems. However, there are important differences between PLM’s target industries and the Architecture, Engineering, and Construction (AEC) industry characteristics that require modification and tailoring of some aspects of current PLM technology. In this study we examine the fundamental PLM functionalities that create synergy with the BIM-enabled AEC industry. We propose a conceptual model for the information flow and integration between BIM and PLM systems. Finally, we explore the differences between the AEC industry and traditional scope of service for PLM solutions.",
"title": ""
},
{
"docid": "1cf458cdec768fc802fb0d5754cdfac2",
"text": "The pervasive adoption of traditional information and communication technologies hardware and software in industrial control systems (ICS) has given birth to a unique technological ecosystem encapsulating a variety of objects ranging from sensors and actuators to video surveillance cameras and generic PCs. Despite their invaluable advantages, these advanced ICS create new design challenges, which expose them to significant cyber threats. To address these challenges, an innovative ICS network design technique is proposed in this paper to harmonize the traditional ICS design requirements pertaining to strong architectural determinism and real-time data transfer with security recommendations outlined in the ISA-62443.03.02 standard. The proposed technique accommodates security requirements by partitioning the network into security zones and by provisioning critical communication channels, known as security conduits, between two or more security zones. The ICS network design is formulated as an integer linear programming (ILP) problem that minimizes the cost of the installation. Real-time data transfer limitations and security requirements are included as constraints imposing the selection of specific traffic paths, the selection of routing nodes, and the provisioning of security zones and conduits. The security requirements of cyber assets denoted by traffic and communication endpoints are determined by a cyber attack impact assessment technique proposed in this paper. The sensitivity of the proposed techniques to different parameters is evaluated in a first scenario involving the IEEE 14-bus model and in a second scenario involving a large network topology based on generated data. Experimental results demonstrate the efficiency and scalability of the ILP model.",
"title": ""
},
{
"docid": "a0547eae9a2186d4c6f1b8307317f061",
"text": "Leadership scholars have called for additional research on leadership skill requirements and how those requirements vary by organizational level. In this study, leadership skill requirements are conceptualized as being layered (strata) and segmented (plex), and are thus described using a strataplex. Based on previous conceptualizations, this study proposes a model made up of four categories of leadership skill requirements: Cognitive skills, Interpersonal skills, Business skills, and Strategic skills. The model is then tested in a sample of approximately 1000 junior, midlevel, and senior managers, comprising a full career track in the organization. Findings support the “plex” element of the model through the emergence of four leadership skill requirement categories. Findings also support the “strata” portion of the model in that different categories of leadership skill requirements emerge at different organizational levels, and that jobs at higher levels of the organization require higher levels of all leadership skills. In addition, although certain Cognitive skill requirements are important across organizational levels, certain Strategic skill requirements only fully emerge at the highest levels in the organization. Thus a strataplex proved to be a valuable tool for conceptualizing leadership skill requirements across organizational levels. © 2007 Elsevier Inc. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
1ed35fbb013db589de213adafa3f5917
|
Self-Supervised Neural Aggregation Networks for Human Parsing
|
[
{
"docid": "a8c1224f291df5aeb655a2883b16bcfb",
"text": "We present a scalable approach to automatically suggest relevant clothing products, given a single image without metadata. We formulate the problem as cross-scenario retrieval: the query is a real-world image, while the products from online shopping catalogs are usually presented in a clean environment. We divide our approach into two main stages: a) Starting from articulated pose estimation, we segment the person area and cluster promising image regions in order to detect the clothing classes present in the query image. b) We use image retrieval techniques to retrieve visually similar products from each of the detected classes. We achieve clothing detection performance comparable to the state-of-the-art on a very recent annotated dataset, while being more than 50 times faster. Finally, we present a large scale clothing suggestion scenario, where the product database contains over one million products.",
"title": ""
}
] |
[
{
"docid": "cc52bb9210f400a42b0b8374dde374ab",
"text": "It is always well believed that modeling relationships between objects would be helpful for representing and eventually describing an image. Nevertheless, there has not been evidence in support of the idea on image description generation. In this paper, we introduce a new design to explore the connections between objects for image captioning under the umbrella of attention-based encoder-decoder framework. Specifically, we present Graph Convolutional Networks plus Long Short-Term Memory (dubbed as GCN-LSTM) architecture that novelly integrates both semantic and spatial object relationships into image encoder. Technically, we build graphs over the detected objects in an image based on their spatial and semantic connections. The representations of each region proposed on objects are then refined by leveraging graph structure through GCN. With the learnt region-level features, our GCN-LSTM capitalizes on LSTM-based captioning framework with attention mechanism for sentence generation. Extensive experiments are conducted on COCO image captioning dataset, and superior results are reported when comparing to state-of-the-art approaches. More remarkably, GCN-LSTM increases CIDEr-D performance from 120.1% to 128.7% on COCO testing set.",
"title": ""
},
{
"docid": "6c730f32b02ca58f66e98f9fc5181484",
"text": "When analyzing a visualized network, users need to explore different sections of the network to gain insight. However, effective exploration of large networks is often a challenge. While various tools are available for users to explore the global and local features of a network, these tools usually require significant interaction activities, such as repetitive navigation actions to follow network nodes and edges. In this paper, we propose a structure-based suggestive exploration approach to support effective exploration of large networks by suggesting appropriate structures upon user request. Encoding nodes with vectorized representations by transforming information of surrounding structures of nodes into a high dimensional space, our approach can identify similar structures within a large network, enable user interaction with multiple similar structures simultaneously, and guide the exploration of unexplored structures. We develop a web-based visual exploration system to incorporate this suggestive exploration approach and compare performances of our approach under different vectorizing methods and networks. We also present the usability and effectiveness of our approach through a controlled user study with two datasets.",
"title": ""
},
{
"docid": "dd4edd271de8483fc3ce25f16763ffd1",
"text": "Computer vision is a rapidly evolving discipline. It includes methods for acquiring, processing, and understanding still images and video to model, replicate, and sometimes, exceed human vision and perform useful tasks.\n Computer vision will be commonly used for a broad range of services in upcoming devices, and implemented in everything from movies, smartphones, cameras, drones and more. Demand for CV is driving the evolution of image sensors, mobile processors, operating systems, application software, and device form factors in order to meet the needs of upcoming applications and services that benefit from computer vision. The resulting impetus means rapid advancements in:\n • visual computing performance\n • object recognition effectiveness\n • speed and responsiveness\n • power efficiency\n • video image quality improvement\n • real-time 3D reconstruction\n • pre-scanning for movie animation\n • image stabilization\n • immersive experiences\n • and more...\n Comprised of innovation leaders of computer vision, this panel will cover recent developments, as well as how CV will be enabled and used in 2016 and beyond.",
"title": ""
},
{
"docid": "913b84b1afc5f34eb107d9717529bf53",
"text": "With the rapid development of the peer-to-peer lending industry in China, it has been a crucial task to evaluate the default risk of each loan. Motivated by the research in natural language processing, we make use of the online operation behavior data of borrowers and propose a consumer credit scoring method based on attention mechanism LSTM, which is a novel application of deep learning algorithm. Inspired by the idea of Word2vec, we treat each type of event as a word, construct the Event2vec model to convert each type of event transformation into a vector and, then, use an attention mechanism LSTM network to predict the probability of user default. The method is evaluated on the real dataset, and the results show that the proposed solution can effectively increase the predictive accuracy compared with the traditional artificial feature extraction method and the standard LSTM model.",
"title": ""
},
{
"docid": "8d432d8fd4a6d0f368a608ebca5d67d7",
"text": "The origin and continuation of mankind is based on water. Water is one of the most abundant resources on earth, covering three-fourths of the planet’s surface. However, about 97% of the earth’s water is salt water in the oceans, and a tiny 3% is fresh water. This small percentage of the earth’s water—which supplies most of human and animal needs—exists in ground water, lakes and rivers. The only nearly inexhaustible sources of water are the oceans, which, however, are of high salinity. It would be feasible to address the water-shortage problem with seawater desalination; however, the separation of salts from seawater requires large amounts of energy which, when produced from fossil fuels, can cause harm to the environment. Therefore, there is a need to employ environmentally-friendly energy sources in order to desalinate seawater. After a historical introduction into desalination, this paper covers a large variety of systems used to convert seawater into fresh water suitable for human use. It also covers a variety of systems, which can be used to harness renewable energy sources; these include solar collectors, photovoltaics, solar ponds and geothermal energy. Both direct and indirect collection systems are included. The representative example of direct collection systems is the solar still. Indirect collection systems employ two subsystems; one for the collection of renewable energy and one for desalination. For this purpose, standard renewable energy and desalination systems are most often employed. Only industrially-tested desalination systems are included in this paper and they comprise the phase change processes, which include the multistage flash, multiple effect boiling and vapour compression and membrane processes, which include reverse osmosis and electrodialysis. The paper also includes a review of various systems that use renewable energy sources for desalination. Finally, some general guidelines are given for selection of desalination and renewable energy systems and the parameters that need to be considered. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e75735444e837fb888f24ad204555440",
"text": "In the recent years, people need to use Internet at anytime and anywhere. Internet of Things (IOT) allows people and things to be connected Anytime, Anyplace, with Anything and Anyone, ideally using Any path/network and Any service. IOT can be distinguished by various technologies, which provide the creative services in different application domains. This implies that there are various challenges present while deploying IOT. The traditional security services are not directly applied on IOT due to different communication stacks and various standards. So flexible security mechanisms are need to be invented, which deal with the security threats in such dynamic environment of IOT. In this survey we present the various research challenges with their respective solutions. Also, some open issues are discovered and some hints for further research direction are advocated. Keywords— Internet-of-Things; Sensor Networks; Smart objects; Sensors; Actuators; ubiquitous; Security",
"title": ""
},
{
"docid": "c1d5f28d264756303fded5faa65587a2",
"text": "English vocabulary learning and ubiquitous learning have separately received considerable attention in recent years. However, research on English vocabulary learning in ubiquitous learning contexts has been less studied. In this study, we develop a ubiquitous English vocabulary learning (UEVL) system to assist students in experiencing a systematic vocabulary learning process in which ubiquitous technology is used to develop the system, and video clips are used as the material. Afterward, the technology acceptance model and partial least squares approach are used to explore students’ perspectives on the UEVL system. The results indicate that (1) both the system characteristics and the material characteristics of the UEVL system positively and significantly influence the perspectives of all students on the system; (2) the active students are interested in perceived usefulness; (3) the passive students are interested in perceived ease of use. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "93d4e6aba0ef5c17bb751ff93f0d3848",
"text": "In this work we propose a new SIW structure, called the corrugated SIW (CSIW), which does not require conducting vias to achieve TE10 type boundary conditions at the side walls. Instead, the vias are replaced by quarter wavelength microstrip stubs arranged in a corrugated pattern on the edges of the waveguide. This, along with series interdigitated capacitors, results in a waveguide section comprising two separate conductors, which facilitates shunt connection of active components such as Gunn diodes.",
"title": ""
},
{
"docid": "a64ef7969005d186e004c0d9d340567c",
"text": "The Mirai botnet and its variants and imitators are a wake-up call to the industry to better secure Internet of Things devices or risk exposing the Internet infrastructure to increasingly disruptive distributed denial-of-service attacks.",
"title": ""
},
{
"docid": "3d7e5b1c887ab15c936e3a0ea96e9bf4",
"text": "Most previous studies on visual saliency have only focused on static or dynamic 2D scenes. Since the human visual system has evolved predominantly in natural three dimensional environments, it is important to study whether and how depth information influences visual saliency. In this work, we first collect a large human eye fixation database compiled from a pool of 600 2D-vs-3D image pairs viewed by 80 subjects, where the depth information is directly provided by the Kinect camera and the eye tracking data are captured in both 2D and 3D free-viewing experiments. We then analyze the major discrepancies between 2D and 3D human fixation data of the same scenes, which are further abstracted and modeled as novel depth priors. Finally, we evaluate the performances of state-of-the-art saliency detection models over 3D images, and propose solutions to enhance their performances by integrating the depth priors.",
"title": ""
},
{
"docid": "01c8154880e0cb8daf1107038fb4bc41",
"text": "In a new approach to large-scale extraction of facts from unstructured text, distributional similarities become an integral part of both the iterative acquisition of high-coverage contextual extraction patterns, and the validation and ranking of candidate facts. The evaluation measures the quality and coverage of facts extracted from one hundred million Web documents, starting from ten seed facts and using no additional knowledge, lexicons or complex tools.",
"title": ""
},
{
"docid": "9e3562c5d4baf6be3293486383e62b3e",
"text": "Many philosophical and contemplative traditions teach that \"living in the moment\" increases happiness. However, the default mode of humans appears to be that of mind-wandering, which correlates with unhappiness, and with activation in a network of brain areas associated with self-referential processing. We investigated brain activity in experienced meditators and matched meditation-naive controls as they performed several different meditations (Concentration, Loving-Kindness, Choiceless Awareness). We found that the main nodes of the default-mode network (medial prefrontal and posterior cingulate cortices) were relatively deactivated in experienced meditators across all meditation types. Furthermore, functional connectivity analysis revealed stronger coupling in experienced meditators between the posterior cingulate, dorsal anterior cingulate, and dorsolateral prefrontal cortices (regions previously implicated in self-monitoring and cognitive control), both at baseline and during meditation. Our findings demonstrate differences in the default-mode network that are consistent with decreased mind-wandering. As such, these provide a unique understanding of possible neural mechanisms of meditation.",
"title": ""
},
{
"docid": "7918167cbceddcc24b4d22f094b167dd",
"text": "This paper is presented the study of the social influence by using social features in fitness mobile applications and habit that persuades the working-aged people, in the context of continuous fitness mobile application usage to promote the physical activity. Our conceptual model consisted of Habit and Social Influence. The social features based on the Persuasive Technology (1) Normative Influence, (2) Social Comparison, (3) Competition, (4) Co-operation, and (5) Social Recognition were embedded in the Social Influence construct of UTAUT2 model. The questionnaires were an instrument for this study. The target group was 443 working-aged people who live in Thailand's central region. The results reveal that the factors significantly affecting Behavioral Intention toward Use Behavior are Normative Influence, Social Comparison, Competition, and Co-operation. Only the Social Recognition is insignificantly affecting Behavioral Intention to use fitness mobile applications. The Behavioral Intention and Habit also significantly support the Use Behavior. The social features in fitness mobile application should be developed to promote the physical activity.",
"title": ""
},
{
"docid": "62a8548527371acb657d9552ab41d699",
"text": "This paper proposes a novel dynamic gait of locomotion for hexapedal robots which enables them to crawl forward, backward, and rotate using a single actuator. The gait exploits the compliance difference between the two sides of the tripods, to generate clockwise or counter clockwise rotation by controlling the acceleration of the robot. The direction of turning depends on the configuration of the legs -tripod left of right- and the direction of the acceleration. Alternating acceleration in successive steps allows for continuous rotation in the desired direction. An analysis of the locomotion is presented as a function of the mechanical properties of the robot and the contact with the surface. A numerical simulation was performed for various conditions of locomotion. The results of the simulation and analysis were compared and found to be in excellent match.",
"title": ""
},
{
"docid": "21139973d721956c2f30e07ed1ccf404",
"text": "Representing words into vectors in continuous space can form up a potentially powerful basis to generate high-quality textual features for many text mining and natural language processing tasks. Some recent efforts, such as the skip-gram model, have attempted to learn word representations that can capture both syntactic and semantic information among text corpus. However, they still lack the capability of encoding the properties of words and the complex relationships among words very well, since text itself often contains incomplete and ambiguous information. Fortunately, knowledge graphs provide a golden mine for enhancing the quality of learned word representations. In particular, a knowledge graph, usually composed by entities (words, phrases, etc.), relations between entities, and some corresponding meta information, can supply invaluable relational knowledge that encodes the relationship between entities as well as categorical knowledge that encodes the attributes or properties of entities. Hence, in this paper, we introduce a novel framework called RC-NET to leverage both the relational and categorical knowledge to produce word representations of higher quality. Specifically, we build the relational knowledge and the categorical knowledge into two separate regularization functions, and combine both of them with the original objective function of the skip-gram model. By solving this combined optimization problem using back propagation neural networks, we can obtain word representations enhanced by the knowledge graph. Experiments on popular text mining and natural language processing tasks, including analogical reasoning, word similarity, and topic prediction, have all demonstrated that our model can significantly improve the quality of word representations.",
"title": ""
},
{
"docid": "cd64fdc5cee4d603e6e7335e8d9c4956",
"text": "An integrated triple-band GSM antenna switch module, fabricated in RF CMOS on a sapphire substrate, is presented in this paper. The low cost and compact size requirements in wireless and mobile communication systems motivate the continuing integration of the analog portions of the design. The antenna switch die incorporates a FET switch, transmit path filters, and all bias and control circuitry on the same substrate using a 0.5 /spl mu/m CMOS process. A revised version of the die is also proposed, which makes use of an additional copper interconnect layer to reduce die area.",
"title": ""
},
{
"docid": "05703b87121e50d71654254342b97f9d",
"text": "\"Telepresence\" is an interesting field that includes virtual reality implementations with human-system interfaces, communication technologies, and robotics. This paper describes the development of a telepresence robot called Telepresence Robot for Interpersonal Communication (TRIC) for the purpose of interpersonal communication with the elderly in a home environment. The main aim behind TRIC's development is to allow elderly populations to remain in their home environments, while loved ones and caregivers are able to maintain a higher level of communication and monitoring than via traditional methods. TRIC aims to be a low-cost, lightweight robot, which can be easily implemented in the home environment. Under this goal, decisions on the design elements included are discussed. In particular, the implementation of key autonomous behaviors in TRIC to increase the user's capability of projection of self and operation of the telepresence robot, in addition to increasing the interactive capability of the participant as a dialogist are emphasized. The technical development and integration of the modules in TRIC, as well as human factors considerations are then described. Preliminary functional tests show that new users were able to effectively navigate TRIC and easily locate visual targets. Finally the future developments of TRIC, especially the possibility of using TRIC for home tele-health monitoring and tele-homecare visits are discussed.",
"title": ""
}
] |
scidocsrr
|
c60fb0a942c51ee8af163e87d5cd7965
|
"Breaking" Disasters: Predicting and Characterizing the Global News Value of Natural and Man-made Disasters
|
[
{
"docid": "2116414a3e7996d4701b9003a6ccfd15",
"text": "Informal genres such as tweets provide large quantities of data in real time, which can be exploited to obtain, through ranking and classification, a succinct summary of the events that occurred. Previous work on tweet ranking and classification mainly focused on salience and social network features or rely on web documents such as online news articles. In this paper, we exploit language independent journalism and content based features to identify news from tweets. We propose a novel newsworthiness classifier trained through active learning and investigate human assessment and automatic methods to encode it on both the tweet and trending topic levels. Our findings show that content and journalism based features proved to be effective for ranking and classifying content on Twitter.",
"title": ""
},
{
"docid": "1274ab286b1e3c5701ebb73adc77109f",
"text": "In this paper, we propose the first real time rumor debunking algorithm for Twitter. We use cues from 'wisdom of the crowds', that is, the aggregate 'common sense' and investigative journalism of Twitter users. We concentrate on identification of a rumor as an event that may comprise of one or more conflicting microblogs. We continue monitoring the rumor event and generate real time updates dynamically based on any additional information received. We show using real streaming data that it is possible, using our approach, to debunk rumors accurately and efficiently, often much faster than manual verification by professionals.",
"title": ""
}
] |
[
{
"docid": "e9a66ce7077baf347d325bca7b008d6b",
"text": "Recent research have shown that the Wavelet Transform (WT) can potentially be used to extract Partial Discharge (PD) signals from severe noise like White noise, Random noise and Discrete Spectral Interferences (DSI). It is important to define that noise is a significant problem in PD detection. Accordingly, the paper mainly deals with denoising of PD signals, based on improved WT techniques namely Translation Invariant Wavelet Transform (TIWT). The improved WT method is distinct from other traditional method called as Fast Fourier Transform (FFT). The TIWT not only remain the edge of the original signal efficiently but also reduce impulsive noise to some extent. Additionally Translation Invariant (TI) Wavelet Transform denoising is used to suppress Pseudo Gibbs phenomenon. In this paper an attempt has been made to review the methodology of denoising the partial discharge signals and shows that the proposed denoising method results are better when compared to other wavelet-based approaches like FFT, wavelet hard thresholding, wavelet soft thresholding, by evaluating five different parameters like, Signal to noise ratio, Cross correlation coefficient, Pulse amplitude distortion, Mean square error, Reduction in noise level.",
"title": ""
},
{
"docid": "bacb761bc173a07bf13558e2e5419c2b",
"text": "Rejection sensitivity is the disposition to anxiously expect, readily perceive, and intensely react to rejection. In response to perceived social exclusion, highly rejection sensitive people react with increased hostile feelings toward others and are more likely to show reactive aggression than less rejection sensitive people in the same situation. This paper summarizes work on rejection sensitivity that has provided evidence for the link between anxious expectations of rejection and hostility after rejection. We review evidence that rejection sensitivity functions as a defensive motivational system. Thus, we link rejection sensitivity to attentional and perceptual processes that underlie the processing of social information. A range of experimental and diary studies shows that perceiving rejection triggers hostility and aggressive behavior in rejection sensitive people. We review studies that show that this hostility and reactive aggression can perpetuate a vicious cycle by eliciting rejection from those who rejection sensitive people value most. Finally, we summarize recent work suggesting that this cycle can be interrupted with generalized self-regulatory skills and the experience of positive, supportive relationships.",
"title": ""
},
{
"docid": "6bfc3d00fe6e9fcdb09ad8993b733dfd",
"text": "This article presents the upper-torso design issue of Affeto who can physically interact with humans, which biases the perception of affinity beyond the uncanny valley effect. First, we review the effect and hypothesize that the experience of physical interaction with Affetto decreases the effect. Then, the reality of physical existence is argued with existing platforms. Next, the design concept and a very preliminary experiment are shown. Finally, future issues are given. I. THE UNCANNY VALLEY REVISITED The term “Uncanny” is a translation of Freud’s term “Der Unheimliche” and applied to a phenomenon noted by Masahiro Mori who mentioned that the presence of movement steepens the slopes of the uncanny valley (Figure 2 in [1]). Several studies on this effect can be summarised as follows1. 1) Multimodal impressions such as visual appearance, body motion, sounds (speech and others), and tactile sensation should be congruent to decrease the valley steepness. 2) Antipathetic expressions may exaggerate the valley effect. The current technologies enable us to minimize the gap caused by mismatch among cross-modal factors. Therefore, the valley effect is expected to be reduced gradually. For example, facial expressions and tactile sensations of Affetto [2] are realistic and congruent due to baby-like face skin mask of urethane elastomer gel (See Figure 1). Generated facial expressions almost conquered the uncanny valley. Further, baby-like facial expressions may contribute to the reduction of the valley effect due to 2). In addition to these, we suppose that the motor experience of physical interactions with robots biases the perception of affinity as motor experiences biases the perception of movements [3]. To verify this hypothesis, Affetto needs its body which realizes physical interactions naturally. The rest of this article is organized as follows. The next section argues about the reality of physical existence with existing platforms. Then, the design concept and a very preliminary experiment are shown, and the future issues are given.",
"title": ""
},
{
"docid": "5527521d567290192ea26faeb6e7908c",
"text": "With the rapid development of spectral imaging techniques, classification of hyperspectral images (HSIs) has attracted great attention in various applications such as land survey and resource monitoring in the field of remote sensing. A key challenge in HSI classification is how to explore effective approaches to fully use the spatial–spectral information provided by the data cube. Multiple kernel learning (MKL) has been successfully applied to HSI classification due to its capacity to handle heterogeneous fusion of both spectral and spatial features. This approach can generate an adaptive kernel as an optimally weighted sum of a few fixed kernels to model a nonlinear data structure. In this way, the difficulty of kernel selection and the limitation of a fixed kernel can be alleviated. Various MKL algorithms have been developed in recent years, such as the general MKL, the subspace MKL, the nonlinear MKL, the sparse MKL, and the ensemble MKL. The goal of this paper is to provide a systematic review of MKL methods, which have been applied to HSI classification. We also analyze and evaluate different MKL algorithms and their respective characteristics in different cases of HSI classification cases. Finally, we discuss the future direction and trends of research in this area.",
"title": ""
},
{
"docid": "8c34f43e7d3f760173257fbbc58c22ca",
"text": "High voltage pulse generators can be used effectively in water treatment applications, as applying a pulsed electric field on the infected sample guarantees killing of harmful germs and bacteria. In this paper, a new high voltage pulse generator with closed loop control on its output voltage is proposed. The proposed generator is based on DC-to-DC boost converter in conjunction with capacitor-diode voltage multiplier (CDVM), and can be fed from low-voltage low-frequency AC supply, i.e. utility mains. The proposed topology provides transformer-less operation which reduces size and enhances the overall efficiency. A Detailed design of the proposed pulse generator has been presented as well. The proposed approach is validated by simulation as well as experimental results.",
"title": ""
},
{
"docid": "9b2291ef3e605d85b6d0dba326aa10ef",
"text": "We propose a multi-objective method for avoiding premature convergence in evolutionary algorithms, and demonstrate a three-fold performance improvement over comparable methods. Previous research has shown that partitioning an evolving population into age groups can greatly improve the ability to identify global optima and avoid converging to local optima. Here, we propose that treating age as an explicit optimization criterion can increase performance even further, with fewer algorithm implementation parameters. The proposed method evolves a population on the two-dimensional Pareto front comprising (a) how long the genotype has been in the population (age); and (b) its performance (fitness). We compare this approach with previous approaches on the Symbolic Regression problem, sweeping the problem difficulty over a range of solution complexities and number of variables. Our results indicate that the multi-objective approach identifies the exact target solution more often that the age-layered population and standard population methods. The multi-objective method also performs better on higher complexity problems and higher dimensional datasets -- finding global optima with less computational effort.",
"title": ""
},
{
"docid": "a57b2e8b24cced6f8bfad942dd530499",
"text": "With the tremendous growth of network-based services and sensitive information on networks, network security is getting more and more importance than ever. Intrusion poses a serious security risk in a network environment. The ever growing new intrusion types posses a serious problem for their detection. The human labelling of the available network audit data instances is usually tedious, time consuming and expensive. In this paper, we apply one of the efficient data mining algorithms called naïve bayes for anomaly based network intrusion detection. Experimental results on the KDD cup’99 data set show the novelty of our approach in detecting network intrusion. It is observed that the proposed technique performs better in terms of false positive rate, cost, and computational time when applied to KDD’99 data sets compared to a back propagation neural network based approach.",
"title": ""
},
{
"docid": "72c0cef98023dd5b6c78e9c347798545",
"text": "Several works have shown that Convolutional Neural Networks (CNNs) can be easily adapted to different datasets and tasks. However, for extracting the deep features from these pre-trained deep CNNs a fixedsize (e.g., 227×227) input image is mandatory. Now the state-of-the-art datasets like MIT-67 and SUN-397 come with images of different sizes. Usage of CNNs for these datasets enforces the user to bring different sized images to a fixed size either by reducing or enlarging the images. The curiosity is obvious that “Isn’t the conversion to fixed size image is lossy ?”. In this work, we provide a mechanism to keep these lossy fixed size images aloof and process the images in its original form to get set of varying size deep feature maps, hence being lossless. We also propose deep spatial pyramid match kernel (DSPMK) which amalgamates set of varying size deep feature maps and computes a matching score between the samples. Proposed DSPMK act as a dynamic kernel in the classification framework of scene dataset using support vector machine. We demonstrated the effectiveness of combining the power of varying size CNN-based set of deep feature maps with dynamic kernel by achieving state-of-the-art results for high-level visual recognition tasks such as scene classification on standard datasets like MIT67 and SUN397.",
"title": ""
},
{
"docid": "5da804fa4c1474e27a1c91fcf5682e20",
"text": "We present an overview of Candide, a system for automatic translat ion of French text to English text. Candide uses methods of information theory and statistics to develop a probabili ty model of the translation process. This model, which is made to accord as closely as possible with a large body of French and English sentence pairs, is then used to generate English translations of previously unseen French sentences. This paper provides a tutorial in these methods, discussions of the training and operation of the system, and a summary of test results. 1. I n t r o d u c t i o n Candide is an experimental computer program, now in its fifth year of development at IBM, for translation of French text to Enghsh text. Our goal is to perform fuRy-automatic, high-quality text totext translation. However, because we are still far from achieving this goal, the program can be used in both fully-automatic and translator 's-assistant modes. Our approach is founded upon the statistical analysis of language. Our chief tools axe the source-channel model of communication, parametric probabili ty models of language and translation, and an assortment of numerical algorithms for training such models from examples. This paper presents elementary expositions of each of these ideas, and explains how they have been assembled to produce Caadide. In Section 2 we introduce the necessary ideas from information theory and statistics. The reader is assumed to know elementary probabili ty theory at the level of [1]. In Sections 3 and 4 we discuss our language and translation models. In Section 5 we describe the operation of Candide as it translates a French document. In Section 6 we present results of our internal evaluations and the AB.PA Machine Translation Project evaluations. Section 7 is a summary and conclusion. 2 . Stat is t ical Trans la t ion Consider the problem of translating French text to English text. Given a French sentence f , we imagine that it was originally rendered as an equivalent Enghsh sentence e. To obtain the French, the Enghsh was t ransmit ted over a noisy communication channel, which has the curious property that English sentences sent into it emerge as their French translations. The central assumption of Candide's design is that the characteristics of this channel can be determined experimentally, and expressed mathematically. *Current address: Renaissance Technologies, Stony Brook, NY ~ English-to-French I f e Channel \" _[ French-to-English -] Decoder 6 Figure 1: The Source-Channel Formalism of Translation. Here f is the French text to be translated, e is the putat ive original English rendering, and 6 is the English translation. This formalism can be exploited to yield French-to-English translations as follows. Let us write P r (e I f ) for the probability that e was the original English rendering of the French f. Given a French sentence f, the problem of automatic translation reduces to finding the English sentence tha t maximizes P.r(e I f) . That is, we seek 6 = argmsx e Pr (e I f) . By virtue of Bayes' Theorem, we have = argmax Pr(e If ) = argmax Pr(f I e)Pr(e) (1) e e The term P r ( f l e ) models the probabili ty that f emerges from the channel when e is its input. We call this function the translation model; its domain is all pairs (f, e) of French and English word-strings. The term Pr (e ) models the a priori probability that e was supp led as the channel input. We call this function the language model. 
Each of these fac tors the translation model and the language model independent ly produces a score for a candidate English translat ion e. The translation model ensures that the words of e express the ideas of f, and the language model ensures that e is a grammatical sentence. Candide sehcts as its translat ion the e that maximizes their product. This discussion begs two impor tant questions. First , where do the models P r ( f [ e) and Pr (e ) come from? Second, even if we can get our hands on them, how can we search the set of all English strings to find 6? These questions are addressed in the next two sections. 2.1. P robab i l i ty Models We begin with a brief detour into probabili ty theory. A probability model is a mathematical formula that purports to express the chance of some observation. A parametric model is a probability model with adjustable parameters, which can be changed to make the model bet ter match some body of data. Let us write c for a body of da ta to be modeled, and 0 for a vector of parameters. The quanti ty Prs (c ) , computed according to some formula involving c and 0, is called the hkelihood 157 [Human Language Technology, Plainsboro, 1994]",
"title": ""
},
{
"docid": "5c0f2bcde310b7b76ed2ca282fde9276",
"text": "With the increasing prevalence of Alzheimer's disease, research focuses on the early computer-aided diagnosis of dementia with the goal to understand the disease process, determine risk and preserving factors, and explore preventive therapies. By now, large amounts of data from multi-site studies have been made available for developing, training, and evaluating automated classifiers. Yet, their translation to the clinic remains challenging, in part due to their limited generalizability across different datasets. In this work, we describe a compact classification approach that mitigates overfitting by regularizing the multinomial regression with the mixed ℓ1/ℓ2 norm. We combine volume, thickness, and anatomical shape features from MRI scans to characterize neuroanatomy for the three-class classification of Alzheimer's disease, mild cognitive impairment and healthy controls. We demonstrate high classification accuracy via independent evaluation within the scope of the CADDementia challenge. We, furthermore, demonstrate that variations between source and target datasets can substantially influence classification accuracy. The main contribution of this work addresses this problem by proposing an approach for supervised domain adaptation based on instance weighting. Integration of this method into our classifier allows us to assess different strategies for domain adaptation. Our results demonstrate (i) that training on only the target training set yields better results than the naïve combination (union) of source and target training sets, and (ii) that domain adaptation with instance weighting yields the best classification results, especially if only a small training component of the target dataset is available. These insights imply that successful deployment of systems for computer-aided diagnostics to the clinic depends not only on accurate classifiers that avoid overfitting, but also on a dedicated domain adaptation strategy.",
"title": ""
},
{
"docid": "c8305675ba4bb16f26abf820db4b8a38",
"text": "Microbes are dominant drivers of biogeochemical processes, yet drawing a global picture of functional diversity, microbial community structure, and their ecological determinants remains a grand challenge. We analyzed 7.2 terabases of metagenomic data from 243 Tara Oceans samples from 68 locations in epipelagic and mesopelagic waters across the globe to generate an ocean microbial reference gene catalog with >40 million nonredundant, mostly novel sequences from viruses, prokaryotes, and picoeukaryotes. Using 139 prokaryote-enriched samples, containing >35,000 species, we show vertical stratification with epipelagic community composition mostly driven by temperature rather than other environmental factors or geography. We identify ocean microbial core functionality and reveal that >73% of its abundance is shared with the human gut microbiome despite the physicochemical differences between these two ecosystems.",
"title": ""
},
{
"docid": "29236d00bde843ff06e0f1a3e0ab88e4",
"text": "■ The advent of the modern cruise missile, with reduced radar observables and the capability to fly at low altitudes with accurate navigation, placed an enormous burden on all defense weapon systems. Every element of the engagement process, referred to as the kill chain, from detection to target kill assessment, was affected. While the United States held the low-observabletechnology advantage in the late 1970s, that early lead was quickly challenged by advancements in foreign technology and proliferation of cruise missiles to unfriendly nations. Lincoln Laboratory’s response to the various offense/defense trade-offs has taken the form of two programs, the Air Vehicle Survivability Evaluation program and the Radar Surveillance Technology program. The radar developments produced by these two programs, which became national assets with many notable firsts, is the subject of this article.",
"title": ""
},
{
"docid": "5cdb981566dfd741c9211902c0c59d50",
"text": "Since parental personality traits are assumed to play a role in parenting behaviors, the current study examined the relation between parental personality and parenting style among 688 Dutch parents of adolescents in the SMILE study. The study assessed Big Five personality traits and derived parenting styles (authoritative, authoritarian, indulgent, and uninvolved) from scores on the underlying dimensions of support and strict control. Regression analyses were used to determine which personality traits were associated with parenting dimensions and styles. As regards dimensions, the two aspects of personality reflecting interpersonal interactions (extraversion and agreeableness) were related to supportiveness. Emotional stability was associated with lower strict control. As regards parenting styles, extraverted, agreeable, and less emotionally stable individuals were most likely to be authoritative parents. Conscientiousness and openness did not relate to general parenting, but might be associated with more content-specific acts of parenting.",
"title": ""
},
{
"docid": "ac1d1bf198a178cb5655768392c3d224",
"text": "-This paper discusses the two major query evaluation strategies used in large text retrieval systems and analyzes the performance of these strategies. We then discuss several optimization techniques that can be used to reduce evaluation costs and present simulation results to compare the performance of these optimization techniques when evaluating natural language queries with a collection of full text legal materials.",
"title": ""
},
{
"docid": "7167964274b05da06beddb1aef119b2c",
"text": "A great variety of systems in nature, society and technology—from the web of sexual contacts to the Internet, from the nervous system to power grids—can be modeled as graphs of vertices coupled by edges. The network structure, describing how the graph is wired, helps us understand, predict and optimize the behavior of dynamical systems. In many cases, however, the edges are not continuously active. As an example, in networks of communication via email, text messages, or phone calls, edges represent sequences of instantaneous or practically instantaneous contacts. In some cases, edges are active for non-negligible periods of time: e.g., the proximity patterns of inpatients at hospitals can be represented by a graph where an edge between two individuals is on throughout the time they are at the same ward. Like network topology, the temporal structure of edge activations can affect dynamics of systems interacting through the network, from disease contagion on the network of patients to information diffusion over an e-mail network. In this review, we present the emergent field of temporal networks, and discuss methods for analyzing topological and temporal structure and models for elucidating their relation to the behavior of dynamical systems. In the light of traditional network theory, one can see this framework as moving the information of when things happen from the dynamical system on the network, to the network itself. Since fundamental properties, such as the transitivity of edges, do not necessarily hold in temporal networks, many of these methods need to be quite different from those for static networks. The study of temporal networks is very interdisciplinary in nature. Reflecting this, even the object of study has many names—temporal graphs, evolving graphs, time-varying graphs, time-aggregated graphs, time-stamped graphs, dynamic networks, dynamic graphs, dynamical graphs, and so on. This review covers different fields where temporal graphs are considered, but does not attempt to unify related terminology—rather, we want to make papers readable across disciplines.",
"title": ""
},
{
"docid": "71576ab1edd5eadbda1f34baba91b687",
"text": "Visualization can make a wide range of mobile applications more intuitive and productive. The mobility context and technical limitations such as small screen size make it impossible to simply port visualization applications from desktop computers to mobile devices, but researchers are starting to address these challenges. From a purely technical point of view, building more sophisticated mobile visualizations become easier due to new, possibly standard, software APIs such as OpenGLES and increasingly powerful devices. Although ongoing improvements would not eliminate most device limitations or alter the mobility context, they make it easier to create and experiment with alternative approaches.",
"title": ""
},
{
"docid": "1e8f25674dc66a298c277d80dd031c20",
"text": "DeepQ Arrhythmia Database, the first generally available large-scale dataset for arrhythmia detector evaluation, contains 897 annotated single-lead ECG recordings from 299 unique patients. DeepQ includes beat-by-beat, rhythm episodes, and heartbeats fiducial points annotations. Each patient was engaged in a sequence of lying down, sitting, and walking activities during the ECG measurement and contributed three five-minute records to the database. Annotations were manually labeled by a group of certified cardiographic technicians and audited by a cardiologist at Taipei Veteran General Hospital, Taiwan. The aim of this database is in three folds. First, from the scale perspective, we build this database to be the largest representative reference set with greater number of unique patients and more variety of arrhythmic heartbeats. Second, from the diversity perspective, our database contains fully annotated ECG measures from three different activity modes and facilitates the arrhythmia classifier training for wearable ECG patches and AAMI assessment. Thirdly, from the quality point of view, it serves as a complement to the MIT-BIH Arrhythmia Database in the development and evaluation of the arrhythmia detector. The addition of this dataset can help facilitate the exhaustive studies using machine learning models and deep neural networks, and address the inter-patient variability. Further, we describe the development and annotation procedure of this database, as well as our on-going enhancement. We plan to make DeepQ database publicly available to advance medical research in developing outpatient, mobile arrhythmia detectors.",
"title": ""
},
{
"docid": "844116dc8302aac5076c95ac2218b5bd",
"text": "Virtual reality and augmented reality technology has existed in various forms for over two decades. However, high cost proved to be one of the main barriers to its adoption in education, outside of experimental studies. The creation and widespread sale of low-cost virtual reality devices using smart phones has made virtual reality technology available to the common person. This paper reviews how virtual reality and augmented reality has been used in education, discusses the advantages and disadvantages of using these technologies in the classroom, and describes how virtual reality and augmented reality technologies can be used to enhance teaching at the United States Military Academy.",
"title": ""
},
{
"docid": "243391e804c06f8a53af906b31d4b99a",
"text": "As key decisions are often made based on information contained in a database, it is important for the database to be as complete and correct as possible. For this reason, many data cleaning tools have been developed to automatically resolve inconsistencies in databases. However, data cleaning tools provide only best-effort results and usually cannot eradicate all errors that may exist in a database. Even more importantly, existing data cleaning tools do not typically address the problem of determining what information is missing from a database.\n To overcome the limitations of existing data cleaning techniques, we present QOCO, a novel query-oriented system for cleaning data with oracles. Under this framework, incorrect (resp. missing) tuples are removed from (added to) the result of a query through edits that are applied to the underlying database, where the edits are derived by interacting with domain experts which we model as oracle crowds. We show that the problem of determining minimal interactions with oracle crowds to derive database edits for removing (adding) incorrect (missing) tuples to the result of a query is NP-hard in general and present heuristic algorithms that interact with oracle crowds. Finally, we implement our algorithms in our prototype system QOCO and show that it is effective and efficient through a comprehensive suite of experiments.",
"title": ""
},
{
"docid": "9c8648843bfc33f6c66845cd63df94d0",
"text": "BACKGROUND\nThe safety and short-term benefits of laparoscopic colectomy for cancer remain debatable. The multicentre COLOR (COlon cancer Laparoscopic or Open Resection) trial was done to assess the safety and benefit of laparoscopic resection compared with open resection for curative treatment of patients with cancer of the right or left colon.\n\n\nMETHODS\n627 patients were randomly assigned to laparoscopic surgery and 621 patients to open surgery. The primary endpoint was cancer-free survival 3 years after surgery. Secondary outcomes were short-term morbidity and mortality, number of positive resection margins, local recurrence, port-site or wound-site recurrence, metastasis, overall survival, and blood loss during surgery. Analysis was by intention to treat. Here, clinical characteristics, operative findings, and postoperative outcome are reported.\n\n\nFINDINGS\nPatients assigned laparoscopic resection had less blood loss compared with those assigned open resection (median 100 mL [range 0-2700] vs 175 mL [0-2000], p<0.0001), although laparoscopic surgery lasted 30 min longer than did open surgery (p<0.0001). Conversion to open surgery was needed for 91 (17%) patients undergoing the laparoscopic procedure. Radicality of resection as assessed by number of removed lymph nodes and length of resected oral and aboral bowel did not differ between groups. Laparoscopic colectomy was associated with earlier recovery of bowel function (p<0.0001), need for fewer analgesics, and with a shorter hospital stay (p<0.0001) compared with open colectomy. Morbidity and mortality 28 days after colectomy did not differ between groups.\n\n\nINTERPRETATION\nLaparoscopic surgery can be used for safe and radical resection of cancer in the right, left, and sigmoid colon.",
"title": ""
}
] |
scidocsrr
|
31c0c7d30d38abd5a1719505df584dc3
|
SEC-TOE Framework: Exploring Security Determinants in Big Data Solutions Adoption
|
[
{
"docid": "03d5c8627ec09e4332edfa6842b6fe44",
"text": "In the same way businesses use big data to pursue profits, governments use it to promote the public good.",
"title": ""
}
] |
[
{
"docid": "022460b5f9cd5460f4213794455dedd0",
"text": "The meniscus was once considered a functionless remnant of muscle that should be removed in its entirety at any sign of abnormality. Its role in load distribution, knee stability, and arthritis prevention has since been well established. The medial and lateral menisci are now considered vital structures in the healthy knee. Advancements in surgical techniques and biologic augmentation methods have expanded the indications for meniscal repair, with documented healing in tears previously deemed unsalvageable. In this article, we review the anatomy and function of the meniscus, evaluate the implications of meniscectomy, and assess the techniques of, and outcomes following, meniscal repair.",
"title": ""
},
{
"docid": "37f157cdcd27c1647548356a5194f2bc",
"text": "Purpose – The aim of this paper is to propose a novel evaluation framework to explore the “root causes” that hinder the acceptance of using internal cloud services in a university. Design/methodology/approach – The proposed evaluation framework incorporates the duo-theme DEMATEL (decision making trial and evaluation laboratory) with TAM (technology acceptance model). The operational procedures were proposed and tested on a university during the post-implementation phase after introducing the internal cloud services. Findings – According to the results, clear understanding and operational ease under the theme perceived ease of use (PEOU) are more imperative; whereas improved usefulness and productivity under the theme perceived usefulness (PU) are more urgent to foster the usage of internal clouds in the case university. Research limitations/implications – Based on the findings, some intervention activities were suggested to enhance the level of users’ acceptance of internal cloud solutions in the case university. However, the results should not be generalized to apply to other educational establishments. Practical implications – To reduce the resistance from using internal clouds, some necessary intervention activities such as developing attractive training programs, creating interesting workshops, and rewriting user friendly manual or handbook are recommended. Originality/value – The novel two-theme DEMATEL has greatly contributed to the conventional one-theme DEMATEL theory. The proposed two-theme DEMATEL procedures were the first attempt to evaluate the acceptance of using internal clouds in university. The results have provided manifest root-causes under two distinct themes, which help derive effectual intervention activities to foster the acceptance of usage of internal clouds in a university.",
"title": ""
},
{
"docid": "1afc103a3878d859ec15929433f49077",
"text": "Large-scale deep neural networks (DNNs) are both compute and memory intensive. As the size of DNNs continues to grow, it is critical to improve the energy efficiency and performance while maintaining accuracy. For DNNs, the model size is an important factor affecting performance, scalability and energy efficiency. Weight pruning achieves good compression ratios but suffers from three drawbacks: 1) the irregular network structure after pruning, which affects performance and throughput; 2) the increased training complexity; and 3) the lack of rigirous guarantee of compression ratio and inference accuracy.\n To overcome these limitations, this paper proposes CirCNN, a principled approach to represent weights and process neural networks using block-circulant matrices. CirCNN utilizes the Fast Fourier Transform (FFT)-based fast multiplication, simultaneously reducing the computational complexity (both in inference and training) from O(n2) to O(n log n) and the storage complexity from O(n2) to O(n), with negligible accuracy loss. Compared to other approaches, CirCNN is distinct due to its mathematical rigor: the DNNs based on CirCNN can converge to the same \"effectiveness\" as DNNs without compression. We propose the CirCNN architecture, a universal DNN inference engine that can be implemented in various hardware/software platforms with configurable network architecture (e.g., layer type, size, scales, etc.). In CirCNN architecture: 1) Due to the recursive property, FFT can be used as the key computing kernel, which ensures universal and small-footprint implementations. 2) The compressed but regular network structure avoids the pitfalls of the network pruning and facilitates high performance and throughput with highly pipelined and parallel design. To demonstrate the performance and energy efficiency, we test CirCNN in FPGA, ASIC and embedded processors. Our results show that CirCNN architecture achieves very high energy efficiency and performance with a small hardware footprint. Based on the FPGA implementation and ASIC synthesis results, CirCNN achieves 6 - 102X energy efficiency improvements compared with the best state-of-the-art results.",
"title": ""
},
{
"docid": "81aa85ced7f0d83e28b0a2616bce6aae",
"text": "Delaunay refinement is a technique for generating unstructured meshes of triangles for use in interpolation, the finite element method, and the finite volume method. In theory and practice, meshes produced by Delaunay refinement satisfy guaranteed bounds on angles, edge lengths, the number of triangles, and the grading of triangles from small to large sizes. This article presents an intuitive framework for analyzing Delaunay refinement algorithms that unifies the pioneering mesh generation algorithms of L. Paul Chew and Jim Ruppert, improves the algorithms in several minor ways, and most importantly, helps to solve the difficult problem of meshing nonmanifold domains with small angles. Although small angles inherent in the input geometry cannot be removed, one would like to triangulate a domain without creating any new small angles. Unfortunately, this problem is not always soluble. A compromise is necessary. A Delaunay refinement algorithm is presented that can create a mesh in which most angles are or greater and no angle is smaller than \"!# , where %$'& is the smallest angle separating two segments of the input domain. New angles smaller than appear only near input angles smaller than & ( . In practice, the algorithm’s performance is better than these bounds suggest. Another new result is that Ruppert’s analysis technique can be used to reanalyze one of Chew’s algorithms. Chew proved that his algorithm produces no angle smaller than ) (barring small input angles), but without any guarantees on grading or number of triangles. He conjectures that his algorithm offers such guarantees. His conjecture is conditionally confirmed here: if the angle bound is relaxed to less than &+*-, , Chew’s algorithm produces meshes (of domains without small input angles) that are nicely graded and size-optimal.",
"title": ""
},
{
"docid": "dd1fd4f509e385ea8086a45a4379a8b5",
"text": "As we move towards large-scale object detection, it is unrealistic to expect annotated training data for all object classes at sufficient scale, and so methods capable of unseen object detection are required. We propose a novel zero-shot method based on training an end-to-end model that fuses semantic attribute prediction with visual features to propose object bounding boxes for seen and unseen classes. While we utilize semantic features during training, our method is agnostic to semantic information for unseen classes at test-time. Our method retains the efficiency and effectiveness of YOLO [1] for objects seen during training, while improving its performance for novel and unseen objects. The ability of state-of-art detection methods to learn discriminative object features to reject background proposals also limits their performance for unseen objects. We posit that, to detect unseen objects, we must incorporate semantic information into the visual domain so that the learned visual features reflect this information and leads to improved recall rates for unseen objects. We test our method on PASCAL VOC and MS COCO dataset and observed significant improvements on the average precision of unseen classes.",
"title": ""
},
{
"docid": "4d69284c25e1a9a503dd1c12fde23faa",
"text": "Human pose estimation has been actively studied for decades. While traditional approaches rely on 2d data like images or videos, the development of Time-of-Flight cameras and other depth sensors created new opportunities to advance the field. We give an overview of recent approaches that perform human motion analysis which includes depthbased and skeleton-based activity recognition, head pose estimation, facial feature detection, facial performance capture, hand pose estimation and hand gesture recognition. While the focus is on approaches using depth data, we also discuss traditional image based methods to provide a broad overview of recent developments in these areas.",
"title": ""
},
{
"docid": "68f8d261308714abd7e2655edd66d18a",
"text": "In this paper, we present a solution to Moments in Time (MIT) [1] Challenge. Current methods for trimmed video recognition often utilize inflated 3D (I3D) [2] to capture spatial-temporal features. First, we explore off-the-shelf structures like non-local [3], I3D, TRN [4] and their variants. After a plenty of experiments, we find that for MIT, a strong 2D convolution backbone following temporal relation network performs better than I3D network. We then add attention module based on TRN to learn a weight for each relation so that the model can capture the important moment better. We also design uniform sampling over videos and relation restriction policy to further enhance testing performance.",
"title": ""
},
{
"docid": "8cd701723c72b16dfe7d321cb657ee31",
"text": "A coupled-inductor double-boost inverter (CIDBI) is proposed for microinverter photovoltaic (PV) module system, and the control strategy applied to it is analyzed. Also, the operation principle of the proposed inverter is discussed and the gain from dc to ac is deduced in detail. The main attribute of the CIDBI topology is the fact that it generates an ac output voltage larger than the dc input one, depending on the instantaneous duty cycle and turns ratio of the coupled inductor as well. This paper points out that the gain is proportional to the duty cycle approximately when the duty cycle is around 0.5 and the synchronized pulsewidth modulation can be applicable to this novel inverter. Finally, the proposed inverter servers as a grid inverter in the grid-connected PV system and the experimental results show that the CIDBI can implement the single-stage PV-grid-connected power generation competently and be of small volume and high efficiency by leaving out the transformer or the additional dc-dc converter.",
"title": ""
},
{
"docid": "cf0a52fb8b55cf253f560aa8db35717a",
"text": "Big Data though it is a hype up-springing many technical challenges that confront both academic research communities and commercial IT deployment, the root sources of Big Data are founded on data streams and the curse of dimensionality. It is generally known that data which are sourced from data streams accumulate continuously making traditional batch-based model induction algorithms infeasible for real-time data mining. Feature selection has been popularly used to lighten the processing load in inducing a data mining model. However, when it comes to mining over high dimensional data the search space from which an optimal feature subset is derived grows exponentially in size, leading to an intractable demand in computation. In order to tackle this problem which is mainly based on the high-dimensionality and streaming format of data feeds in Big Data, a novel lightweight feature selection is proposed. The feature selection is designed particularly for mining streaming data on the fly, by using accelerated particle swarm optimization (APSO) type of swarm search that achieves enhanced analytical accuracy within reasonable processing time. In this paper, a collection of Big Data with exceptionally large degree of dimensionality are put under test of our new feature selection algorithm for performance evaluation.",
"title": ""
},
{
"docid": "a42ca90e38f8fcdea60df967c7ca8ecd",
"text": "DDoS defense today relies on expensive and proprietary hardware appliances deployed at fixed locations. This introduces key limitations with respect to flexibility (e.g., complex routing to get traffic to these “chokepoints”) and elasticity in handling changing attack patterns. We observe an opportunity to address these limitations using new networking paradigms such as softwaredefined networking (SDN) and network functions virtualization (NFV). Based on this observation, we design and implement Bohatei, a flexible and elastic DDoS defense system. In designing Bohatei, we address key challenges with respect to scalability, responsiveness, and adversary-resilience. We have implemented defenses for several DDoS attacks using Bohatei. Our evaluations show that Bohatei is scalable (handling 500 Gbps attacks), responsive (mitigating attacks within one minute), and resilient to dynamic adversaries.",
"title": ""
},
{
"docid": "12267eb671d0b7b12f04e8b04637f0b6",
"text": "Monopulse antennas can be used for accurate and rapid angle estimation in radar systems [1]. This paper presents a new kind of monopulse antenna base on two-dimensional elliptical lens. As an example, a patch-fed elliptical lens antenna is designed at 35 GHz. Simulations show the designed lens antenna exhibits clean and symmetrical patterns on both sum and difference ports. A very deep null is achieved in the difference pattern because of the circuit symmetry.",
"title": ""
},
{
"docid": "5eb4ba54e8f1288c8fa9222d664704b1",
"text": "Common Information Model (CIM) is widely adopted by many utilities since it offers interoperability through standard information models. Storing, processing, retrieving, and providing concurrent access of the large power network models to the various power system applications in CIM framework are the current challenges faced by utility operators. As the power network models resemble largely connected-data sets, the design of CIM oriented database has to support high-speed data retrieval of the connected-data and efficient storage for processing. The graph database is gaining wide acceptance for storing and processing of largely connected-data for various applications. This paper presents a design of CIM oriented graph database (CIMGDB) for storing and processing the largely connected-data of power system applications. Three significant advantages of the CIMGDB are efficient data retrieval and storage, agility to adapt dynamic changes in CIM profile, and greater flexibility of modeling CIM unified modeling language (UML) in GDB. The CIMGDB does not need a predefined database schema. Therefore, the CIM semantics needs to be added to the artifacts of GDB for every instance of CIM objects storage. A CIM based object-graph mapping methodology is proposed to automate the process. An integration of CIMGDB and power system applications is discussed by an implementation architecture. The data-intensive network topology processing (NTP) is implemented, and demonstrated for six IEEE test networks and one practical 400 kV Maharashtra network. Results such as computation time of executing network topology processing evaluate the performance of the CIMGDB.",
"title": ""
},
{
"docid": "99cd180d0bb08e6360328b77219919c1",
"text": "In this paper, we describe our approach to RecSys 2015 challenge problem. Given a dataset of item click sessions, the problem is to predict whether a session results in a purchase and which items are purchased if the answer is yes.\n We define a simpler analogous problem where given an item and its session, we try to predict the probability of purchase for the given item. For each session, the predictions result in a set of purchased items or often an empty set.\n We apply monthly time windows over the dataset. For each item in a session, we engineer features regarding the session, the item properties, and the time window. Then, a balanced random forest classifier is trained to perform predictions on the test set.\n The dataset is particularly challenging due to privacy-preserving definition of a session, the class imbalance problem, and the volume of data. We report our findings with respect to feature engineering, the choice of sampling schemes, and classifier ensembles. Experimental results together with benefits and shortcomings of the proposed approach are discussed. The solution is efficient and practical in commodity computers.",
"title": ""
},
{
"docid": "a6acba54f34d1d101f4abb00f4fe4675",
"text": "We study the potential flow of information in interaction networks, that is, networks in which the interactions between the nodes are being recorded. The central notion in our study is that of an information channel. An information channel is a sequence of interactions between nodes forming a path in the network which respects the time order. As such, an information channel represents a potential way information could have flown in the interaction network. We propose algorithms to estimate information channels of limited time span from every node to other nodes in the network. We present one exact and one more efficient approximate algorithm. Both algorithms are onepass algorithms. The approximation algorithm is based on an adaptation of the HyperLogLog sketch, which allows easily combining the sketches of individual nodes in order to get estimates of how many unique nodes can be reached from groups of nodes as well. We show how the results of our algorithm can be used to build efficient influence oracles for solving the Influence maximization problem which deals with finding top k seed nodes such that the information spread from these nodes is maximized. Experiments show that the use of information channels is an interesting data-driven and model-independent way to find top k influential nodes in interaction networks.",
"title": ""
},
{
"docid": "74fb666c47afc81b8e080f730e0d1fe0",
"text": "In current commercial Web search engines, queries are processed in the conjunctive mode, which requires the search engine to compute the intersection of a number of posting lists to determine the documents matching all query terms. In practice, the intersection operation takes a significant fraction of the query processing time, for some queries dominating the total query latency. Hence, efficient posting list intersection is critical for achieving short query latencies. In this work, we focus on improving the performance of posting list intersection by leveraging the compute capabilities of recent multicore systems. To this end, we consider various coarse-grained and fine-grained parallelization models for list intersection. Specifically, we present an algorithm that partitions the work associated with a given query into a number of small and independent tasks that are subsequently processed in parallel. Through a detailed empirical analysis of these alternative models, we demonstrate that exploiting parallelism at the finest-level of granularity is critical to achieve the best performance on multicore systems. On an eight-core system, the fine-grained parallelization method is able to achieve more than five times reduction in average query processing time while still exploiting the parallelism for high query throughput.",
"title": ""
},
{
"docid": "c16499b3945603d04cf88fec7a2c0a85",
"text": "Recovering structure and motion parameters given a image pair or a sequence of images is a well studied problem in computer vision. This is often achieved by employing Structure from Motion (SfM) or Simultaneous Localization and Mapping (SLAM) algorithms based on the real-time requirements. Recently, with the advent of Convolutional Neural Networks (CNNs) researchers have explored the possibility of using machine learning techniques to reconstruct the 3D structure of a scene and jointly predict the camera pose. In this work, we present a framework that achieves state-of-the-art performance on single image depth prediction for both indoor and outdoor scenes. The depth prediction system is then extended to predict optical flow and ultimately the camera pose and trained end-to-end. Our framework outperforms previous deep-learning based motion prediction approaches, and we also demonstrate that the state-of-the-art metric depths can be further improved using the knowledge of pose.",
"title": ""
},
{
"docid": "b740fd9a56701ddd8c54d92f45895069",
"text": "In vivo imaging of apoptosis in a preclinical setting in anticancer drug development could provide remarkable advantages in terms of translational medicine. So far, several imaging technologies with different probes have been used to achieve this goal. Here we describe a bioluminescence imaging approach that uses a new formulation of Z-DEVD-aminoluciferin, a caspase 3/7 substrate, to monitor in vivo apoptosis in tumor cells engineered to express luciferase. Upon apoptosis induction, Z-DEVD-aminoluciferin is cleaved by caspase 3/7 releasing aminoluciferin that is now free to react with luciferase generating measurable light. Thus, the activation of caspase 3/7 can be measured by quantifying the bioluminescent signal. Using this approach, we have been able to monitor caspase-3 activation and subsequent apoptosis induction after camptothecin and temozolomide treatment on xenograft mouse models of colon cancer and glioblastoma, respectively. Treated mice showed more than 2-fold induction of Z-DEVD-aminoluciferin luminescent signal when compared to the untreated group. Combining D-luciferin that measures the total tumor burden, with Z-DEVD-aminoluciferin that assesses apoptosis induction via caspase activation, we confirmed that it is possible to follow non-invasively tumor growth inhibition and induction of apoptosis after treatment in the same animal over time. Moreover, here we have proved that following early apoptosis induction by caspase 3 activation is a good biomarker that accurately predicts tumor growth inhibition by anti-cancer drugs in engineered colon cancer and glioblastoma cell lines and in their respective mouse xenograft models.",
"title": ""
},
{
"docid": "748ae7abfd8b1dfb3e79c94c5adace9d",
"text": "Users routinely access cloud services through third-party apps on smartphones by giving apps login credentials (i.e., a username and password). Unfortunately, users have no assurance that their apps will properly handle this sensitive information. In this paper, we describe the design and implementation of ScreenPass, which significantly improves the security of passwords on touchscreen devices. ScreenPass secures passwords by ensuring that they are entered securely, and uses taint-tracking to monitor where apps send password data. The primary technical challenge addressed by ScreenPass is guaranteeing that trusted code is always aware of when a user is entering a password. ScreenPass provides this guarantee through two techniques. First, ScreenPass includes a trusted software keyboard that encourages users to specify their passwords' domains as they are entered (i.e., to tag their passwords). Second, ScreenPass performs optical character recognition (OCR) on a device's screenbuffer to ensure that passwords are entered only through the trusted software keyboard. We have evaluated ScreenPass through experiments with a prototype implementation, two in-situ user studies, and a small app study. Our prototype detected a wide range of dynamic and static keyboard-spoofing attacks and generated zero false positives. As long as a screen is off, not updated, or not tapped, our prototype consumes zero additional energy; in the worst case, when a highly interactive app rapidly updates the screen, our prototype under a typical configuration introduces only 12% energy overhead. Participants in our user studies tagged their passwords at a high rate and reported that tagging imposed no additional burden. Finally, a study of malicious and non-malicious apps running under ScreenPass revealed several cases of password mishandling.",
"title": ""
},
{
"docid": "467f7ac9d8f52b9b82257e736910fab6",
"text": "The manual assessment of activities of daily living (ADLs) is a fundamental problem in elderly care. The use of miniature sensors placed in the environment or worn by a person has great potential in effective and unobtrusive long term monitoring and recognition of ADLs. This paper presents an effective and unobtrusive activity recognition system based on the combination of the data from two different types of sensors: RFID tag readers and accelerometers. We evaluate our algorithms on non-scripted datasets of 10 housekeeping activities performed by 12 subjects. The experimental results show that recognition accuracy can be significantly improved by fusing the two different types of sensors. We analyze different acceleration features and algorithms, and based on tag detections we suggest the best tagspsila placements and the key objects to be tagged for each activity.",
"title": ""
},
{
"docid": "af5a2ad28ab61015c0344bf2e29fe6a7",
"text": "Recent years have shown that more than ever governments and intelligence agencies try to control and bypass the cryptographic means used for the protection of data. Backdooring encryption algorithms is considered as the best way to enforce cryptographic control. Until now, only implementation backdoors (at the protocol/implementation/management level) are generally considered. In this paper we propose to address the most critical issue of backdoors: mathematical backdoors or by-design backdoors, which are put directly at the mathematical design of the encryption algorithm. While the algorithm may be totally public, proving that there is a backdoor, identifying it and exploiting it, may be an intractable problem. We intend to explain that it is probably possible to design and put such backdoors. Considering a particular family (among all the possible ones), we present BEA-1, a block cipher algorithm which is similar to the AES and which contains a mathematical backdoor enabling an operational and effective cryptanalysis. The BEA-1 algorithm (80-bit block size, 120-bit key, 11 rounds) is designed to resist to linear and differential cryptanalyses. A challenge will be proposed to the cryptography community soon. Its aim is to assess whether our backdoor is easily detectable and exploitable or not.",
"title": ""
}
] |
scidocsrr
|
a939c397511bfbee0756d0b02c14936d
|
Clustering With Side Information: From a Probabilistic Model to a Deterministic Algorithm
|
[
{
"docid": "70e6148316bd8915afd8d0908fb5ab0d",
"text": "We consider the problem of using a large unla beled sample to boost performance of a learn ing algorithm when only a small set of labeled examples is available In particular we con sider a problem setting motivated by the task of learning to classify web pages in which the description of each example can be partitioned into two distinct views For example the de scription of a web page can be partitioned into the words occurring on that page and the words occurring in hyperlinks that point to that page We assume that either view of the example would be su cient for learning if we had enough labeled data but our goal is to use both views together to allow inexpensive unlabeled data to augment a much smaller set of labeled ex amples Speci cally the presence of two dis tinct views of each example suggests strategies in which two learning algorithms are trained separately on each view and then each algo rithm s predictions on new unlabeled exam ples are used to enlarge the training set of the other Our goal in this paper is to provide a PAC style analysis for this setting and more broadly a PAC style framework for the general problem of learning from both labeled and un labeled data We also provide empirical results on real web page data indicating that this use of unlabeled examples can lead to signi cant improvement of hypotheses in practice This paper is to appear in the Proceedings of the Conference on Computational Learning Theory This research was supported in part by the DARPA HPKB program under contract F and by NSF National Young Investigator grant CCR INTRODUCTION In many machine learning settings unlabeled examples are signi cantly easier to come by than labeled ones One example of this is web page classi cation Suppose that we want a program to electronically visit some web site and download all the web pages of interest to us such as all the CS faculty member pages or all the course home pages at some university To train such a system to automatically classify web pages one would typically rely on hand labeled web pages These labeled examples are fairly expensive to obtain because they require human e ort In contrast the web has hundreds of millions of unlabeled web pages that can be inexpensively gathered using a web crawler Therefore we would like our learning algorithm to be able to take as much advantage of the unlabeled data as possible This web page learning problem has an interesting feature Each example in this domain can naturally be described using several di erent kinds of information One kind of information about a web page is the text appearing on the document itself A second kind of information is the anchor text attached to hyperlinks pointing to this page from other pages on the web The two problem characteristics mentioned above availability of both labeled and unlabeled data and the availability of two di erent kinds of information about examples suggest the following learning strat egy Using an initial small set of labeled examples nd weak predictors based on each kind of information for instance we might nd that the phrase research inter ests on a web page is a weak indicator that the page is a faculty home page and we might nd that the phrase my advisor on a link is an indicator that the page being pointed to is a faculty page Then attempt to bootstrap from these weak predictors using unlabeled data For instance we could search for pages pointed to with links having the phrase my advisor and use them as probably positive examples to further train a 
learning algorithm based on the words on the text page and vice versa We call this type of bootstrapping co training and it has a close connection to bootstrapping from incomplete data in the Expectation Maximization setting see for instance The question this raises is is there any reason to believe co training will help Our goal is to address this question by developing a PAC style theoretical framework to better understand the issues involved in this approach We also give some preliminary empirical results on classifying university web pages see Section that are encouraging in this context More broadly the general question of how unlabeled examples can be used to augment labeled data seems a slippery one from the point of view of standard PAC as sumptions We address this issue by proposing a notion of compatibility between a data distribution and a target function Section and discuss how this relates to other approaches to combining labeled and unlabeled data Section",
"title": ""
}
] |
[
{
"docid": "f6cb9bfd79fbee8bff0a2f6ad0bca705",
"text": "Neuroendocrine neoplasms are detected very rarely in pregnant women. The following is a case report of carcinoid tumor of the appendix diagnosed in 28 year-old woman at 25th week of gestation. The woman delivered spontaneously a healthy baby at the 38th week of gestation. She did not require adjuvant therapy with somatostatin analogues. The patient remained in remission. There are not established standards of care due to the very rare incidence of carcinoid tumors in pregnancy. A review of the literature related to management and prognosis in such cases was done.",
"title": ""
},
{
"docid": "ffef173f4e0c757c6d780d0af5d9c00b",
"text": "Minding the Body, the Primordial Communication Medium Embodiment: The Teleology of Interface Design Embodiment: Thinking through our Technologically Extended Bodies User Embodiment and Three Forms in Which the Body \"Feels\" Present in the Virtual Environment Presence: Emergence of a Design Goal and Theoretical Problem Being There: The Sense of Physical Presence in Cyberspace Being with another Body: Designing the Illusion of Social Presence Is This Body Really \"Me\"? Self Presence, Body Schema, Self-consciousness, and Identity The Cyborg's Dilemma Footnotes References About the Author The intrinsic relationship that arises between tools and organs, and one that is to be revealed and emphasized – although it is more one of unconscious discovery than of conscious invention – is that in the tool the human continually produces itself. Since the organ whose utility and power is to be increased is the controlling factor, the appropriate form of a tool can be derived only from that organ. Ernst Kapp, 1877, quoted in [Mitcham, 1994, p. 23] Abstract StudyW Academ Excellen Award Collab-U CMC Play E-Commerce Symposium Net Law InfoSpaces Usenet NetStudy VEs Page 1 of 29 THE CYBORG'S DILEMNA: PROGRESSIVE EMBODIMENT IN VIRTUAL ENVIR...StudyW Academ Excellen Award Collab-U CMC Play E-Commerce Symposium Net Law InfoSpaces Usenet NetStudy VEs Page 1 of 29 THE CYBORG'S DILEMNA: PROGRESSIVE EMBODIMENT IN VIRTUAL ENVIR... 9/11/2005 http://jcmc.indiana.edu/vol3/issue2/biocca2.html How does the changing representation of the body in virtual environments affect the mind? This article considers how virtual reality interfaces are evolving to embody the user progressively. The effect of embodiment on the sensation of physical presence, social presence, and self presence in virtual environments is discussed. The effect of avatar representation on body image and body schema distortion is also considered. The paper ends with the introduction of the cyborg's dilemma, a paradoxical situation in which the development of increasingly \"natural\" and embodied interfaces leads to \"unnatural\" adaptations or changes in the user. In the progressively tighter coupling of user to interface, the user evolves as a cyborg. Minding the Body, the Primordial Communication Medium In the twentieth century we have made a successful transition from the sooty iron surfaces of the industrial revolution to the liquid smooth surfaces of computer graphics. On our computer monitors we may be just beginning to see a reflective surface that looks increasingly like a mirror. In the virtual world that exists on the other side of the mirror's surface we can just barely make out the form of a body that looks like us, like another self. Like Narcissus looking into the pond, we are captured by the experience of this reflection of our bodies. But that reflected body looks increasingly like a cyborg. [2] This article explores an interesting pattern in media interface development that I will call progressive embodiment. Each progressive step in the development of sensor and display technology moves telecommunication technology towards a tighter coupling of the body to the interface. The body is becoming present in both physical space and cyberspace. The interface is adapting to the body; the body is adapting to the interface [(Biocca & Rolland, in press)]. Why is this occurring? 
One argument is that attempts to optimize the communication bandwidth of distributed, multi-user virtual environments such as social VRML worlds and collaborative virtual environments drives this steady augmentation of the body and the mind [(see Biocca, 1995)]. It has become a key to future stages of interface development. On the other hand, progressive embodiment may be part of a larger pattern, the cultural evolution of humans and communication artifacts towards a mutual integration and greater \"somatic flexibility\" [(Bateson, 1972)]. The pattern of progressive embodiment raises some fundamental and interesting questions. In this article we pause to consider these developments. New media like distributed immersive virtual environments sometimes force us to take a closer look at what is fundamental about communication. Inevitably, theorists interested in the fundamentals of communication return in some way or another to a discussion of the body and the mind. At the birth of new media, theories dwell on human factors in communication [(Biocca, 1995)] and are often more psychological than sociological. For example when radio and film appeared, [Arnheim (1957)] and [Munsterberg (1916)] used the perceptual theories of Gestalt psychology to try to make sense of how each medium affected the senses. In the 1960s McLuhan [(1966; McLuhan & McLuhan, 1988)] refocused our attention on media technology when he assembled a controversial psychological theory to examine electronic media and make pronouncements about the consequences of imbalances in the \"sensorium.\" Page 2 of 29 THE CYBORG'S DILEMNA: PROGRESSIVE EMBODIMENT IN VIRTUAL ENVIR... 9/11/2005 http://jcmc.indiana.edu/vol3/issue2/biocca2.html Before paper, wires, and silicon, the primordial communication medium is the body. At the center of all communication rests the body, the fleshy gateway to the mind. [Becker & Schoenbach (1989)] argue that \"a veritable 'new mass medium' for some experts, has to address new senses of new combinations of senses. It has to use new channels of information\" (p. 5). In other words, each new medium must somehow engage the body in a new way. But this leads us to ask, are all the media collectively addressing the body in some systematic way? Are media progressively embodying the user? 1.1 The senses as channels to the mind \"Each of us lives within ... the prison of his own brain. Projecting from it are millions of fragile sensory nerve fibers, in groups uniquely adapted to sample the energetic states of the world around us: heat, light, force, and chemical composition. That is all we ever know of it directly; all else is logical inference (1975, p. 131) [(see Sekuler & Blake, 1994 p. 2)]. The senses are the portals to the mind. Sekuler and Blake extend their observation to claim that the senses are \"communication channels to reality.\" Consider for a moment the body as an information acquisition system. As aliens from some distant planet we observe humans and see the body as an array of sensors propelled through space to scan, rub, and grab the environment. In some ways, that is how virtual reality designers see users [(Durlach & Mavor, 1994)]. Many immersive virtual reality designers tend to be implicitly or explicitly Gibsonian: they accept the perspective of the noted perceptual psychologist [J.J. Gibson (1966, 1979)]. Immersive virtual environments are places where vision and the other senses are meant to be active. 
Users make use of the affordances in the environments from which they perceive the structure of the virtual world in ways similar to the manner in which they construct the physical world. Through motion and collisions with objects the senses pick up invariances in energy fields flowing over the body's receptors. When we walk or reach for an object in the virtual or physical world, we guide the senses in this exploration of the space in the same way that a blind man stretches out a white cane to explore the space while in motion. What we know about the world is embodied; it is constructed from patterns of energy detected by the body. The body is the surface on which all energy fields impinge, on which communication and telecommunication take form. 1.2 The body as a display device for a mind The body is integrated with the mind as a representational system, or as the neuroscientist, Antonio Damasio, puts it, \"a most curious physiological arrangement ... has turned the brain into the body's captive audience\" [(Damasio, 1994, p. xv)]. In some ways, the body is a primordial display device, a kind of internal mental simulator. The body is a representational medium for the mind. Some would claim that thought is embodied or modeled by the body. Johnson and Lakoff [(Johnson, 1987; Lakoff & Johnson, 1980; Lakoff, 1987)] argue against a view of reasoning as manipulation of propositional representations (the \"objectivist position\"), a tabulation and manipulation of abstract symbols. They might suggest a kind of sensory-based \"image schemata\" that are critical to instantiating mental transformations associated with metaphor and analogy. In a way virtual environments are objectified metaphors and analogies delivered as sensory patterns instantiating \"image schemata.\" In his book, Descartes' Error, the neuroscientist Damasio explains how the body is used as a means of embodying thought: \"...the body as represented in the brain, may constitute the indispensable frame of reference for the neural processes that we experience as the mind; that our very organism rather than some absolute experiential reality is used as the ground of reference for the constructions we make of the world around us and for the construction of the ever-present sense of subjectivity that is part and parcel of our experiences; that our most refined thoughts and best actions, our greatest joys and deepest sorrows, use the body as a yardstick\" [(Damasio, 1994, p. xvi)]. Damasio's title, Descartes' Error, warns against the misleading tendency to think of the body and mind, reason and emotion, as separate systems. Figure 1. Range of possible input (sensors) and output (effectors) devices for a virtual reality system. Illustrates the pattern of progressive embodiment in virtual reality systems. Source: Biocca & Delaney, 1995 1.3 The body as a communication device The body is also an expressive communication device [(Benthall & Polhemus, 1975)], a social semiotic vehicle for representing mental states (e.g., emotions, observations, plans, etc.)",
"title": ""
},
{
"docid": "e0d71b84c14d95b0871090666773975c",
"text": "Camera-based remote pulse rate monitoring can be used during fitness exercise to optimize the effectiveness of a workout. However, such monitoring suffers from vigorous body motions and dynamic illumination changes due to exercise, which may lead to erroneous estimates. To better cope with this, we propose a quality metric, comprised of a front-end metric and a back-end metric, to indicate the monitoring conditions (e.g. luminance, skin property) and assess the reliability of pulse rate measurement (e.g. signal quality). The proposed quality metric has been thoroughly benchmarked on 78 videos recorded in a fitness setting. The experimental results show that (i) appropriate light source intensity variation and its angle variation in the front-end metric are critical indicators for pulse rate measurement accuracy, and (ii) the back-end metric can effectively indicate/reject unreliable estimates. The proposed method in this paper is the first quality metric for camera-based pulse rate monitoring, validated for the challenging use-case of fitness exercises.",
"title": ""
},
{
"docid": "43fc501b2bf0802b7c1cc8c4280dcd85",
"text": "We propose a data-driven stochastic method (DSM) to study stochastic partial differential equations (SPDEs) in the multiquery setting. An essential ingredient of the proposed method is to construct a data-driven stochastic basis under which the stochastic solutions to the SPDEs enjoy a compact representation for a broad range of forcing functions and/or boundary conditions. Our method consists of offline and online stages. A data-driven stochastic basis is computed in the offline stage using the Karhunen–Loève (KL) expansion. A two-level preconditioning optimization approach and a randomized SVD algorithm are used to reduce the offline computational cost. In the online stage, we solve a relatively small number of coupled deterministic PDEs by projecting the stochastic solution into the data-driven stochastic basis constructed offline. Compared with a generalized polynomial chaos method (gPC), the ratio of the computational complexities between DSM (online stage) and gPC is of order O((m/Np) ). Herem andNp are the numbers of elements in the basis used in DSM and gPC, respectively. Typically we expect m Np when the effective dimension of the stochastic solution is small. A timing model, which takes into account the offline computational cost of DSM, is constructed to demonstrate the efficiency of DSM. Applications of DSM to stochastic elliptic problems show considerable computational savings over traditional methods even with a small number of queries. We also provide a method for an a posteriori error estimate and error correction.",
"title": ""
},
{
"docid": "a2617ce3b0d618a5e4b61033345d59b7",
"text": "Asymmetry of the eyelid crease is a major complication following double eyelid blepharoplasty; the reasons are multivariate. This study presents, for the first time, a novel method, based on high-definition magnetic resonance imaging and high-precision weighing of tissue, for quantitating preoperative asymmetry of eyelid thickness in young Chinese women presenting for blepharoplasty. From 1 January 2008 to 1 October 2011, we studied 1217 women requesting double eyelid blepharoplasty. The patients ranged in age from 17 to 24 years (average 21.13 years). All patients were of Chinese Han nationality. Soft-tissue thickness at the tarsal plate superior border was 5.05 ± 1.01 units on the right side and 4.12 ± 0.96 units on the left. The submuscular fibro-adipose tissue area was 95.12 ± 23.27 unit(2) on the right side and 76.05 ± 21.11 unit(2) on the left. The pre-aponeurotic fat pad area was 112.33 ± 29.16 unit(2) on the right side and 91.25 ± 27.32 unit(2) on the left. The orbicularis muscle resected weighed 0.185 ± 0.055 g on the right side and 0.153 ± 0.042 g on the left; the orbital fat resected weighed 0.171 ± 0.062 g on the right side and 0.106 ± 0.057 g on the left. In conclusion, upper eyelid thickness asymmetry is a common phenomenon in young Chinese women who wish to undertake double eyelid blepharoplasty. We have demonstrated that the orbicularis muscle and orbital fat pad are consistently thicker on the right side than on the left.",
"title": ""
},
{
"docid": "a3d32ccd0e461c3d47dbec0fb12398fa",
"text": "Ever increasing societal demands for uninterrupted work are causing unparalleled amounts of sleep deprivation among workers. Sleep deprivation has been linked to safety problems ranging from medical misdiagnosis to industrial and vehicular accidents. Microsleeps (very brief intrusions of sleep into wakefulness) are usually cited as the cause of the performance decrements during sleep deprivation. Changes in a more basic physiological phenomenon, attentional shift, were hypothesized to be additional factors in performance declines. The current study examined the effects of 36 hours of sleep deprivation on the electrodermal-orienting response (OR), a measure of attentional shift or capture. Subjects were 71 male undergraduate students, who were divided into sleep deprivation and control (non-sleep deprivation) groups. The expected negative effects of sleep deprivation on performance were noted in increased reaction times and increased variability in the sleep-deprived group on attention-demanding cognitive tasks. OR latency was found to be significantly delayed after sleep deprivation, OR amplitude was significantly decreased, and habituation of the OR was significantly faster during sleep deprivation. These findings indicate impaired attention, the first revealing slowed shift of attention to novel stimuli, the second indicating decreased attentional allocation to stimuli, and the third revealing more rapid loss of attention to repeated stimuli. These phenomena may be factors in the impaired cognitive performance seen during sleep deprivation.",
"title": ""
},
{
"docid": "d5ea5a0b9484f6b728be4a4a6092c419",
"text": "In response to the rise of Big Data, modern enterprise architecture has become significantly more complex. Model driven engineering (MDE) has been proposed as a methodology for developing software to deal with complex integration and interoperability. Domain specific languages (DSLs) play a crucial role in MDE and represent languages for a specific purpose that are highly abstract and easy to use. In this paper we propose a new language VizDSL for creating interactive visualisations that facilitate the understanding of complex data and information structures for enterprise systems interoperability. In comparison to existing visualisation languages VizDSL provides the benefits of visualising the semantics of data using a graphical notation. VizDSL is based on the Interaction Flow Modelling Language (IFML) and Agile Visualisation and has been implemented in a prototype. The prototype has been applied on an open data set and results show that interactive visualisation can be implemented quickly using the VizDSL language without writing code which makes it easier to design for non- programmers.",
"title": ""
},
{
"docid": "138076873adee11f2701a1efc08d30ef",
"text": "We present a global controller for tracking nominal trajectories with a flying wing tailsitter vehicle. The control strategy is based on a first-principles model of the vehicle dynamics that captures all relevant aerodynamic effects, and we apply an onboard parameter learning scheme in order to estimate unknown aerodynamic parameters. A cascaded control architecture is used: Based on position and velocity errors an outer control loop computes a desired attitude keeping the vehicle in coordinated flight, while an inner control loop tracks the desired attitude using a lookup table with precomputed optimal attitude trajectories. The proposed algorithms can be implemented on a typical microcontroller and the performance is demonstrated in various experiments.",
"title": ""
},
{
"docid": "fda37e6103f816d4933a3a9c7dee3089",
"text": "This paper introduces a novel approach to estimate the systolic and diastolic blood pressure ratios (SBPR and DBPR) based on the maximum amplitude algorithm (MAA) using a Gaussian mixture regression (GMR). The relevant features, which clearly discriminate the SBPR and DBPR according to the targeted groups, are selected in a feature vector. The selected feature vector is then represented by the Gaussian mixture model. The SBPR and DBPR are subsequently obtained with the help of the GMR and then mapped back to SBP and DBP values that are more accurate than those obtained with the conventional MAA method.",
"title": ""
},
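The passage above maps oscillometric features to systolic and diastolic ratios with a Gaussian mixture regression (GMR). As a hedged illustration only, the sketch below shows the generic GMR step on synthetic data: fit a Gaussian mixture on joint (feature, target) samples and read off the conditional mean of the target given new features. The feature set, number of components, and data are assumptions, not the paper's.

```python
# Minimal Gaussian mixture regression (GMR) sketch: condition a joint GMM on
# the feature part to predict the target part. Synthetic data for illustration.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                  # hypothetical oscillometric features
y = 0.5 * X[:, 0] - 0.2 * X[:, 1] + 0.05 * rng.normal(size=500)  # hypothetical ratio
joint = np.column_stack([X, y])

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(joint)
d = X.shape[1]                                 # feature dimension

def gmr_predict(x):
    """Conditional mean E[y | x] under the fitted joint GMM."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    resp, cond_means = [], []
    for k in range(gmm.n_components):
        mu_x, mu_y = means[k, :d], means[k, d:]
        S_xx = covs[k][:d, :d]
        S_yx = covs[k][d:, :d]
        # responsibility of component k for this feature vector
        resp.append(weights[k] * multivariate_normal.pdf(x, mean=mu_x, cov=S_xx))
        # component-wise conditional mean of y given x
        cond_means.append(mu_y + S_yx @ np.linalg.solve(S_xx, x - mu_x))
    resp = np.asarray(resp)
    resp /= resp.sum()
    return float(np.sum(resp * np.array(cond_means).ravel()))

print(gmr_predict(np.array([0.3, -0.1])))
```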
{
"docid": "7f3c453d52b100245b67c87e992f4bfa",
"text": "In this work, a frequency-based model is presented to examine limit cycle and spurious behavior in a bang-bang all-digital phase locked loop (BB-ADPLL). The proposed model considers different type of nonlinearities such as quantization effects of the digital controlled oscillator (DCO), quantization effects of the bang-bang phase detector (BB-PD) in noiseless BB-ADPLLs by a proposed novel discrete-time model. In essence, the traditional phase-locked model is transformed into a frequency-locked topology equivalent to a sigma delta modulator (SDM) with a dc-input which represents frequency deviation in phase locked state. The frequency deviation must be introduced and placed correctly within the proposed model to enable the accurate prediction of limit cycles. Thanks to the SDM-like topology, traditional techniques used in the SDM nonlinear analysis such as the discrete describing function (DDF) and number theory can be applied to predict limit cycles in first and second-order BB-ADPLLs. The inherent DCO and reference phase noise can also be easily integrated into the proposed model to accurately predict their effect on the stability of the limit cycle. The results obtained from the proposed model show good agreement with time-domain simulations.",
"title": ""
},
{
"docid": "01eadabcfbe9274c47d9ebcd45ea2332",
"text": "The classical uncertainty principle provides a fundamental tradeoff in the localization of a signal in the time and frequency domains. In this paper we describe a similar tradeoff for signals defined on graphs. We describe the notions of “spread” in the graph and spectral domains, using the eigenvectors of the graph Laplacian as a surrogate Fourier basis. We then describe how to find signals that, among all signals with the same spectral spread, have the smallest graph spread about a given vertex. For every possible spectral spread, the desired signal is the solution to an eigenvalue problem. Since localization in graph and spectral domains is a desirable property of the elements of wavelet frames on graphs, we compare the performance of some existing wavelet transforms to the obtained bound.",
"title": ""
},
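The passage above uses the eigenvectors of the graph Laplacian as a surrogate Fourier basis and reasons about spreads in the vertex and spectral domains. The sketch below computes that basis on a small assumed path graph and evaluates one simple notion of spectral spread for a localized signal; the particular graph, signal, and spread definition are illustrative assumptions, not the paper's exact formulation.

```python
# Graph Fourier basis from the combinatorial Laplacian L = D - A of a small
# assumed path graph, plus a simple spectral-spread measure for a signal.
import numpy as np

n = 8
A = np.zeros((n, n))
for i in range(n - 1):                 # path graph: vertex i -- vertex i+1
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A         # combinatorial Laplacian

eigvals, eigvecs = np.linalg.eigh(L)   # columns of eigvecs = surrogate Fourier basis

f = np.exp(-0.5 * (np.arange(n) - 3) ** 2)   # a signal localized around vertex 3
f = f / np.linalg.norm(f)

f_hat = eigvecs.T @ f                  # graph Fourier transform of f
spectral_spread = float(np.sum(eigvals * f_hat ** 2))  # quadratic-form spread f^T L f
print("spectral coefficients:", np.round(f_hat, 3))
print("spectral spread:", round(spectral_spread, 3))
```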
{
"docid": "a1f05b8954434a782f9be3d9cd10bb8b",
"text": "Because of their avid use of new media and their increased spending power, children and teens have become primary targets of a new \"media and marketing ecosystem.\" The digital marketplace is undergoing rapid innovation as new technologies and software applications continue to reshape the media landscape and user behaviors. The advertising industry, in many instances led by food and beverage marketers, is purposefully exploiting the special relationship that youth have with new media, as online marketing campaigns create unprecedented intimacies between adolescents and the brands and products that now literally surround them.",
"title": ""
},
{
"docid": "2ac1d3ce029f547213c122c0e84650b2",
"text": "Notes: (1) These questions require thought, but do not require long answers. Please be as concise as possible. (2) If you have a question about this homework, we encourage you to post your question on our Piazza forum, at https://piazza.com/class#fall2012/cs229. (3) If you missed the first lecture or are unfamiliar with the collaboration or honor code policy, please read the policy on Handout #1 (available from the course website) before starting work. (4) For problems that require programming, please include in your submission a printout of your code (with comments) and any figures that you are asked to plot. (5) Please indicate the submission time and number of late dates clearly in your submission. SCPD students: Please email your solutions to cs229-qa@cs.stanford.edu with the subject line \" Problem Set 2 Submission \". The first page of your submission should be the homework routing form, which can be found on the SCPD website. Your submission (including the routing form) must be a single pdf file, or we may not be able to grade it. If you are writing your solutions out by hand, please write clearly and in a reasonably large font using a dark pen to improve legibility. 1. [15 points] Constructing kernels In class, we saw that by choosing a kernel K(x, z) = φ(x) T φ(z), we can implicitly map data to a high dimensional space, and have the SVM algorithm work in that space. One way to generate kernels is to explicitly define the mapping φ to a higher dimensional space, and then work out the corresponding K. However in this question we are interested in direct construction of kernels. I.e., suppose we have a function K(x, z) that we think gives an appropriate similarity measure for our learning problem, and we are considering plugging K into the SVM as the kernel function. However for K(x, z) to be a valid kernel, it must correspond to an inner product in some higher dimensional space resulting from some feature mapping φ. Mercer's theorem tells us that K(x, z) is a (Mercer) kernel if and only if for any finite set {x (1) ,. .. , x (m) }, the matrix K is symmetric and positive semidefinite, where the square matrix K ∈ R m×m is given by K ij = K(x (i) , x (j)). Now here comes the question: Let K 1 , K 2 be kernels …",
"title": ""
},
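The passage above reduces the validity of a candidate kernel to symmetry and positive semidefiniteness of its Gram matrix on any finite sample. The sketch below runs that finite-sample check numerically for an assumed candidate K(x, z) = (x·z + 1)^2; note that such a check can only refute, never prove, Mercer validity.

```python
# Finite-sample Mercer check: build the Gram matrix of a candidate kernel on a
# random sample and test symmetry and (numerical) positive semidefiniteness.
import numpy as np

def candidate_kernel(x, z):
    # assumed candidate: inhomogeneous polynomial kernel of degree 2
    return (np.dot(x, z) + 1.0) ** 2

def looks_like_mercer_kernel(kernel, X, tol=1e-8):
    m = X.shape[0]
    K = np.array([[kernel(X[i], X[j]) for j in range(m)] for i in range(m)])
    symmetric = np.allclose(K, K.T, atol=tol)
    psd = np.all(np.linalg.eigvalsh((K + K.T) / 2) >= -tol)
    return symmetric and psd

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
print(looks_like_mercer_kernel(candidate_kernel, X))   # expected: True
```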
{
"docid": "122bc83bcd27b95092c64cf1ad8ee6a8",
"text": "Plants make the world, a greener and a better place to live in. Although all plants need water to survive, giving them too much or too little can cause them to die. Thus, we need to implement an automatic plant watering system that ensures that the plants are watered at regular intervals, with appropriate amount, whenever they are in need. This paper describes the object oriented design of an IoT based Automated Plant Watering System.",
"title": ""
},
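The passage above describes the watering system only at the design level. The control loop below is a hedged sketch of the usual threshold-with-hysteresis logic; `read_soil_moisture` and `set_pump` are hypothetical stand-ins for whatever sensor and actuator drivers the actual IoT hardware exposes, and the thresholds are assumptions.

```python
# Threshold-based watering loop: turn the pump on when soil moisture drops
# below a lower bound and off once an upper bound is reached (hysteresis).
import random
import time

def read_soil_moisture():
    # hypothetical sensor driver; returns moisture in percent
    return random.uniform(20, 80)

def set_pump(on):
    # hypothetical actuator driver
    print("pump", "ON" if on else "OFF")

LOW, HIGH = 35.0, 60.0     # assumed hysteresis band, in percent

def watering_loop(cycles=5, interval_s=1.0):
    pump_on = False
    for _ in range(cycles):
        moisture = read_soil_moisture()
        if moisture < LOW and not pump_on:
            pump_on = True
            set_pump(True)
        elif moisture > HIGH and pump_on:
            pump_on = False
            set_pump(False)
        print(f"moisture={moisture:.1f}%")
        time.sleep(interval_s)

watering_loop()
```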
{
"docid": "19f08f2e9dd22bb2779ded2ad9cd19d4",
"text": "In this paper, a new algorithm for Vehicle Logo Recognition is proposed, on the basis of an enhanced Scale Invariant Feature Transform (Merge-SIFT or M-SIFT). The algorithm is assessed on a set of 1500 logo images that belong to 10 distinctive vehicle manufacturers. A series of experiments are conducted, splitting the 1500 images to a training set (database) and to a testing set (query). It is shown that the MSIFT approach, which is proposed in this paper, boosts the recognition accuracy compared to the standard SIFT method. The reported results indicate an average of 94.6% true recognition rate in vehicle logos, while the processing time remains low (~0.8sec).",
"title": ""
},
{
"docid": "cd7fa5de19b12bdded98f197c1d9cd22",
"text": "Many event monitoring systems rely on counting known keywords in streaming text data to detect sudden spikes in frequency. But the dynamic and conversational nature of Twitter makes it hard to select known keywords for monitoring. Here we consider a method of automatically finding noun phrases (NPs) as keywords for event monitoring in Twitter. Finding NPs has two aspects, identifying the boundaries for the subsequence of words which represent the NP, and classifying the NP to a specific broad category such as politics, sports, etc. To classify an NP, we define the feature vector for the NP using not just the words but also the author's behavior and social activities. Our results show that we can classify many NPs by using a sample of training data from a knowledge-base.",
"title": ""
},
{
"docid": "786540fad61e862657b778eb57fe1b24",
"text": "OBJECTIVE\nTo compare pharmacokinetics (PK) and pharmacodynamics (PD) of insulin glargine in type 2 diabetes mellitus (T2DM) after evening versus morning administration.\n\n\nRESEARCH DESIGN AND METHODS\nTen T2DM insulin-treated persons were studied during 24-h euglycemic glucose clamp, after glargine injection (0.4 units/kg s.c.), either in the evening (2200 h) or the morning (1000 h).\n\n\nRESULTS\nThe 24-h glucose infusion rate area under the curve (AUC0-24h) was similar in the evening and morning studies (1,058 ± 571 and 995 ± 691 mg/kg × 24 h, P = 0.503), but the first 12 h (AUC0-12h) was lower with evening versus morning glargine (357 ± 244 vs. 593 ± 374 mg/kg × 12 h, P = 0.004), whereas the opposite occurred for the second 12 h (AUC12-24h 700 ± 396 vs. 403 ± 343 mg/kg × 24 h, P = 0.002). The glucose infusion rate differences were totally accounted for by different rates of endogenous glucose production, not utilization. Plasma insulin and C-peptide levels did not differ in evening versus morning studies. Plasma glucagon levels (AUC0-24h 1,533 ± 656 vs. 1,120 ± 344 ng/L/h, P = 0.027) and lipolysis (free fatty acid AUC0-24h 7.5 ± 1.6 vs. 8.9 ± 1.9 mmol/L/h, P = 0.005; β-OH-butyrate AUC0-24h 6.8 ± 4.7 vs. 17.0 ± 11.9 mmol/L/h, P = 0.005; glycerol, P < 0.020) were overall more suppressed after evening versus morning glargine administration.\n\n\nCONCLUSIONS\nThe PD of insulin glargine differs depending on time of administration. With morning administration insulin activity is greater in the first 0-12 h, while with evening administration the activity is greater in the 12-24 h period following dosing. However, glargine PK and plasma C-peptide levels were similar, as well as glargine PD when analyzed by 24-h clock time independent of the time of administration. Thus, the results reflect the impact of circadian changes in insulin sensitivity in T2DM (lower in the night-early morning vs. afternoon hours) rather than glargine per se.",
"title": ""
},
{
"docid": "e9ba4e76a3232e25233a4f5fe206e8ba",
"text": "Systems code is often written in low-level languages like C/C++, which offer many benefits but also delegate memory management to programmers. This invites memory safety bugs that attackers can exploit to divert control flow and compromise the system. Deployed defense mechanisms (e.g., ASLR, DEP) are incomplete, and stronger defense mechanisms (e.g., CFI) often have high overhead and limited guarantees [19, 15, 9]. We introduce code-pointer integrity (CPI), a new design point that guarantees the integrity of all code pointers in a program (e.g., function pointers, saved return addresses) and thereby prevents all control-flow hijack attacks, including return-oriented programming. We also introduce code-pointer separation (CPS), a relaxation of CPI with better performance properties. CPI and CPS offer substantially better security-to-overhead ratios than the state of the art, they are practical (we protect a complete FreeBSD system and over 100 packages like apache and postgresql), effective (prevent all attacks in the RIPE benchmark), and efficient: on SPEC CPU2006, CPS averages 1.2% overhead for C and 1.9% for C/C++, while CPI’s overhead is 2.9% for C and 8.4% for C/C++. A prototype implementation of CPI and CPS can be obtained from http://levee.epfl.ch.",
"title": ""
},
{
"docid": "143e680453569b93dfd3ed514c30cd3c",
"text": "Cercariaeum crassum Wesenberg-Lund, 1934 is redescribed at the cercariaeum stage and the daughter-rediae and cercaria are also described on the basis of new material from Pisidium amnicum collected in the Liikasepuro River (eastern Finland). The species is allocated to the family Allocreadiidae, although its generic affiliation remains unknown. The probable life-cycle (based on the developmental stages observed in daughter-redia) appears to eliminate the cercarial stage and, instead, a cercariaeum (a type of cercaria without a tail) may develop directly from germ balls or, rarely, through the stages of an ophthalmoxiphidiocercaria that transforms into a young caudate cercariaeum. Their morphology and development are shown to be consistent with the family Allocreadiidae. The probable lack of a second intermediate host in the life-cycle is discussed.",
"title": ""
},
{
"docid": "9cb2f99aa1c745346999179132df3854",
"text": "As a complementary and alternative medicine in medical field, traditional Chinese medicine (TCM) has drawn great attention in the domestic field and overseas. In practice, TCM provides a quite distinct methodology to patient diagnosis and treatment compared to western medicine (WM). Syndrome (ZHENG or pattern) is differentiated by a set of symptoms and signs examined from an individual by four main diagnostic methods: inspection, auscultation and olfaction, interrogation, and palpation which reflects the pathological and physiological changes of disease occurrence and development. Patient classification is to divide patients into several classes based on different criteria. In this paper, from the machine learning perspective, a survey on patient classification issue will be summarized on three major aspects of TCM: sign classification, syndrome differentiation, and disease classification. With the consideration of different diagnostic data analyzed by different computational methods, we present the overview for four subfields of TCM diagnosis, respectively. For each subfield, we design a rectangular reference list with applications in the horizontal direction and machine learning algorithms in the longitudinal direction. According to the current development of objective TCM diagnosis for patient classification, a discussion of the research issues around machine learning techniques with applications to TCM diagnosis is given to facilitate the further research for TCM patient classification.",
"title": ""
}
] |
scidocsrr
|
94a6fc25cb92c07cb9c2e24382afe6c1
|
Multi-agent Diverse Generative Adversarial Networks
|
[
{
"docid": "5365f6f5174c3d211ea562c8a7fa0aab",
"text": "Generative Adversarial Networks (GANs) have become one of the dominant methods for fitting generative models to complicated real-life data, and even found unusual uses such as designing good cryptographic primitives. In this talk, we will first introduce the ba- sics of GANs and then discuss the fundamental statistical question about GANs — assuming the training can succeed with polynomial samples, can we have any statistical guarantees for the estimated distributions? In the work with Arora, Ge, Liang, and Zhang, we suggested a dilemma: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse. Such a conundrum may be solved or alleviated by designing discrimina- tor class with strong distinguishing power against the particular generator class (instead of against all possible generators.)",
"title": ""
},
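The passage above introduces GAN basics and the discriminator/generator trade-off at a conceptual level. As a hedged, minimal illustration of the standard two-player training loop only (not of the multi-agent diverse GAN named in the query, and not of any construction from the talk), the sketch below fits a tiny generator to a 1-D Gaussian in PyTorch; the architecture, data, and hyperparameters are assumptions.

```python
# Minimal GAN on 1-D data: alternate discriminator and generator updates with
# the non-saturating binary cross-entropy losses.
import torch
import torch.nn as nn

torch.manual_seed(0)
real_sampler = lambda n: torch.randn(n, 1) * 1.5 + 4.0     # target: N(4, 1.5^2)

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # discriminator step: real samples -> label 1, generated samples -> label 0
    real = real_sampler(64)
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # generator step: make the discriminator label fakes as real
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

samples = G(torch.randn(1000, 8))
print("generated mean/std:", samples.mean().item(), samples.std().item())
```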
{
"docid": "63eaccbbf34bc68cefa119056d488402",
"text": "Interactive Image Generation User edits Generated images User edits Generated images User edits Generated images [1] Zhu et al. Learning a Discriminative Model for the Perception of Realism in Composite Images. ICCV 2015. [2] Goodfellow et al. Generative Adversarial Nets. NIPS 2014 [3] Radford et al. Unsupervised representation learning with deep convolutional generative adversarial networks. ICLR 2016 Reference : Natural images 0, I , Unif 1, 1",
"title": ""
},
{
"docid": "66a1a943580cdd300f9579e80f258a2e",
"text": "The rise of multi-million-item dataset initiatives has enabled data-hungry machine learning algorithms to reach near-human semantic classification performance at tasks such as visual object and scene recognition. Here we describe the Places Database, a repository of 10 million scene photographs, labeled with scene semantic categories, comprising a large and diverse list of the types of environments encountered in the world. Using the state-of-the-art Convolutional Neural Networks (CNNs), we provide scene classification CNNs (Places-CNNs) as baselines, that significantly outperform the previous approaches. Visualization of the CNNs trained on Places shows that object detectors emerge as an intermediate representation of scene classification. With its high-coverage and high-diversity of exemplars, the Places Database along with the Places-CNNs offer a novel resource to guide future progress on scene recognition problems.",
"title": ""
},
{
"docid": "102bec350390b46415ae07128cb4e77f",
"text": "We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.",
"title": ""
}
] |
[
{
"docid": "d0043eb45257f9eed6d874f4c7aa709c",
"text": "We report the results of our classification-based machine translation model, built upon the framework of a recurrent neural network using gated recurrent units. Unlike other RNN models that attempt to maximize the overall conditional log probability of sentences against sentences, our model focuses a classification approach of estimating the conditional probability of the next word given the input sequence. This simpler approach using GRUs was hoped to be comparable with more complicated RNN models, but achievements in this implementation were modest and there remains a lot of room for improving this classification approach.",
"title": ""
},
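The passage above frames translation as classifying the next word given the input sequence with a GRU. The sketch below is a generic PyTorch next-token classifier over a toy vocabulary; the vocabulary, sizes, and synthetic data are assumptions, and the model is not the authors' exact architecture.

```python
# GRU-based next-token classifier: embed a token sequence, run a GRU, and
# predict the next token from the final hidden state with a softmax layer.
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size, emb_dim, hidden_dim = 50, 16, 32   # assumed toy sizes

class NextTokenGRU(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        _, h = self.gru(self.embed(tokens))    # h: (1, batch, hidden_dim)
        return self.out(h.squeeze(0))          # logits over the vocabulary

model = NextTokenGRU()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# toy data: the "next token" is just (last input token + 1) mod vocab_size
inputs = torch.randint(0, vocab_size, (256, 6))
targets = (inputs[:, -1] + 1) % vocab_size

for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
print("final training loss:", round(loss.item(), 3))
```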
{
"docid": "e84e8acb3adb83fefd6349b843fa3955",
"text": "The research of spatial data is in its infancy stage and there is a need for an accurate method for rule mining. Association rule mining searches for interesting relationships among items in a given data set. This paper enables us to extract pattern from spatial database using k-means algorithm which refers to patterns not explicitly stored in spatial databases. Since spatial association mining needs to evaluate multiple spatial relationships among a large number of spatial objects, the process could be quite costly. An interesting mining optimization method called progressive refinement can be adopted in spatial association analysis. The method first mines large data sets roughly using a fast algorithm and then improves the quality of mining in a pruned data set. The k-means algorithm randomly selects k number of objects, each of which initially represents a cluster mean or center. For each of the remaining objects, an object is assigned to the cluster to which it is most similar, based on the distance between the object and the cluster mean. Then it computes new mean for each cluster. This process iterates until the criterion function converges. The above concept is applied in the area of agriculture where giving the temperature and the rainfall as the initial spatial data and then by analyzing the agricultural meteorology for the enhancement of crop yields and also reduce the crop losses.",
"title": ""
},
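The passage above applies k-means to temperature and rainfall readings. The sketch below clusters a small synthetic table of such readings with scikit-learn; the numbers and the choice of k are illustrative assumptions.

```python
# Cluster synthetic (temperature, rainfall) observations with k-means and
# report cluster centers, mirroring the iterate-until-convergence description.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# two assumed regimes: hot/dry and mild/wet
hot_dry = rng.normal(loc=[35.0, 20.0], scale=[2.0, 5.0], size=(50, 2))
mild_wet = rng.normal(loc=[24.0, 120.0], scale=[2.0, 15.0], size=(50, 2))
data = np.vstack([hot_dry, mild_wet])      # columns: temperature (C), rainfall (mm)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print("cluster centers (temp, rain):")
print(np.round(kmeans.cluster_centers_, 1))
print("first five labels:", kmeans.labels_[:5])
```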
{
"docid": "479e071e63a3d4a64b11aa21b7c591d5",
"text": "BACON.5 is a program that discovers empirical laws. The program represents information at varying levels of description, with higher levels summarizing the levels below them. The system applies a small set of data-driven heuristics to detect regularities in numeric and nominal data. These heuristics note constancies and trends, leading BACONS to formulate hypotheses, define theoretical terms, and postulate intrinsic properties. Once the program has formulated an hypothesis, it' uses this to reduce the amount of data it must consider at later times. A simple type of reasoning by analogy also simplifies the discovery of laws containing symmetric forms. These techniques have allowed the system to rediscover Snail's law of refraction, conservation of momentum, Black's specific heat law, and Joule's formulation of conservation of energy. Thus, BACON.S'S heuristics appear to be general mechanisms applicable to discovery in diverse domains.",
"title": ""
},
{
"docid": "054fcf065915118bbfa3f12759cb6912",
"text": "Automatization of the diagnosis of any kind of disease is of great importance and its gaining speed as more and more deep learning solutions are applied to different problems. One of such computer-aided systems could be a decision support tool able to accurately differentiate between different types of breast cancer histological images – normal tissue or carcinoma (benign, in situ or invasive). In this paper authors present a deep learning solution, based on convolutional capsule network, for classification of four types of images of breast tissue biopsy when hematoxylin and eosin staining is applied. The crossvalidation accuracy, averaged over four classes, was achieved to be 87 % with equally high sensitivity.",
"title": ""
},
{
"docid": "d1ab899118a6700d43e7d86ebf5bd19b",
"text": "Taking full advantage of the high resistivity substrate and underlying oxide of SOI technology, a high performance CMOS SPDT T/R switch has been designed and fabricated in a partially depleted, 0.25µm SOI process. The targeted Bluetooth class II specifications have been fully fitted. The switch over the high resistivity substrate exhibits a 0.7dB insertion loss and a 50dB isolation at 2.4GHz; at 5GHz insertion loss and isolation are 1dB and 47dB respectively. The measured ICP1dBis +12dBm.",
"title": ""
},
{
"docid": "2cdf3656b0257fb1eb849a3b1521bde4",
"text": "A major barrier to the adoption of cloud Infrastructure-as-aService (IaaS) is collaboration, where multiple tenants engage in collaborative tasks requiring resources to be shared across tenant boundaries. Currently, cloud IaaS providers focus on multi-tenant isolation, and offer limited or no cross-tenant access capabilities in their IaaS APIs. In this paper, we present a novel attribute-based access control (ABAC) model to enable collaboration between tenants in a cloud IaaS, as well as more generally. Our approach allows cross-tenant attribute assignment to provide access to shared resources across tenants. Particularly, our tenanttrust authorizes a trustee tenant to assign its attributes to users from a trustor tenant, enabling access to the trustee tenant’s resources. We designate our multi-tenant attribute-based access control model as MTABAC. Previously, a multi-tenant role-based access control (MT-RBAC) model has been defined in the literature wherein a trustee tenant can assign its roles to users from a trustor tenant. We demonstrate that MTABAC can be configured to enforce MT-RBAC thus subsuming it as a",
"title": ""
},
{
"docid": "d91077f97e745cdd73315affb5cbbdd2",
"text": "We consider the problem of learning the underlying graph of an unknown Ising model on p spins from a collection of i.i.d. samples generated from the model. We suggest a new estimator that is computationally efficient and requires a number of samples that is near-optimal with respect to previously established informationtheoretic lower-bound. Our statistical estimator has a physical interpretation in terms of “interaction screening”. The estimator is consistent and is efficiently implemented using convex optimization. We prove that with appropriate regularization, the estimator recovers the underlying graph using a number of samples that is logarithmic in the system size p and exponential in the maximum coupling-intensity and maximum node-degree.",
"title": ""
},
{
"docid": "abcbd831178e1bc5419da8274dc17bbf",
"text": "Most state-of-the-art statistical machine translation systems use log-linear models, which are defined in terms of hypothesis features and weights for those features. It is standard to tune the feature weights in order to maximize a translation quality metric, using heldout test sentences and their corresponding reference translations. However, obtaining reference translations is expensive. In our earlier work (Madnani et al., 2007), we introduced a new full-sentence paraphrase technique, based on English-to-English decoding with an MT system, and demonstrated that the resulting paraphrases can be used to cut the number of human reference translations needed in half. In this paper, we take the idea a step further, asking how far it is possible to get with just a single good reference translation for each item in the development set. Our analysis suggests that it is necessary to invest in four or more human translations in order to significantly improve on a single translation augmented by monolingual paraphrases.",
"title": ""
},
{
"docid": "abe4b6d122d4d13374d70a886906aba7",
"text": "A 100-MHz PWM fully integrated buck converter utilizing standard package bondwire as power inductor with enhanced light-load efficiency which occupies 2.25 mm2 in 0.13-μm CMOS is presented. Standard package bondwire instead of on-chip spiral metal or special spiral bondwire is implemented as power inductor to minimize the cost and the conduction loss of an integrated inductor. The accuracy requirement of bondwire inductance is relaxed by an extra discontinuous-conduction-mode (DCM) calibration loop, which solves the precise DCM operation issue of fully integrated converters and eliminates the reverse current-related loss, thus enabling the use of standard package bondwire inductor with various packaging techniques. Optimizations of the power transistors, the input decoupling capacitor (CI), and the controller are also presented to achieve an efficient and robust high-frequency design. With all three major power losses, conduction loss, switching loss, and reverse current related loss, optimized or eliminated, the efficiency is significantly improved. An efficiency of 74.8% is maintained at 10 mA, and a peak efficiency of 84.7% is measured at nominal operating conditions with a voltage conversion of 1.2 to 0.9 V. Converters with various bondwire inductances from 3 to 8.5 nH are measured to verify the reliability and compatibility of different packaging techniques.",
"title": ""
},
{
"docid": "552baf04d696492b0951be2bb84f5900",
"text": "We examined whether reduced perceptual specialization underlies atypical perception in autism spectrum disorder (ASD) testing classifications of stimuli that differ either along integral dimensions (prototypical integral dimensions of value and chroma), or along separable dimensions (prototypical separable dimensions of value and size). Current models of the perception of individuals with an ASD would suggest that on these tasks, individuals with ASD would be as, or more, likely to process dimensions as separable, regardless of whether they represented separable or integrated dimensions. In contrast, reduced specialization would propose that individuals with ASD would respond in a more integral manner to stimuli that differ along separable dimensions, and at the same time, respond in a more separable manner to stimuli that differ along integral dimensions. A group of nineteen adults diagnosed with high functioning ASD and seventeen typically developing participants of similar age and IQ, were tested on speeded and restricted classifications tasks. Consistent with the reduced specialization account, results show that individuals with ASD do not always respond more analytically than typically developed (TD) observers: Dimensions identified as integral for TD individuals evoke less integral responding in individuals with ASD, while those identified as separable evoke less analytic responding. These results suggest that perceptual representations are more broadly tuned and more flexibly represented in ASD. Autism Res 2017, 10: 1510-1522. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "56245b600dd082439d2b1b2a2452a6b7",
"text": "The electric drive systems used in many industrial applications require higher performance, reliability, variable speed due to its ease of controllability. The speed control of DC motor is very crucial in applications where precision and protection are of essence. Purpose of a motor speed controller is to take a signal representing the required speed and to drive a motor at that speed. Microcontrollers can provide easy control of DC motor. Microcontroller based speed control system consist of electronic component, microcontroller and the LCD. In this paper, implementation of the ATmega8L microcontroller for speed control of DC motor fed by a DC chopper has been investigated. The chopper is driven by a high frequency PWM signal. Controlling the PWM duty cycle is equivalent to controlling the motor terminal voltage, which in turn adjusts directly the motor speed. This work is a practical one and high feasibility according to economic point of view and accuracy. In this work, development of hardware and software of the close loop dc motor speed control system have been explained and illustrated. The desired objective is to achieve a system with the constant speed at any load condition. That means motor will run at a fixed speed instead of varying with amount of load. KeywordsDC motor, Speed control, Microcontroller, ATmega8, PWM.",
"title": ""
},
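The passage above describes closed-loop speed control in which the PWM duty cycle sets the average terminal voltage. Since the firmware itself targets an ATmega8L in C, the sketch below only simulates the control idea in Python: a proportional-integral update of the duty cycle driving a first-order motor model. The motor constants and controller gains are assumptions, not values from the paper.

```python
# Simulated closed-loop speed control: a PI controller adjusts the PWM duty
# cycle of a first-order DC-motor model so that speed tracks a setpoint.
K_MOTOR = 3000.0        # assumed steady-state speed (rpm) at 100% duty
TAU = 0.2               # assumed mechanical time constant (s)
KP, KI = 0.0005, 0.004  # assumed PI gains
DT = 0.01               # control period (s)

def simulate(setpoint_rpm=1800.0, load_drop_rpm=300.0, steps=600):
    speed, integral = 0.0, 0.0
    for k in range(steps):
        disturbance = load_drop_rpm if k > steps // 2 else 0.0  # load applied mid-run
        error = setpoint_rpm - speed
        integral += error * DT
        duty = min(1.0, max(0.0, KP * error + KI * integral))   # clamp to 0..100%
        # first-order motor response to average voltage (duty) minus load effect
        speed += DT / TAU * (K_MOTOR * duty - disturbance - speed)
        if k % 100 == 0:
            print(f"t={k*DT:4.2f}s  duty={duty:5.2f}  speed={speed:7.1f} rpm")
    return speed

simulate()
```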
{
"docid": "d5e27463c14210420833554438f05ed3",
"text": "During development, the healthy human brain constructs a host of large-scale, distributed, function-critical neural networks. Neurodegenerative diseases have been thought to target these systems, but this hypothesis has not been systematically tested in living humans. We used network-sensitive neuroimaging methods to show that five different neurodegenerative syndromes cause circumscribed atrophy within five distinct, healthy, human intrinsic functional connectivity networks. We further discovered a direct link between intrinsic connectivity and gray matter structure. Across healthy individuals, nodes within each functional network exhibited tightly correlated gray matter volumes. The findings suggest that human neural networks can be defined by synchronous baseline activity, a unified corticotrophic fate, and selective vulnerability to neurodegenerative illness. Future studies may clarify how these complex systems are assembled during development and undermined by disease.",
"title": ""
},
{
"docid": "d15add461f0ca58de13b3dc975f7fef7",
"text": "A frequency compensation technique improving characteristic of power supply rejection ratio (PSRR) for two-stage operational amplifiers is presented. This technique is applicable to most known two-stage amplifier configurations. The detailed small-signal analysis of an exemplary amplifier with the proposed compensation and a comparison to its basic version reveal several benefits of the technique which can be effectively exploited in continuous-time filter designs. This comparison shows the possibility of PSRR bandwidth broadening of more than a decade, significant reduction of chip area, the unity-gain bandwidth and power consumption improvement. These benefits are gained at the cost of a non-monotonic phase characteristic of the open-loop differential voltage gain and limitation of a close-loop voltage gain. A prototype-integrated circuit, fabricated based on 0.35 mm complementary metal-oxide semiconductor technology, was used for the technique verification. Two pairs of amplifiers with the classical Miller compensation and a cascoded input stage were measured and compared to their improved counterparts. The measurement data fully confirm the theoretically predicted advantages of the proposed compensation technique.",
"title": ""
},
{
"docid": "d34bfe5e6c374763f5fdf1987e4ea8ce",
"text": "BACKGROUND\nIt is not clear whether relaxation therapies are more or less effective than cognitive and behavioural therapies in the treatment of anxiety. The aims of the present study were to examine the effects of relaxation techniques compared to cognitive and behavioural therapies in reducing anxiety symptoms, and whether they have comparable efficacy across disorders.\n\n\nMETHOD\nWe conducted a meta-analysis of 50 studies (2801 patients) comparing relaxation training with cognitive and behavioural treatments of anxiety.\n\n\nRESULTS\nThe overall effect size (ES) across all anxiety outcomes, with only one combined ES in each study, was g = -0.27 [95% confidence interval (CI) = -0.41 to -0.13], favouring cognitive and behavioural therapies (number needed to treat = 6.61). However, no significant difference between relaxation and cognitive and behavioural therapies was found for generalized anxiety disorder, panic disorder, social anxiety disorder and specific phobias (considering social anxiety and specific phobias separately). Heterogeneity was moderate (I2 = 52; 95% CI = 33-65). The ES was significantly associated with age (p < 0.001), hours of cognitive and/or behavioural therapy (p = 0.015), quality of intervention (p = 0.007), relaxation treatment format (p < 0.001) and type of disorder (p = 0.008), explaining an 82% of variance.\n\n\nCONCLUSIONS\nRelaxation seems to be less effective than cognitive and behavioural therapies in the treatment of post-traumatic stress disorder, and obsessive-compulsive disorder and it might also be less effective at 1-year follow-up for panic, but there is no evidence that it is less effective for other anxiety disorders.",
"title": ""
},
{
"docid": "095fa44019b071dc842779a7f22a2f8a",
"text": "The high-voltage gain converter is widely employed in many industry applications, such as photovoltaic systems, fuel cell systems, electric vehicles, and high-intensity discharge lamps. This paper presents a novel single-switch high step-up nonisolated dc-dc converter integrating coupled inductor with extended voltage doubler cell and diode-capacitor techniques. The proposed converter achieves extremely large voltage conversion ratio with appropriate duty cycle and reduction of voltage stress on the power devices. Moreover, the energy stored in leakage inductance of coupled inductor is efficiently recycled to the output, and the voltage doubler cell also operates as a regenerative clamping circuit, alleviating the problem of potential resonance between the leakage inductance and the junction capacitor of output diode. These characteristics make it possible to design a compact circuit with high static gain and high efficiency for industry applications. In addition, the unexpected high-pulsed input current in the converter with coupled inductor is decreased. The operating principles and the steady-state analyses of the proposed converter are discussed in detail. Finally, a prototype circuit is implemented in the laboratory to verify the performance of the proposed converter.",
"title": ""
},
{
"docid": "5618f1415cace8bb8c4773a7e44a4e3f",
"text": "Methods of evaluating and comparing the performance of diagnostic tests are of increasing importance as new tests are developed and marketed. When a test is based on an observed variable that lies on a continuous or graded scale, an assessment of the overall value of the test can be made through the use of a receiver operating characteristic (ROC) curve. The curve is constructed by varying the cutpoint used to determine which values of the observed variable will be considered abnormal and then plotting the resulting sensitivities against the corresponding false positive rates. When two or more empirical curves are constructed based on tests performed on the same individuals, statistical analysis on differences between curves must take into account the correlated nature of the data. This paper presents a nonparametric approach to the analysis of areas under correlated ROC curves, by using the theory on generalized U-statistics to generate an estimated covariance matrix.",
"title": ""
},
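The passage above notes that the area under an empirical ROC curve can be estimated and compared nonparametrically. The sketch below computes a single AUC as the Mann-Whitney U-statistic (ties counted as one half); the toy scores are assumptions, and the paper's covariance machinery for correlated curves is not reproduced here.

```python
# Nonparametric AUC: the probability that a randomly chosen diseased case
# scores higher than a randomly chosen non-diseased case (Mann-Whitney form).
import numpy as np

def auc_mann_whitney(pos_scores, neg_scores):
    pos = np.asarray(pos_scores, dtype=float)[:, None]
    neg = np.asarray(neg_scores, dtype=float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()   # ties count one half
    return wins / (pos.size * neg.size)

rng = np.random.default_rng(0)
diseased = rng.normal(1.0, 1.0, size=200)      # assumed test values, diseased group
healthy = rng.normal(0.0, 1.0, size=300)       # assumed test values, healthy group
print("estimated AUC:", round(float(auc_mann_whitney(diseased, healthy)), 3))
```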
{
"docid": "9a8b397bb95b9123a8d41342a850a456",
"text": "We present a novel task: the chronological classification of Hafez’s poems (ghazals). We compiled a bilingual corpus in digital form, with consistent idiosyncratic properties. We have used Hooman’s labeled ghazals in order to train automatic classifiers to classify the remaining ghazals. Our classification framework uses a Support Vector Machine (SVM) classifier with similarity features based on Latent Dirichlet Allocation (LDA). In our analysis of the results we use the LDA topics’ main terms that are passed on to a Principal Component Analysis (PCA) module.",
"title": ""
},
{
"docid": "c80795f19f899276d0fa03d9e6ca4651",
"text": "In this paper, we present a computer forensic method for detecting timestamp forgeries in the Windows NTFS file system. It is difficult to know precisely that the timestamps have been changed by only examining the timestamps of the file itself. If we can find the past timestamps before any changes to the file are made, this can act as evidence of file time forgery. The log records operate on files and leave large amounts of information in the $LogFile that can be used to reconstruct operations on the files and also used as forensic evidence. Log record with 0x07/0x07 opcode in the data part of Redo/Undo attribute has timestamps which contain past-and-present timestamps. The past-and-present timestamps can be decisive evidence to indicate timestamp forgery, as they contain when and how the timestamps were changed. We used file time change tools that can easily be found on Internet sites. The patterns of the timestamp change created by the tools are different compared to those of normal file operations. Seven file operations have ten timestamp change patterns in total by features of timestamp changes in the $STANDARD_INFORMATION attribute and the $FILE_NAME attribute. We made rule sets for detecting timestamp forgery based on using difference comparison between changes in timestamp patterns by the file time change tool and normal file operations. We apply the forensic rule sets for “.txt”, “.docx” and “.pdf” file types, and we show the effectiveness and validity of the proposed method. The importance of this research lies in the fact that we can find the past time in $LogFile, which gives decisive evidence of timestamp forgery. This makes the timestamp active evidence as opposed to simply being passive evidence. a 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
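The passage above builds its rule sets from timestamp-change patterns across the $STANDARD_INFORMATION and $FILE_NAME attributes and the $LogFile. The $LogFile parsing and the paper's ten patterns are not reproduced; the sketch below only illustrates one commonly cited consistency check on already-extracted timestamps, with hypothetical values.

```python
# One illustrative consistency check between $STANDARD_INFORMATION ($SI) and
# $FILE_NAME ($FN) timestamps: user-level time-change tools typically rewrite
# $SI but not $FN, so $SI times earlier than $FN times are suspicious.
from datetime import datetime

def suspicious_timestamps(si_times, fn_times):
    """si_times / fn_times: dicts with 'created' and 'modified' datetimes."""
    flags = []
    for key in ("created", "modified"):
        if si_times[key] < fn_times[key]:
            flags.append(f"$SI {key} predates $FN {key}")
    return flags

# hypothetical extracted values for a single file
si = {"created": datetime(2010, 1, 1, 9, 0), "modified": datetime(2010, 1, 1, 9, 0)}
fn = {"created": datetime(2012, 6, 3, 14, 22), "modified": datetime(2012, 6, 3, 14, 25)}
print(suspicious_timestamps(si, fn) or ["no anomaly flagged"])
```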
{
"docid": "066a570a0bc668f4b39a56c2ced1d547",
"text": "Matrix Factorization (MF) based Collaborative Filtering (CF) have proved to be a highly accurate and scalable approach to recommender systems. In MF based CF, the learning rate is a key factor affecting the recommendation accuracy and convergence rate; however, this essential parameter is difficult to decide, since the recommender has to keep the balance between the recommendation accuracy and convergence rate. In this work, we choose the Regularized Matrix Factorization (RMF) based CF as the base model to discuss the effect of the learning rate in MF based CF, trying to deal with the dilemma of learning rate tuning through learning rate adaptation. First of all, we empirically validate the affection caused by the change of the learning rate on the recommendation performance. Subsequently, we integrate three sophisticated learning rate adapting strategies into RMF, including the Deterministic Step Size Adaption (DSSA), the Incremental Delta Bar Delta (IDBD), and the Stochastic Meta Decent (SMD). Thereafter, by analyzing the characteristics of the parameter update in RMF, we further propose the Gradient Cosine Adaption (GCA). The experimental results on five public large datasets demonstrate that by employing GCA, RMF could maintain good balance between accuracy and convergence rate, especially with small learning rate values. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
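The passage above studies learning-rate adaptation for regularized matrix factorization (RMF). The sketch below is a minimal SGD trainer for RMF on synthetic ratings in which the step size is scaled by the cosine between consecutive gradients of the user factor, only to convey the flavor of gradient-cosine adaptation; it is not the paper's GCA update, and the data, sizes, and gains are assumptions.

```python
# SGD for regularized matrix factorization on synthetic ratings, with a toy
# learning-rate adaptation driven by the cosine of consecutive user gradients.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 50, 40, 5
P_true = rng.normal(size=(n_users, k))
Q_true = rng.normal(size=(n_items, k))
ratings = [(u, i, float(P_true[u] @ Q_true[i]))
           for u in range(n_users) for i in range(n_items) if rng.random() < 0.2]

P = 0.1 * rng.normal(size=(n_users, k))
Q = 0.1 * rng.normal(size=(n_items, k))
lam, lr = 0.02, 0.01
prev_grad = {u: np.zeros(k) for u in range(n_users)}

for epoch in range(30):
    sq_err = 0.0
    for idx in rng.permutation(len(ratings)):
        u, i, r = ratings[idx]
        err = r - P[u] @ Q[i]
        g_u = -err * Q[i] + lam * P[u]            # gradient w.r.t. user factor
        g_i = -err * P[u] + lam * Q[i]            # gradient w.r.t. item factor
        denom = np.linalg.norm(g_u) * np.linalg.norm(prev_grad[u])
        cos = (g_u @ prev_grad[u]) / denom if denom > 1e-12 else 0.0
        step = lr * (1.0 + 0.5 * cos)             # speed up when gradients agree
        P[u] -= step * g_u
        Q[i] -= step * g_i
        prev_grad[u] = g_u
        sq_err += err ** 2
    if epoch % 10 == 0:
        print(f"epoch {epoch:2d}  RMSE = {np.sqrt(sq_err / len(ratings)):.3f}")
```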
{
"docid": "19da793660c1ab90b0da41842efa790b",
"text": "In this paper, we propose a method to optimally set the tap position of voltage regulation transformers in distribution systems. We cast the problem as a rank-constrained semidefinite program (SDP), in which the transformer tap ratios are captured by 1) introducing a secondary-side “virtual” bus per transformer, and 2) constraining the values that these virtual bus voltages can take according to the limits on the tap positions. Then, by relaxing the non-convex rank-1 constraint in the rank-constrained SDP formulation, one obtains a convex SDP problem. The tap positions are determined as the ratio between the primary-side bus voltage and the secondary-side virtual bus voltage that result from the optimal solution of the relaxed SDP, and then rounded to the nearest discrete tap values. To efficiently solve the relaxed SDP, we propose a distributed algorithm based on the alternating direction method of multipliers (ADMM). We present several case studies with single- and three-phase distribution systems to demonstrate the effectiveness of the distributed ADMM-based algorithm, and compare its results with centralized solution methods.",
"title": ""
}
] |
scidocsrr
|
c9fd29ee073fbe93a03a4d498e1ceea9
|
An adaptive ontology mapping approach with neural network based constraint satisfaction
|
[
{
"docid": "fd96e152e8579b0e8027ae7131b70fb1",
"text": "(Semi-)automatic mapping — also called (semi-)automatic alignment — of ontologies is a core task to achieve interoperability when two agents or services use different ontologies. In the existing literature, the focus ha s so far been on improving the quality of mapping results. We here consider QOM, Q uick Ontology Mapping, as a way to trade off between effectiveness (i.e. qu ality) and efficiency of the mapping generation algorithms. We show that QOM ha s lower run-time complexity than existing prominent approaches. Then, we show in experiments that this theoretical investigation translates into practical bene fits. While QOM gives up some of the possibilities for producing high-quality resu lts in favor of efficiency, our experiments show that this loss of quality is mar gin l.",
"title": ""
},
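The passage above discusses QOM's efficiency/quality trade-off only at a high level. As a hedged stand-in for the simplest ingredient of such matchers, the sketch below aligns two small assumed concept lists by label string similarity with `difflib`; QOM itself combines many more features, so this is not its algorithm.

```python
# Greedy label-similarity alignment between two tiny assumed concept lists,
# keeping pairs above a similarity threshold.
from difflib import SequenceMatcher

onto_a = ["Person", "Author", "Publication", "ConferencePaper", "Organization"]
onto_b = ["Human", "Writer", "Publication", "Conference_Paper", "Organisation"]

def similarity(a, b):
    norm = lambda s: s.lower().replace("_", "")
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

def align(concepts_a, concepts_b, threshold=0.7):
    candidates = sorted(((similarity(a, b), a, b)
                         for a in concepts_a for b in concepts_b), reverse=True)
    used_a, used_b, mapping = set(), set(), []
    for score, a, b in candidates:
        if score >= threshold and a not in used_a and b not in used_b:
            mapping.append((a, b, round(score, 2)))
            used_a.add(a)
            used_b.add(b)
    return mapping

for a, b, score in align(onto_a, onto_b):
    print(f"{a:16s} <-> {b:16s}  similarity={score}")
```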
{
"docid": "1d1e89d6f1db290f01d296394d03a71b",
"text": "Ontology mapping is seen as a solution provider in today’s landscape of ontology research. As the number of ontologies that are made publicly available and accessible on the Web increases steadily, so does the need for applications to use them. A single ontology is no longer enough to support the tasks envisaged by a distributed environment like the Semantic Web. Multiple ontologies need to be accessed from several applications. Mapping could provide a common layer from which several ontologies could be accessed and hence could exchange information in semantically sound manners. Developing such mappings has been the focus of a variety of works originating from diverse communities over a number of years. In this article we comprehensively review and present these works. We also provide insights on the pragmatics of ontology mapping and elaborate on a theoretical approach for defining ontology mapping.",
"title": ""
}
] |
[
{
"docid": "c12c9fa98f672ec1bfde404d5bf54a35",
"text": "Speech recognition has become an important feature in smartphones in recent years. Different from traditional automatic speech recognition, the speech recognition on smartphones can take advantage of personalized language models to model the linguistic patterns and wording habits of a particular smartphone owner better. Owing to the popularity of social networks in recent years, personal texts and messages are no longer inaccessible. However, data sparseness is still an unsolved problem. In this paper, we propose a three-step adaptation approach to personalize recurrent neural network language models (RNNLMs). We believe that its capability to model word histories as distributed representations of arbitrary length can help mitigate the data sparseness problem. Furthermore, we also propose additional user-oriented features to empower the RNNLMs with stronger capabilities for personalization. The experiments on a Facebook dataset showed that the proposed method not only drastically reduced the model perplexity in preliminary experiments, but also moderately reduced the word error rate in n-best rescoring tests.",
"title": ""
},
{
"docid": "59f3c511765c52702b9047a688256532",
"text": "Mobile robots are dependent upon a model of the environment for many of their basic functions. Locally accurate maps are critical to collision avoidance, while large-scale maps (accurate both metrically and topologically) are necessary for efficient route planning. Solutions to these problems have immediate and important applications to autonomous vehicles, precision surveying, and domestic robots. Building accurate maps can be cast as an optimization problem: find the map that is most probable given the set of observations of the environment. However, the problem rapidly becomes difficult when dealing with large maps or large numbers of observations. Sensor noise and non-linearities make the problem even more difficult— especially when using inexpensive (and therefore preferable) sensors. This thesis describes an optimization algorithm that can rapidly estimate the maximum likelihood map given a set of observations. The algorithm, which iteratively reduces map error by considering a single observation at a time, scales well to large environments with many observations. The approach is particularly robust to noise and non-linearities, quickly escaping local minima that trap current methods. Both batch and online versions of the algorithm are described. In order to build a map, however, a robot must first be able to recognize places that it has previously seen. Limitations in sensor processing algorithms, coupled with environmental ambiguity, make this difficult. Incorrect place recognitions can rapidly lead to divergence of the map. This thesis describes a place recognition algorithm that can robustly handle ambiguous data. We evaluate these algorithms on a number of challenging datasets and provide quantitative comparisons to other state-of-the-art methods, illustrating the advantages of our methods.",
"title": ""
},
{
"docid": "e0c832f48352a5cb107a41b0907ad707",
"text": "In the same commercial ecosystem, although the different main bodies of logistics service such as transportation, suppliers and purchasers drive their interests differently, all the different stakeholders in the same business or consumers coexist mutually and share resources with each other. Based on this, this paper constructs a model of bonded logistics supply chain management based on the theory of commercial ecology, focusing on the logistics mode of transportation and multi-attribute behavior decision-making model based on the risk preference of the mode of transport of goods. After the weight is divided, this paper solves the model with ELECTRE-II algorithm and provides a scientific basis for decision-making of bonded logistics supply chain management through the decision model and ELECTRE-II algorithm.",
"title": ""
},
{
"docid": "3810d56b05b19a0950d7b04168d39d62",
"text": "This article presents a method for determining smooth and time-optimal path constrained trajectories for robotic manipulators and investigates the performance of these trajectories both through simulations and experiments. The desired smoothness of the trajectory is imposed through limits on the torque rates. The third derivative of the path parameter with respect to time, the pseudo-jerk, is the controlled input. The limits on the actuator torques translate into state-dependent limits on the pseudoacceleration. The time-optimal control objective is cast as an optimization problem by using cubic splines to parametrize the state space trajectory. The optimization problem is solved using the flexible tolerance method. The experimental results presented show that the planned smooth trajectories provide superior feasible time-optimal motion. Q 2000 John Wiley & Sons, Inc.",
"title": ""
},
{
"docid": "cc6ee50cc4bdb39dbe213066f1fc9f82",
"text": "Object detection when provided image-level labels instead of instance-level labels (i.e., bounding boxes) during training is an important problem in computer vision, since large scale image datasets with instance-level labels are extremely costly to obtain. In this paper, we address this challenging problem by developing an ExpectationMaximization (EM) based object detection method using deep convolutional neural networks (CNNs). Our method is applicable to both the weakly-supervised and semisupervised settings. Extensive experiments on PASCAL VOC 2007 benchmark show that (1) in the weakly supervised setting, our method provides significant detection performance improvement over current state-of-the-art methods, (2) having access to a small number of strongly (instance-level) annotated images, our method can almost match the performace of the fully supervised Fast RCNN. We share our source code at https://github.com/",
"title": ""
},
{
"docid": "9c05452b964c67b8f79ce7dfda4a72e5",
"text": "The Internet is evolving rapidly toward the future Internet of Things (IoT) which will potentially connect billions or even trillions of edge devices which could generate huge amount of data at a very high speed and some of the applications may require very low latency. The traditional cloud infrastructure will run into a series of difficulties due to centralized computation, storage, and networking in a small number of datacenters, and due to the relative long distance between the edge devices and the remote datacenters. To tackle this challenge, edge cloud and edge computing seem to be a promising possibility which provides resources closer to the resource-poor edge IoT devices and potentially can nurture a new IoT innovation ecosystem. Such prospect is enabled by a series of emerging technologies, including network function virtualization and software defined networking. In this survey paper, we investigate the key rationale, the state-of-the-art efforts, the key enabling technologies and research topics, and typical IoT applications benefiting from edge cloud. We aim to draw an overall picture of both ongoing research efforts and future possible research directions through comprehensive discussions.",
"title": ""
},
{
"docid": "70ef4c1904d7d62a99e6c1dda53da095",
"text": "This position paper describes the initial research assumptions to improve music recommendations by including personality and emotional states. By including these psychological factors, we believe that the accuracy of the recommendation can be enhanced. We will give attention to how people use music to regulate their emotional states, and how this regulation is related to their personality. Furthermore, we will focus on how to acquire data from social media (i.e., microblogging sites such as Twitter) to predict the current emotional state of users. Finally, we will discuss how we plan to connect the correct emotionally laden music pieces to support the emotion regulation style of users.",
"title": ""
},
{
"docid": "b1313b777c940445eb540b1e12fa559e",
"text": "In this paper we explore the correlation between the sound of words and their meaning, by testing if the polarity (‘good guy’ or ‘bad guy’) of a character’s role in a work of fiction can be predicted by the name of the character in the absence of any other context. Our approach is based on phonological and other features proposed in prior theoretical studies of fictional names. These features are used to construct a predictive model over a manually annotated corpus of characters from motion pictures. By experimenting with different mixtures of features, we identify phonological features as being the most discriminative by comparison to social and other types of features, and we delve into a discussion of specific phonological and phonotactic indicators of a character’s role’s polarity.",
"title": ""
},
{
"docid": "7d3950bbd817ddc385014c9091c48b0d",
"text": "With the rapid development of ubiquitous computing and mobile communication technologies, the traditional business model will change drastically. As a logical extension of e-commerce and m-commerce, ubiquitous commerce (u-commerce) research and application are currently under transition with a history of numerous tried and failed solutions, and a future of promising but yet uncertain possibilities with potential new technology innovations. At this point of the development, we propose a suitable framework and organize the u-commerce research under the proposed classification scheme. The current situation outlined by the scheme has been addressed by exploratory and early phase studies. We hope the findings of this research will provide useful insights for anyone who is interested in u-commerce. The paper also provides some future directions for research.",
"title": ""
},
{
"docid": "e364db9141c85b1f260eb3a9c1d42c5b",
"text": "Ten US presidential elections ago in Chapel Hill, North Carolina, the agenda of issues that a small group of undecided voters regarded as the most important ones of the day was compared with the news coverage of public issues in the news media these voters used to follow the campaign (McCombs and Shaw, 1972). Since that election, the principal finding in Chapel Hill*/those aspects of public affairs that are prominent in the news become prominent among the public*/has been replicated in hundreds of studies worldwide. These replications include both election and non-election settings for a broad range of public issues and other aspects of political communication and extend beyond the United States to Europe, Asia, Latin America and Australia. Recently, as the news media have expanded to include online newspapers available on the Web, agenda-setting effects have been documented for these new media. All in all, this research has grown far beyond its original domain*/the transfer of salience from the media agenda to the public agenda*/and now encompasses five distinct stages of theoretical attention. Until very recently, the ideas and findings that detail these five stages of agenda-setting theory have been scattered in a wide variety of research journals, book chapters and books published in many different countries. As a result, knowledge of agenda setting has been very unevenly distributed. Scholars designing new studies often had incomplete knowledge of previous research, and graduate students entering the field of mass communication had difficulty learning in detail what we know about the agenda-setting role of the mass media. This situation was my incentive to write Setting the Agenda: the mass media and public opinion, which was published in England in late 2004 and in the United States early in 2005. My primary goal was to gather the principal ideas and empirical findings about agenda setting in one place. John Pavlik has described this integrated presentation as the Gray’s Anatomy of agenda setting (McCombs, 2004, p. xii). Shortly after the US publication of Setting the Agenda , I received an invitation from Journalism Studies to prepare an overview of agenda setting. The timing was wonderfully fortuitous because a book-length presentation of what we have learned in the years since Chapel Hill could be coupled with a detailed discussion in a major journal of current trends and future likely directions in agenda-setting research. Journals are the best venue for advancing the stepby-step accretion of knowledge because they typically reach larger audiences than books, generate more widespread discussion and offer more space for the focused presentation of a particular aspect of a research area. Books can then periodically distill this knowledge. Given the availability of a detailed overview in Setting the Agenda , the presentation here of the five stages of agenda-setting theory emphasizes current and near-future research questions in these areas. Moving beyond these specific Journalism Studies, Volume 6, Number 4, 2005, pp. 543 557",
"title": ""
},
{
"docid": "b7c2c258fa5a94c20a4c99c16ec6ee88",
"text": "Many gender differences are thought to result from interactions between inborn factors and sociocognitive processes that occur after birth. There is controversy, however, over the causes of gender-typed preferences for the colors pink and blue, with some viewing these preferences as arising solely from sociocognitive processes of gender development. We evaluated preferences for gender-typed colors, and compared them to gender-typed toy and activity preferences in 126 toddlers on two occasions separated by 6-8 months (at Time 1, M = 29 months; range 20-40). Color preferences were assessed using color cards and neutral toys in gender-typed colors. Gender-typed toy and activity preferences were assessed using a parent-report questionnaire, the Preschool Activities Inventory. Color preferences were also assessed for the toddlers' parents using color cards. A gender difference in color preferences was present between 2 and 3 years of age and strengthened near the third birthday, at which time it was large (d > 1). In contrast to their parents, toddlers' gender-typed color preferences were stronger and unstable. Gender-typed color preferences also appeared to establish later and were less stable than gender-typed toy and activity preferences. Gender-typed color preferences were largely uncorrelated with gender-typed toy and activity preferences. These results suggest that the factors influencing gender-typed color preferences and gender-typed toy and activity preferences differ in some respects. Our findings suggest that sociocognitive influences and play with gender-typed toys that happen to be made in gender-typed colors contribute to toddlers' gender-typed color preferences.",
"title": ""
},
{
"docid": "fc1adf6f1efdb168bbc5febd29aa09c1",
"text": "Biomedical named entity recognition (NER) is a fundamental task in text mining of medical documents and has many applications. Deep learning based approaches to this task have been gaining increasing attention in recent years as their parameters can be learned endto-end without the need for hand-engineered features. However, these approaches rely on high-quality labeled data, which is expensive to obtain. To address this issue, we investigate how to use unlabeled text data to improve the performance of NER models. Specifically, we train a bidirectional language model (BiLM) on unlabeled data and transfer its weights to “pretrain” an NER model with the same architecture as the BiLM, which results in a better parameter initialization of the NER model. We evaluate our approach on four benchmark datasets for biomedical NER and show that it leads to a substantial improvement in the F1 scores compared with the state-of-the-art approaches. We also show that BiLM weight transfer leads to a faster model training and the pretrained model requires fewer training examples to achieve a particular F1 score.",
"title": ""
},
{
"docid": "64306a76b61bbc754e124da7f61a4fbe",
"text": "For over 50 years, electron beams have been an important modality for providing an accurate dose of radiation to superficial cancers and disease and for limiting the dose to underlying normal tissues and structures. This review looks at many of the important contributions of physics and dosimetry to the development and utilization of electron beam therapy, including electron treatment machines, dose specification and calibration, dose measurement, electron transport calculations, treatment and treatment-planning tools, and clinical utilization, including special procedures. Also, future changes in the practice of electron therapy resulting from challenges to its utilization and from potential future technology are discussed.",
"title": ""
},
{
"docid": "ad3c96a88a0cda684b466c14a9982d7b",
"text": "Gamification has been applied in software engineering contexts, and more recently in requirements engineering with the purpose of improving the motivation and engagement of people performing specific engineering tasks. But often an objective evaluation that the resulting gamified tasks successfully meet the intended goal is missing. On the other hand, current practices in designing gamified processes seem to rest on a try, test and learn approach, rather than on first principles design methods. Thus empirical evaluation should play an even more important role.We combined gamification and automated reasoning techniques to support collaborative requirements prioritization in software evolution. A first prototype has been evaluated in the context of three industrial use cases. To further investigate the impact of specific game elements, namely point-based elements, we performed a quasi-experiment comparing two versions of the tool, with and without pointsification. We present the results from these two empirical evaluations, and discuss lessons learned.",
"title": ""
},
{
"docid": "c3e371b0c13f431cbf9b9278a6d40ace",
"text": "Until today, most lecturers in universities are found still using the conventional methods of taking students' attendance either by calling out the student names or by passing around an attendance sheet for students to sign confirming their presence. In addition to the time-consuming issue, such method is also at higher risk of having students cheating about their attendance, especially in a large classroom. Therefore a method of taking attendance by employing an application running on the Android platform is proposed in this paper. This application, once installed can be used to download the students list from a designated web server. Based on the downloaded list of students, the device will then act like a scanner to scan each of the student cards one by one to confirm and verify the student's presence. The device's camera will be used as a sensor that will read the barcode printed on the students' cards. The updated attendance list is then uploaded to an online database and can also be saved as a file to be transferred to a PC later on. This system will help to eliminate the current problems, while also promoting a paperless environment at the same time. Since this application can be deployed on lecturers' own existing Android devices, no additional hardware cost is required.",
"title": ""
},
{
"docid": "de81c39f2a87229710009776323b8a3b",
"text": "Real-time bidding (RTB) is an important mechanism in online display advertising, where a proper bid for each page view plays an essential role for good marketing results. Budget constrained bidding is a typical scenario in RTB where the advertisers hope to maximize the total value of the winning impressions under a pre-set budget constraint. However, the optimal bidding strategy is hard to be derived due to the complexity and volatility of the auction environment. To address these challenges, in this paper, we formulate budget constrained bidding as a Markov Decision Process and propose a model-free reinforcement learning framework to resolve the optimization problem. Our analysis shows that the immediate reward from environment is misleading under a critical resource constraint. Therefore, we innovate a reward function design methodology for the reinforcement learning problems with constraints. Based on the new reward design, we employ a deep neural network to learn the appropriate reward so that the optimal policy can be learned effectively. Different from the prior model-based work, which suffers from the scalability problem, our framework is easy to be deployed in large-scale industrial applications. The experimental evaluations demonstrate the effectiveness of our framework on large-scale real datasets.",
"title": ""
},
{
"docid": "583b8cda1ef421011f7801bc35b82b8b",
"text": "This paper presents a natural language processing based automated system for NL text to OO modeling the user requirements and generating code in multi-languages. A new rule-based model is presented for analyzing the natural languages (NL) and extracting the relative and required information from the given software requirement notes by the user. User writes the requirements in simple English in a few paragraphs and the designed system incorporates NLP methods to analyze the given script. First the NL text is semantically analyzed to extract classes, objects and their respective, attributes, methods and associations. Then UML diagrams are generated on the bases of previously extracted information. The designed system also provides with the respective code automatically of the already generated diagrams. The designed system provides a quick and reliable way to generate UML diagrams to save the time and budget of both the user and system analyst.",
"title": ""
},
{
"docid": "7b66188e4e61ff4837ad53e29110c1f2",
"text": "Carrier aggregation (CA) is an inevitable technology to improve the data transfer rate with widening operation bandwidths, while the current frequency assignment of cellular bands is dispersed over. In the frequency division duplex (FDD) CA, acoustic multiplexers are one of the most important key devices. This paper describes the design technologies for the surface acoustic wave (SAW) multiplexers, such as filter topologies, matching network configurations, SAW characteristics and so on. In the case of narrow duplex gap bandwidth such as Band4 and Band25, the characteristics of SAW resonators such as unloaded quality factor (Q) and out-of band impedances act as extremely important role to realize the low insertion loss and the steep skirt characteristics. In order to solve these challenges, a new type high Q SAW resonator that is named IHP-SAW is introduced. The results of a novel quadplexer of Band4-Band25 using those technologies show enhanced performances.",
"title": ""
},
{
"docid": "f0b2d706ba338a3e60286e0ca002ab2c",
"text": "Spherical mobile robot has good static and dynamic stability, which allows the robot to face different kinds of obstacles and moving surfaces, but the lack of effective control methods has hindered its application and development. In this paper, we propose a direct approach to path planning of a 2DOFs (Degrees of Freedom) spherical robot based on Bellman’s “Dynamic Programming” (DP). While other path planning schemes rely on pre-planned optimal trajectories and/or feedback control techniques, in DP approach there is no need to design a control system because DP yields the optimal control inputs in closed loop or feedback form i.e. after completing DP table, for every state in the admissible region the optimal control inputs are known and the robot can move toward the final position. This enables the robot to function in semior even nonobservable environments. Results from many simulated experiments show that the proposed approach is capable of adopting an optimal path towards a predefined goal point from any given position/orientation in the admissible region. Keywords— spherical mobile robot; path planning; Dynamic Programming.",
"title": ""
},
{
"docid": "9f19f1e2f20f21e31eb3757df5de50b8",
"text": "Sources of complementary information are connected when we link the user accounts belonging to the same user across different domains or devices. The expanded information promotes the development of a wide range of applications, such as cross-domain prediction, cross-domain recommendation, and advertisement. Due to the great significance of user account linkage, there are increasing research works on this study. With the widespread popularization of GPS-enabled mobile devices, linking user accounts with location data has become an important and promising research topic. Being different from most existing studies in this domain that only focus on the effectiveness, we propose novel approaches to improve both effectiveness and efficiency of user account linkage. In this paper, a kernel density estimation (KDE) based method has been proposed to improve the accuracy by alleviating the data sparsity problem in measuring users' similarities. To improve the efficiency, we develop a grid-based structure to organize location data to prune the search space. The extensive experiments conducted on two real-world datasets demonstrate the superiority of the proposed approach in terms of both effectiveness and efficiency compared with the state-of-art methods.",
"title": ""
}
] |
scidocsrr
|
3b2624218688e5f3d458a2add5533b45
|
INTEGRATED BRAKING AND STEERING MODEL PREDICTIVE CONTROL APPROACH IN AUTONOMOUS VEHICLES
|
[
{
"docid": "7e6bc406394f5621b02acb9f0187667f",
"text": "A model predictive control (MPC) approach to active steering is presented for autonomous vehicle systems. The controller is designed to stabilize a vehicle along a desired path while rejecting wind gusts and fulfilling its physical constraints. Simulation results of a side wind rejection scenario and a double lane change maneuver on slippery surfaces show the benefits of the systematic control methodology used. A trade-off between the vehicle speed and the required preview on the desired path for vehicle stabilization is highlighted",
"title": ""
},
{
"docid": "01d77c925c62a7d26ff294231b449e95",
"text": "Al~tmd--We refer to Model Predictive Control (MPC) as that family of controllers in which there is a direct use of an explicit and separately identifiable model. Control design methods based on the MPC concept have found wide acceptance in industrial applications and have been studied by academia. The reason for such popularity is the ability of MPC designs to yield high performance control systems capable of operating without expert intervention for long periods of time. In this paper the issues of importance that any control system should address are stated. MPC techniques are then reviewed in the light of these issues in order to point out their advantages in design and implementation. A number of design techniques emanating from MPC, namely Dynamic Matrix Control, Model Algorithmic Control, Inferential Control and Internal Model Control, are put in perspective with respect to each other and the relation to more traditional methods like Linear Quadratic Control is examined. The flexible constraint handling capabilities of MPC are shown to be a significant advantage in the context of the overall operating objectives of the process industries and the 1-, 2-, and oo-norm formulations of the performance objective are discussed. The application of MPC to non-linear systems is examined and it is shown that its main attractions carry over. Finally, it is explained that though MPC is not inherently more or less robust than classical feedback, it can be adjusted more easily for robustness.",
"title": ""
}
] |
[
{
"docid": "7196847f0e1c333cf24ebac012c6d194",
"text": "Upper airway diseases including allergic rhinitis, chronic rhinosinusitis with or without polyps, and cystic fibrosis are characterized by substantially different inflammatory profiles. Traditionally, studies on the association of specific bacterial patterns with inflammatory profiles of diseases had been dependent on bacterial culturing. In the past 30 years, molecular biology methods have allowed bacterial culture free studies of microbial communities, revealing microbiota much more diverse than previously recognized including those found in the upper airway. At presence, the study of the pathophysiology of upper airway diseases is necessary to establish the relationship between the microbiome and inflammatory patterns to find their clinical reflections and also their possible causal relationships. Such investigations may elucidate the path to therapeutic approaches in correcting an imbalanced microbiome. In the review we summarized techniques used and the current knowledge on the microbiome of upper airway diseases, the limitations and pitfalls, and identified areas of interest for further research.",
"title": ""
},
{
"docid": "67fdf55cbd317cc46b871763772c777f",
"text": "Aims: The objective of this study was to assess the current state of continuous auditing in the state departments in Kenya and to adapt a framework to implement continuous auditing by the Public Sector Audit Organization. Study Design: Adoption of existing model and survey using questionnaires. Place and Duration of Study: Kenya, 2013. Methodology: Existing continuous auditing models were studied and the Integrated Continuous Auditing, Monitoring and Assurance Conceptual Model was adopted for use. The model was tested using data collected using questionnaires. Data was collected from 76 auditors in the Public Sector Audit Organization. A database system of a government Ministry was used to demonstrate how data can be obtained directly from a client system. Results: The study found the need for training in the skills required for continuous auditing and the acquisition of IT resources and infrastructure were necessary in realizing continuous auditing. Conclusion: The paper shows that Public Sector Audit Organization in Kenya, like institutions in other countries such as USA [8] and Australia [11], are preparing to advance from traditional audit to continuous auditing. The Integrated Continuous Auditing, Monitoring and Assurance Conceptual Model would offer a good starting point. Original Research Article British Journal of Economics, Management & Trade, 4(11): 1644-1654, 2014 1645",
"title": ""
},
{
"docid": "285f46045afe4ded9a2fcabfcfe9ef02",
"text": "Spin-transfer torque magnetic memory (STT-MRAM) has gained significant research interest due to its nonvolatility and zero standby leakage, near unlimited endurance, excellent integration density, acceptable read and write performance, and compatibility with CMOS process technology. However, several obstacles need to be overcome for STT-MRAM to become the universal memory technology. This paper first reviews the fundamentals of STT-MRAM and discusses key experimental breakthroughs. The state of the art in STT-MRAM is then discussed, beginning with the device design concepts and challenges. The corresponding bit-cell design solutions are also presented, followed by the STT-MRAM cache architectures suitable for on-chip applications.",
"title": ""
},
{
"docid": "eb4c84e4586a7046a9c39c81eb10bc0c",
"text": "Indoor navigation technology is needed to support seamless mobility for the visually impaired. This paper describes the construction and evaluation of an inertial dead reckoning navigation system that provides real-time auditory guidance along mapped routes. Inertial dead reckoning is a navigation technique coupling step counting together with heading estimation to compute changes in position at each step. The research described here outlines the development and evaluation of a novel navigation system that utilizes information from the mapped route to limit the problematic error accumulation inherent in traditional dead reckoning approaches. The prototype system consists of a wireless inertial sensor unit, placed at the users' hip, which streams readings to a smartphone processing a navigation algorithm. Pilot human trials were conducted assessing system efficacy by studying route-following performance with blind and sighted subjects using the navigation system with real-time guidance, versus offline verbal directions.",
"title": ""
},
{
"docid": "ec58915a7fd321bcebc748a369153509",
"text": "For wireless charging of electric vehicle (EV) batteries, high-frequency magnetic fields are generated from magnetically coupled coils. The large air-gap between two coils may cause high leakage of magnetic fields and it may also lower the power transfer efficiency (PTE). For the first time, in this paper, we propose a new set of coil design formulas for high-efficiency and low harmonic currents and a new design procedure for low leakage of magnetic fields for high-power wireless power transfer (WPT) system. Based on the proposed design procedure, a pair of magnetically coupled coils with magnetic field shielding for a 1-kW-class golf-cart WPT system is optimized via finite-element simulation and the proposed design formulas. We built a 1-kW-class wireless EV charging system for practical measurements of the PTE, the magnetic field strength around the golf cart, and voltage/current spectrums. The fabricated system has achieved a PTE of 96% at the operating frequency of 20.15 kHz with a 156-mm air gap between the coils. At the same time, the highest magnetic field strength measured around the golf cart is 19.8 mG, which is far below the relevant electromagnetic field safety guidelines (ICNIRP 1998/2010). In addition, the third harmonic component of the measured magnetic field is 39 dB lower than the fundamental component. These practical measurement results prove the effectiveness of the proposed coil design formulas and procedure of a WPT system for high-efficiency and low magnetic field leakage.",
"title": ""
},
{
"docid": "4d3baff85c302b35038f35297a8cdf90",
"text": "Most speech recognition applications in use today rely heavily on confidence measure for making optimal decisions. In this paper, we aim to answer the question: what can be done to improve the quality of confidence measure if we cannot modify the speech recognition engine? The answer provided in this paper is a post-processing step called confidence calibration, which can be viewed as a special adaptation technique applied to confidence measure. Three confidence calibration methods have been developed in this work: the maximum entropy model with distribution constraints, the artificial neural network, and the deep belief network. We compare these approaches and demonstrate the importance of key features exploited: the generic confidence-score, the application-dependent word distribution, and the rule coverage ratio. We demonstrate the effectiveness of confidence calibration on a variety of tasks with significant normalized cross entropy increase and equal error rate reduction.",
"title": ""
},
{
"docid": "ae73f7c35c34050b87d8bf2bee81b620",
"text": "D esigning a complex Web site so that it readily yields its information is a difficult task. The designer must anticipate the users' needs and structure the site accordingly. Yet users may have vastly differing views of the site's information, their needs may change over time, and their usage patterns may violate the designer's initial expectations. As a result, Web sites are all too often fossils cast in HTML, while user navigation is idiosyncratic and evolving. Understanding user needs requires understanding how users view the data available and how they actually use the site. For a complex site this can be difficult since user tests are expensive and time-consuming, and the site's server logs contain massive amounts of data. We propose a Web management assistant: a system that can process massive amounts of data about site usage Examining the potential use of automated adaptation to improve Web sites for visitors.",
"title": ""
},
{
"docid": "0d2f933b139f50ff9195118d9d1466aa",
"text": "Ambient Intelligence (AmI) and Smart Environments (SmE) are based on three foundations: ubiquitous computing, ubiquitous communication and intelligent adaptive interfaces [41]. This type of systems consists of a series of interconnected computing and sensing devices which surround the user pervasively in his environment and are invisible to him, providing a service that is dynamically adapted to the interaction context, so that users can naturally interact with the system and thus perceive it as intelligent. To ensure such a natural and intelligent interaction, it is necessary to provide an effective, easy, safe and transparent interaction between the user and the system. With this objective, as an attempt to enhance and ease human-to-computer interaction, in the last years there has been an increasing interest in simulating human-tohuman communication, employing the so-called multimodal dialogue systems [46]. These systems go beyond both the desktop metaphor and the traditional speech-only interfaces by incorporating several communication modalities, such as speech, gaze, gestures or facial expressions. Multimodal dialogue systems offer several advantages. Firstly, they can make use of automatic recognition techniques to sense the environment allowing the user to employ different input modalities, some of these technologies are automatic speech recognition [62], natural language processing [12], face location and tracking [77], gaze tracking [58], lipreading recognition [13], gesture recognition [39], and handwriting recognition [78].",
"title": ""
},
{
"docid": "848f8efe11785c00e8e8af737d173d44",
"text": "Detecting frauds in credit card transactions is perhaps one of the best testbeds for computational intelligence algorithms. In fact, this problem involves a number of relevant challenges, namely: concept drift (customers’ habits evolve and fraudsters change their strategies over time), class imbalance (genuine transactions far outnumber frauds), and verification latency (only a small set of transactions are timely checked by investigators). However, the vast majority of learning algorithms that have been proposed for fraud detection rely on assumptions that hardly hold in a real-world fraud-detection system (FDS). This lack of realism concerns two main aspects: 1) the way and timing with which supervised information is provided and 2) the measures used to assess fraud-detection performance. This paper has three major contributions. First, we propose, with the help of our industrial partner, a formalization of the fraud-detection problem that realistically describes the operating conditions of FDSs that everyday analyze massive streams of credit card transactions. We also illustrate the most appropriate performance measures to be used for fraud-detection purposes. Second, we design and assess a novel learning strategy that effectively addresses class imbalance, concept drift, and verification latency. Third, in our experiments, we demonstrate the impact of class unbalance and concept drift in a real-world data stream containing more than 75 million transactions, authorized over a time window of three years.",
"title": ""
},
{
"docid": "cc752e1e36e689a0a78be8d5bd74a61a",
"text": "Classification is paramount for an optimal processing of tweets, albeit performance of classifiers is hindered by the need of large sets of training data to encompass the diversity of contents one can find on Twitter. In this paper, we introduce an inexpensive way of labeling large sets of tweets, which can be easily regenerated or updated when needed. We use human-edited web page directories to infer categories from URLs contained in tweets. By experimenting with a large set of more than 5 million tweets categorized accordingly, we show that our proposed model for tweet classification can achieve 82% in accuracy, performing only 12.2% worse than for web page classification.",
"title": ""
},
{
"docid": "b2cf33b05e93d1c15a32a54e8bc60bed",
"text": "Prevention of fraud and abuse has become a major concern of many organizations. The industry recognizes the problem and is just now starting to act. Although prevention is the best way to reduce frauds, fraudsters are adaptive and will usually find ways to circumvent such measures. Detecting fraud is essential once prevention mechanism has failed. Several data mining algorithms have been developed that allow one to extract relevant knowledge from a large amount of data like fraudulent financial statements to detect. In this paper we present an efficient approach for fraud detection. In our approach we first maintain a log file for data which contain the content separated by space, position and also the frequency. Then we encrypt the data by substitution method and send to the receiver end. We also send the log file to the receiver end before proceed to the encryption which is also in the form of secret message. So the receiver can match the data according to the content, position and frequency, if there is any mismatch occurs, we can detect the fraud and does not accept the file.",
"title": ""
},
{
"docid": "0a4c81c9bb27c1231f6329587362eef7",
"text": "Traditional approaches to knowledge management are essentially limited to document management. However, much knowledge in organizations or communities resides in an informal social network and may be accessed only by asking the right people. This paper describes MARS, a multiagent referral system for knowledge management. MARS assigns a software agent to each user. The agents facilitate their users' interactions and help manage their personal social networks. Moreover, the agents cooperate with one another by giving and taking referrals to help their users find the right parties to contact for a specific knowledge need.",
"title": ""
},
{
"docid": "59a1088003576f2e75cdbedc24ae8bdf",
"text": "Two literatures or sets of articles are complementary if, considered together, they can reveal useful information of scientik interest not apparent in either of the two sets alone. Of particular interest are complementary literatures that are also mutually isolated and noninteractive (they do not cite each other and are not co-cited). In that case, the intriguing possibility akrae that thm &tfnrmnt;nn n&wd hv mwnhXno them 4. nnvnl Lyww u-c “‘1 YLL”I&.L.sU”4L 6uy’“s. u, b..S..“Y.Ayj .a.-** Y ..u. -... During the past decade, we have identified seven examples of complementary noninteractive structures in the biomedical literature. Each structure led to a novel, plausible, and testable hypothesis that, in several cases, was subsequently corroborated by medical researchers through clinical or laboratory investigation. We have also developed, tested, and described a systematic, computer-sided approach to iinding and identifying complementary noninteractive literatures. Specialization, Fragmentation, and a Connection Explosion By some obscure spontaneous process scientists have responded to the growth of science by organizing their work into soecialties, thus permitting each individual to -r-~ focus on a small part of the total literature. Specialties that grow too large tend to divide into subspecialties that have their own literatures which, by a process of repeated splitting, maintain more or less fixed and manageable size. As the total literature grows, the number of specialties, but not in general the size of each, increases (Kochen, 1963; Swanson, 199Oc). But the unintended consequence of specialization is fragmentation. By dividing up the pie, the potential relationships among its pieces tend to be neglected. Although scientific literature cannot, in the long run, grow disproportionately to the growth of the communities and resources that produce it, combinations of implicitlyrelated segments of literature can grow much faster than the literature itself and can readily exceed the capacity of the community to identify and assimilate such relatedness (Swanson, 1993). The signilicance of the “information explosion” thus may lie not in an explosion of quantity per se, but in an incalculably greater combinatorial explosion of unnoticed and unintended logical connections. The Significance of Complementary Noninteractive Literatures If two literatures each of substantial size are linked by arguments that they respectively put forward -that is, are “logically” related, or complementary -one would expect to gain usefui information by combining them. For example, suppose that one (biomedical) literature establishes that some environmental factor A influences certain internal physiological conditions and a second literature establishes that these same physiological changes influence the course of disease C. Presumably, then, anyone who reads both literatures could conclude that factor A might influence disease C. Under such --->!L---f -----l-----ry-?r-. ----.---,a ?1-_----_I rl-conamons or comptementdnty one woum dtso expect me two literatures to refer to each other. If, however, the two literatures were developed independently of one another, the logical l inkage illustrated may be both unintended and unnoticed. To detect such mutual isolation, we examine the citation pattern. If two literatures are “noninteractive” that ir if thmv hnvm n~.rer fnr odAnm\\ kppn &ml = ulyc 1U) a. “W, na6L.V ..Y.“. ,“a vva&“..n] “W.. 
UluIu together, and if neither cites the other, then it is possible that scientists have not previously considered both iiteratures together, and so it is possible that no one is aware of the implicit A-C connection. The two conditions, complementarily and noninteraction, describe a model structure that shows how useful information can remain undiscovered even though its components consist of public knowledge (Swanson, 1987,199l). Public Knowledge / Private Knowledge There is, of course, no way to know in any particular case whether the possibility of an AC relationship in the above model has or has not occurred to someone, or whether or not anyone has actually considered the two literatures on A and C together, a private matter that necessarily remains conjectural. However, our argument is based only on determining whether there is any printed evidence to the contrary. We are concerned with public rather than Data Mining: Integration Q Application 295 From: KDD-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved. private knowledge -with the state of the record produced rather than the state of mind of the producers (Swanson, 1990d). The point of bringing together the AB and BC literatures, in any event, is not to \"prove\" an AC linkage, (by considering only transitive relationships) but rather call attention to an apparently unnoticed association that may be worth investigating. In principle any chain of scientific, including analogic, reasoning in which different links appear in noninteractive literatures may lead to the discovery of new interesting connections. \"What people know\" is a common u derstanding of what is meant by \"knowledge\". If taken in this subjective sense, the idea of \"knowledge discovery\" could mean merely that someone discovered something they hadn’t known before. Our focus in the present paper is on a second sense of the word \"knowledge\", a meaning associated with the products of human i tellectual activity, as encoded in the public record, rather than with the contents of the human mind. This abstract world of human-created \"objective\" knowledge is open to exploration and discovery, for it can contain territory that is subjectively unknown to anyone (Popper, 1972). Our work is directed toward the discovery of scientificallyuseful information implicit in the public record, but not previously made xplicit. The problem we address concerns structures within the scientific literature, not within the mind. The Process of Finding Complementary Noninteractive Literatures During the past ten years, we have pursued three goals: i) to show in principle how new knowledge might be gained by synthesizing logicallyrelated noninteractive literatures; ii) to demonstrate that such structures do exist, at least within the biomedical literature; and iii) to develop a systematic process for finding them. In pursuit of goal iii, we have created interactive software and database arch strategies that can facilitate the discovery of complementary st uctures in the published literature of science. The universe or searchspace under consideration is limited only by the coverage of the major scientific databases, though we have focused primarily on the biomedical field and the MEDLINE database (8 million records). In 1991, a systematic approach to finding complementary structures was outlined and became a point of departure for software development (Swanson, 1991). 
The system that has now taken shape is based on a 3-way interaction between computer software, bibliographic databases, and a human operator. Tae interaction generates information structtues that are used heuristically to guide the search for promising complementary literatures. The user of the system begins by choosing a question 296 Technology Spotlight or problem area of scientific interest that can be associated with a literature, C. Elsewhere we describe and evaluate experimental computer software, which we call ARROWSMITH (Swanson & Smalheiser, 1997), that performs two separate functions that can be used independently. The first function produces a list of candidates for a second literature, A, complementary o C, from which the user can select one candidate (at a time) an input, along with C, to the second function. This first function can be considered as a computer-assisted process of problem-discovery, an issue identified in the AI literature (Langley, et al., 1987; p304-307). Alternatively, the user may wish to identify a second literature, A, as a conjecture or hypothesis generated independently of the computer-produced list of candidates. Our approach has been based on the use of article titles as a guide to identifying complementary literatures. As indicated above, our point of departure for the second function is a tentative scientific hypothesis associated with two literalxtres, A and C. A title-word search of MEDLINE is used to create two local computer title-files associated with A and C, respectively. These files are used as input to the ARROWSMITH software, which then produces a list of all words common to the two sets of titles, except for words excluded by an extensive stoplist (presently about 5000 words). The resulting list of words provides the basis for identifying title-word pathways that might provide clues to the presence of complementary arguments within the literatures corresponding to A and C. The output of this procedure is a structured titledisplay (plus journal citation), that serves as a heuristic aid to identifying word-linked titles and serves also as an organized guide to the literature.",
"title": ""
},
{
"docid": "4e7ce0c3696838f77bffd4ddeb1574a9",
"text": "Kidney segmentation in 3D CT images allows extracting useful information for nephrologists. For practical use in clinical routine, such an algorithm should be fast, automatic and robust to contrast-agent enhancement and fields of view. By combining and refining state-of-the-art techniques (random forests and template deformation), we demonstrate the possibility of building an algorithm that meets these requirements. Kidneys are localized with random forests following a coarse-to-fine strategy. Their initial positions detected with global contextual information are refined with a cascade of local regression forests. A classification forest is then used to obtain a probabilistic segmentation of both kidneys. The final segmentation is performed with an implicit template deformation algorithm driven by these kidney probability maps. Our method has been validated on a highly heterogeneous database of 233 CT scans from 89 patients. 80% of the kidneys were accurately detected and segmented (Dice coefficient > 0.90) in a few seconds per volume.",
"title": ""
},
{
"docid": "4b8639f6b4ed52a42e8af55af97d797e",
"text": "The next generation of AESA antennas will be challenged with the need for enabling a combination of different operating modes within the same antenna front end, including radar, communication (data links), and jamming (electronic warfare, EW). This leads to enhanced demands especially with regard to the usable RF bandwidth. One main step to overcome this is the use of disruptive semiconductor materials for RF MMICs. As the RF section of today's T/R modules for AESA applications is typically based on GaAs technology, GaN and SiGe BiCMOS will challenge or even replace it, here. This paper will describe the design of a GaN-based T/R module using all-European MMICs and covering the X-band from 8 to 12 GHz and its realisation at Airbus Defence and Space in Ulm. It will show some measurement results and give an impression on achieved performance. Further this paper shall describe potential next steps and give an outlook towards future developments.",
"title": ""
},
{
"docid": "b88ceafe9998671820291773be77cabc",
"text": "The aim of this study was to propose a set of network methods to measure the specific properties of a team. These metrics were organised at macro-analysis levels. The interactions between teammates were collected and then processed following the analysis levels herein announced. Overall, 577 offensive plays were analysed from five matches. The network density showed an ambiguous relationship among the team, mainly during the 2nd half. The mean values of density for all matches were 0.48 in the 1st half, 0.32 in the 2nd half and 0.34 for the whole match. The heterogeneity coefficient for the overall matches rounded to 0.47 and it was also observed that this increased in all matches in the 2nd half. The centralisation values showed that there was no 'star topology'. The results suggest that each node (i.e., each player) had nearly the same connectivity, mainly in the 1st half. Nevertheless, the values increased in the 2nd half, showing a decreasing participation of all players at the same level. Briefly, these metrics showed that it is possible to identify how players connect with each other and the kind and strength of the connections between them. In summary, it may be concluded that network metrics can be a powerful tool to help coaches understand team's specific properties and support decision-making to improve the sports training process based on match analysis.",
"title": ""
},
{
"docid": "8a560246be1a816b232415fa237499f9",
"text": "Analytical SQL queries are a valuable source of information. Query log analysis can provide insight into the usage of datasets and uncover knowledge that cannot be inferred from source schemas or content alone. To unlock this potential, flexible mechanisms for meta-querying are required. Syntactic and semantic aspects of queries must be considered along with contextual information.\n We present an extensible framework for analyzing SQL query logs. Query logs are mapped to a multi-relational graph model and queried using domain-specific traversal expressions. To enable concise and expressive meta-querying, semantic analyses are conducted on normalized relational algebra trees with accompanying schema lineage graphs. Syntactic analyses can be conducted on corresponding query texts and abstract syntax trees. Additional metadata allows to inspect the temporal and social context of each query.\n In this demonstration, we show how query log analysis with our framework can support data source discovery and facilitate collaborative data science. The audience can explore an exemplary query log to locate queries relevant to a data analysis scenario, conduct graph analyses on the log and assemble a customized logmonitoring dashboard.",
"title": ""
},
{
"docid": "785377b7e375fd4bce96cbc92f1be63a",
"text": "In this paper, we propose a novel inverse reinforcement learning algorithm with leveraged Gaussian processes that can learn from both positive and negative demonstrations. While most existing inverse reinforcement learning (IRL) methods suffer from the lack of information near low reward regions, the proposed method alleviates this issue by incorporating (negative) demonstrations of what not to do. To mathematically formulate negative demonstrations, we introduce a novel generative model which can generate both positive and negative demonstrations using a parameter, called proficiency. Moreover, since we represent a reward function using a leveraged Gaussian process which can model a nonlinear function, the proposed method can effectively estimate the structure of a nonlinear reward function.",
"title": ""
},
{
"docid": "fe77670a01f93c3192c6760e46bbab46",
"text": "Group recommendation has attracted significant research efforts for its importance in benefiting a group of users. This paper investigates the Group Recommendation problem from a novel aspect, which tries to maximize the satisfaction of each group member while minimizing the unfairness between them. In this work, we present several semantics of the individual utility and propose two concepts of social welfare and fairness for modeling the overall utilities and the balance between group members. We formulate the problem as a multiple objective optimization problem and show that it is NP-Hard in different semantics. Given the multiple-objective nature of fairness-aware group recommendation problem, we provide an optimization framework for fairness-aware group recommendation from the perspective of Pareto Efficiency. We conduct extensive experiments on real-world datasets and evaluate our algorithm in terms of standard accuracy metrics. The results indicate that our algorithm achieves superior performances and considering fairness in group recommendation can enhance the recommendation accuracy.",
"title": ""
},
{
"docid": "eeafde1980fc144f1dcef6f84068bbd4",
"text": "The Mobile Computing in a Fieldwork Environment (MCFE) project aims to develop context-aware tools for hand-held computers that will support the authoring, presentation and management of field notes. The project deliverables will be designed to support student fieldwork exercises and our initial targets are fieldwork training in archaeology and the environmental sciences. Despite this specialised orientation, we anticipate that many features of these tools will prove to be equally well suited to use in research data collection in these and other disciplines.",
"title": ""
}
] |
scidocsrr
|
4aeee798c349f439ae4064217658d796
|
The neural bases of social pain: evidence for shared representations with physical pain.
|
[
{
"docid": "2aa8fc5844bae48f2eaef834909591c0",
"text": "It is well established that a lack of social support constitutes a major risk factor for morbidity and mortality, comparable to risk factors such as smoking, obesity, and high blood pressure. Although it has been hypothesized that social support may benefit health by reducing physiological reactivity to stressors, the mechanisms underlying this process remain unclear. Moreover, to date, no studies have investigated the neurocognitive mechanisms that translate experiences of social support into the health outcomes that follow. To investigate these processes, thirty participants completed three tasks in which daily social support, neurocognitive reactivity to a social stressor, and neuroendocrine responses to a social stressor were assessed. Individuals who interacted regularly with supportive individuals across a 10-day period showed diminished cortisol reactivity to a social stressor. Moreover, greater social support and diminished cortisol responses were associated with diminished activity in the dorsal anterior cingulate cortex (dACC) and Brodmann's area (BA) 8, regions previously associated with the distress of social separation. Lastly, individual differences in dACC and BA 8 reactivity mediated the relationship between high daily social support and low cortisol reactivity, such that supported individuals showed reduced neurocognitive reactivity to social stressors, which in turn was associated with reduced neuroendocrine stress responses. This study is the first to investigate the neural underpinnings of the social support-health relationship and provides evidence that social support may ultimately benefit health by diminishing neural and physiological reactivity to social stressors.",
"title": ""
}
] |
[
{
"docid": "6a3033fdd33ff1e63e9f8b2af67cf090",
"text": "This paper analyzes the application of admittance control to quadrocopters, focusing on physical human-vehicle interaction. Admittance control allows users to define the apparent inertia, damping, and stiffness of a robot, providing an intuitive way to physically interact with it. In this work, external forces acting on the quadrocopter are estimated from position and attitude information and then input to the admittance controller, which modifies the vehicle reference trajectory accordingly. The reference trajectory is tracked by an underlying position and attitude controller. The characteristics of the overall control scheme are investigated for the near-hover case. Experimental results complement the paper, demonstrating the suitability of the method for physical human-quadrocopter interaction.",
"title": ""
},
{
"docid": "b7aca26bc09bbc9376fefd1befec2b28",
"text": "Wearable sensor systems have been used in the ubiquitous computing community and elsewhere for applications such as activity and gesture recognition, health and wellness monitoring, and elder care. Although the power consumption of accelerometers has already been highly optimized, this work introduces a novel sensing approach which lowers the power requirement for motion sensing by orders of magnitude. We present an ultra-low-power method for passively sensing body motion using static electric fields by measuring the voltage at any single location on the body. We present the feasibility of using this sensing approach to infer the amount and type of body motion anywhere on the body and demonstrate an ultra-low-power motion detector used to wake up more power-hungry sensors. The sensing hardware consumes only 3.3 μW, and wake-up detection is done using an additional 3.3 μW (6.6 μW total).",
"title": ""
},
{
"docid": "590a44ab149b88e536e67622515fdd08",
"text": "Chitosan is considered to be one of the most promising and applicable materials in adsorption applications. The existence of amino and hydroxyl groups in its molecules contributes to many possible adsorption interactions between chitosan and pollutants (dyes, metals, ions, phenols, pharmaceuticals/drugs, pesticides, herbicides, etc.). These functional groups can help in establishing positions for modification. Based on the learning from previously published works in literature, researchers have achieved a modification of chitosan with a number of different functional groups. This work summarizes the published works of the last three years (2012-2014) regarding the modification reactions of chitosans (grafting, cross-linking, etc.) and their application to adsorption of different environmental pollutants (in liquid-phase).",
"title": ""
},
{
"docid": "e4d27bd284fced38ed0bb527a5d378ac",
"text": "We investigate the use of deep bidirectional LSTMs for joint extraction of opinion entities and the IS-FROM and ISABOUT relations that connect them — the first such attempt using a deep learning approach. Perhaps surprisingly, we find that standard LSTMs are not competitive with a state-of-the-art CRF+ILP joint inference approach (Yang and Cardie, 2013) to opinion entities extraction, performing below even the standalone sequencetagging CRF. Incorporating sentence-level and a novel relation-level optimization, however, allows the LSTM to identify opinion relations and to perform within 1– 3% of the state-of-the-art joint model for opinion entities and the IS-FROM relation; and to perform as well as the state-of-theart for the IS-ABOUT relation — all without access to opinion lexicons, parsers and other preprocessing components required for the feature-rich CRF+ILP approach.",
"title": ""
},
{
"docid": "85edec9729653774ba40b311b2906684",
"text": "The exponential growth of online social networks has inspired us to tackle the problem of individual user attributes inference from the Big Data perspective. It is well known that various social media networks exhibit different aspects of user interactions, and thus represent users from diverse points of view. In this preliminary study, we make the first step towards solving the significant problem of personality profiling from multiple social networks. Specifically, we tackle the task of relationship prediction, which is closely related to our desired problem. Experimental results show that the incorporation of multi-source data helps to achieve better prediction performance as compared to single-source baselines. User profiling plays an increasingly important role in many application domains (Farseev, Samborskii, and Chua 2016). One of the critical components of user profiling is personality profiling (Pennebaker, Mehl, and Niederhoffer 2003), which seeks to identify one’s mental and emotional characteristics. Knowing these personal attributes can help to understand reasons behind one’s behaviour (Pennebaker, Mehl, and Niederhoffer 2003), select suitable individuals for particular tasks (Song et al. 2015), and motivate people to undertake new challenges in their life. Up to now, there have been several research attempts towards personality profiling. For example, some research groups have investigated this problem from the social science point of view (Pennebaker, Mehl, and Niederhoffer 2003). However, most of these works are descriptive in nature and rely on manual data collection procedures, which explains the absence of large-scale research in the field. With the recent growth of the Web, personality profiling can be approached by taking advantage of the abundance of data from online social networks. For example, such data has been utilized by several studies and evaluations devoted to automatic personality profiling, such as TwiSty (Verhoeven, Daelemans, and Plank 2016) or PAN (Rangel et al. 2015). Even though these studies made a significant progress towards automatic personality profiling, most of them were carried out on data from a single source (i.e. Twitter) or of a single modality (i.e. Text). Such personality profiling may lead to a sub-optimal performance (Farseev and Chua 2017). Taking into account that Copyright c © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. most social networks users use more than one social network in their daily life (Farseev et al. 2015a), it is reasonable to utilize multiple data sources and modalities to solve personality profiling task. There are several personality categorization schemes adopted by the research community. One of the most widely embraced typologies is called Myers-Briggs Type Indicator (MBTI), that was proposed by Mayer and Briggs in 1985 and based on Carl Jung’s theory. The typology is designed to exhibit psychological preferences on how people perceive the world around them and distinguishes 16 personality types. Meanwhile, it was also discovered that social media services exceedingly affect and reflect the way their users communicate with the world and among themselves (Kaplan and Haenlein 2010). Based on these observations, it follows that MBTI categorization schema naturally fits social media research. Further, according to the previous studies (Farseev et al. 
2015b; Farseev and Chua 2017) and our findings, social media users reveal their personal attributes differently in different social media platforms. For example, they may post photos in photo-sharing services, such as Instagram, or perform check-ins in location-based social networks, such as Foursquare. All this data describes users from the 360◦ view and, thus, plays an essential role in social media-based personality profiling. However, personality profiling from multiple social networks is associated with the following challenges: • Cross-source user identification. Often, it is not possible to identify multiple social networks accounts that belong to the same person, while some users use a limited number of social networks. • Ground-truth collection. Not all online resources with MBTI information about their users are approved by psychologists, while only a limited number of social networks posts is equipped with the references to trusted MBTI profiling resources. • Temporal changes of users’ personality. Users’ personality trends vary over time under the influence of different life aspects and external factors, which requires additional consideration during the data modeling process. • Data source fusion. Effective fusion of multi-view data from different sources in one model is a challenging problem (Song et al. 2015). Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)",
"title": ""
},
{
"docid": "155e53e97c23498a557f848ef52da2a7",
"text": "We propose a simultaneous extraction method for 12 organs from non-contrast three-dimensional abdominal CT images. The proposed method uses an abdominal cavity standardization process and atlas guided segmentation incorporating parameter estimation with the EM algorithm to deal with the large fluctuations in the feature distribution parameters between subjects. Segmentation is then performed using multiple level sets, which minimize the energy function that considers the hierarchy and exclusiveness between organs as well as uniformity of grey values in organs. To assess the performance of the proposed method, ten non-contrast 3D CT volumes were used. The accuracy of the feature distribution parameter estimation was slightly improved using the proposed EM method, resulting in better performance of the segmentation process. Nine organs out of twelve were statistically improved compared with the results without the proposed parameter estimation process. The proposed multiple level sets also boosted the performance of the segmentation by 7.2 points on average compared with the atlas guided segmentation. Nine out of twelve organs were confirmed to be statistically improved compared with the atlas guided method. The proposed method was statistically proved to have better performance in the segmentation of 3D CT volumes.",
"title": ""
},
{
"docid": "7dcc7cdff8a9196c716add8a1faf0203",
"text": "Power modulators for compact, repetitive systems are continually faced with new requirements as the corresponding system objectives increase. Changes in pulse rate frequency or number of pulses significantly impact the design of the power conditioning system. In order to meet future power supply requirements, we have developed several high voltage (HV) capacitor charging power supplies (CCPS). This effort focuses on a volume of 6\" x 6\" x 14\" and a weight of 25 lbs. The primary focus was to increase the effective capacitor charge rate, or power output, for the given size and weight. Although increased power output was the principal objective, efficiency and repeatability were also considered. A number of DC-DC converter topologies were compared to determine the optimal design. In order to push the limits of output power, numerous resonant converter parameters were examined. Comparisons of numerous topologies, HV transformers and rectifiers, and switching frequency ranges are presented. The impacts of the control system and integration requirements are also considered.",
"title": ""
},
{
"docid": "19806d18233e149b091790d220e5181b",
"text": "In this work we propose Pixel Content Encoders (PCE), a lightweight image inpainting model, capable of generating novel content for large missing regions in images. Unlike previously presented convolutional neural network based models, our PCE model has an order of magnitude fewer trainable parameters. Moreover, by incorporating dilated convolutions we are able to preserve fine grained spatial information, achieving state-of-the-art performance on benchmark datasets of natural images and paintings. Besides image inpainting, we show that without changing the architecture, PCE can be used for image extrapolation, generating novel content beyond existing image boundaries.",
"title": ""
},
{
"docid": "8622a61c6cc571688fb2b6e232ba0920",
"text": "The increasing use of electronic forms of communication presents new opportunities in the study of mental health, including the ability to investigate the manifestations of psychiatric diseases unobtrusively and in the setting of patients' daily lives. A pilot study to explore the possible connections between bipolar affective disorder and mobile phone usage was conducted. In this study, participants were provided a mobile phone to use as their primary phone. This phone was loaded with a custom keyboard that collected metadata consisting of keypress entry time and accelerometer movement. Individual character data with the exceptions of the backspace key and space bar were not collected due to privacy concerns. We propose an end-to-end deep architecture based on late fusion, named DeepMood, to model the multi-view metadata for the prediction of mood scores. Experimental results show that 90.31% prediction accuracy on the depression score can be achieved based on session-level mobile phone typing dynamics which is typically less than one minute. It demonstrates the feasibility of using mobile phone metadata to infer mood disturbance and severity.",
"title": ""
},
{
"docid": "4d11fb2e8043e4f7cce009e0af65af86",
"text": "Various hand-crafted features and metric learning methods prevail in the field of person re-identification. Compared to these methods, this paper proposes a more general way that can learn a similarity metric from image pixels directly. By using a “siamese” deep neural network, the proposed method can jointly learn the color feature, texture feature and metric in a unified framework. The network has a symmetry structure with two sub-networks which are connected by Cosine function. To deal with the big variations of person images, binomial deviance is used to evaluate the cost between similarities and labels, which is proved to be robust to outliers. Compared to existing researches, a more practical setting is studied in the experiments that is training and test on different datasets (cross dataset person re-identification). Both in “intra dataset” and “cross dataset” settings, the superiorities of the proposed method are illustrated on VIPeR and PRID.",
"title": ""
},
{
"docid": "bfc85b95287e4abc2308849294384d1e",
"text": "& 10 0 YE A RS A G O 50 YEARS AGO A Congress was held in Singapore during December 2–9 to celebrate “the Centenary of the formulation of the theory of Evolution by Charles Darwin and Alfred Russel Wallace and the Bicentenary of the publication of the tenth edition of the ‘Systema Naturae’ by Linnaeus”. It was particularly fitting that this Congress should have been held in Singapore for ... it directed special attention to the work of Wallace, who was one of the greatest biologists ever to have worked in south-east Asia ... Prof. Haldane then delivered his presidential address ... The president emphasised the stimuli gained by Linnaeus, Darwin and Wallace through working in peripheral areas where lack of knowledge was a challenge. He suggested that the next major biological advance may well come for similar reasons from peripheral places such as Singapore, or Calcutta, where this challenge still remains and where the lack of complex scientific apparatus drives biologists into different and long-neglected fields of research. From Nature 14 March 1959.",
"title": ""
},
{
"docid": "e2b153aba78b2831a7f1ecc1b26e0fc9",
"text": "Recent gene expression profiling of breast cancer has identified specific subtypes with clinical, biologic, and therapeutic implications. The basal-like group of tumors is characterized by an expression signature similar to that of the basal/myoepithelial cells of the breast and is reported to have transcriptomic characteristics similar to those of tumors arising in BRCA1 germline mutation carriers. They are associated with aggressive behavior and poor prognosis, and typically do not express hormone receptors or HER-2 (\"triple-negative\" phenotype). Therefore, patients with basal-like cancers are unlikely to benefit from currently available targeted systemic therapy. Although basal-like tumors are characterized by distinctive morphologic, genetic, immunophenotypic, and clinical features, neither an accepted consensus on routine clinical identification and definition of this aggressive subtype of breast cancer nor a way of systematically classifying this complex group of tumors has been described. Different definitions are, therefore, likely to produce variable and contradictory results that may hamper consistent identification and development of treatment strategies for these tumors. In this review, we discuss definition, heterogeneity, morphologic spectrum, relation to BRCA1, and clinical significance of this important class of breast cancer.",
"title": ""
},
{
"docid": "39d3f1a5d40325bdc4bca9ee50241c9e",
"text": "This paper reviews the recent progress of quantum-dot semiconductor optical amplifiers developed as ultrawideband polarization-insensitive high-power amplifiers, high-speed signal regenerators, and wideband wavelength converters. A semiconductor optical amplifier having a gain of > 25 dB, noise figure of < 5 dB, and 3-dB saturation output power of > 20 dBm, over the record widest bandwidth of 90 nm among all kinds of optical amplifiers, and also having a penalty-free output power of 23 dBm, the record highest among all the semiconductor optical amplifiers, was realized by using quantum dots. By utilizing isotropically shaped quantum dots, the TM gain, which is absent in the standard Stranski-Krastanow QDs, has been drastically enhanced, and nearly polarization-insensitive SOAs have been realized for the first time. With an ultrafast gain response unique to quantum dots, an optical regenerator having receiver-sensitivity improving capability of 4 dB at a BER of 10-9 and operating speed of > 40 Gb/s has been successfully realized with an SOA chip. This performance achieved together with simplicity of structure suggests a potential for low-cost realization of regenerative transmission systems.",
"title": ""
},
{
"docid": "54def135e495c572d3a9de61492681a3",
"text": "Event logs or log files form an essential part of any network management and administration setup. While log files are invaluable to a network administrator, the vast amount of data they sometimes contain can be overwhelming and can sometimes hinder rather than facilitate the tasks of a network administrator. For this reason several event clustering algorithms for log files have been proposed, one of which is the event clustering algorithm proposed by Risto Vaarandi, on which his simple log file clustering tool (SLCT) is based. The aim of this work is to develop a visualization tool that can be used to view log files based on the clusters produced by SLCT. The proposed visualization tool, which is called LogView, utilizes treemaps to visualize the hierarchical structure of the clusters produced by SLCT. Our results based on different application log files show that LogView can ease the summarization of vast amount of data contained in the log files. This in turn can help to speed up the analysis of event data in order to detect any security issues on a given application.",
"title": ""
},
{
"docid": "d38ef842cfee6e6281e73a22ede06b4e",
"text": "Trauma-focused cognitive behavioral treatments are known to be effective for posttraumatic stress disorder (PTSD) in adults. However, evidence for effective treatments for older persons with PTSD, particularly elderly war trauma survivors, is scarce. In an open trial, 30 survivors of World War II aged 65 to 85 years (mean, 71.73 years; SD, 4.8; n = 17 women) with PTSD symptoms were treated with a Web-based, therapist-assisted cognitive-behavioral/narrative therapy for 6 weeks. Intent-to-treat analyses revealed a significant decrease in PTSD severity scores (Cohen's d = 0.43) and significant improvements on secondary clinical outcomes of quality of life, self-efficacy, and posttraumatic growth from pretreatment to posttreatment. All improvements were maintained at a 3-month follow-up. The attrition rate was low (13.3%), with participants who completed the trial reporting high working alliance and treatment satisfaction. Results of this study suggest that integrative testimonial therapy is a well accepted and potentially effective treatment for older war trauma survivors experiencing PTSD symptoms.",
"title": ""
},
{
"docid": "c07f7baed3648b190eca0f4753027b57",
"text": "Objective: An autoencoder-based framework that simultaneously reconstruct and classify biomedical signals is proposed. Previous work has treated reconstruction and classification as separate problems. This is the first study that proposes a combined framework to address the issue in a holistic fashion. Methods: For telemonitoring purposes, reconstruction techniques of biomedical signals are largely based on compressed sensing (CS); these are “designed” techniques where the reconstruction formulation is based on some “assumption” regarding the signal. In this study, we propose a new paradigm for reconstruction—the reconstruction is “learned,” using an autoencoder; it does not require any assumption regarding the signal as long as there is sufficiently large training data. But since the final goal is to analyze/classify the signal, the system can also learn a linear classification map that is added inside the autoencoder. The ensuing optimization problem is solved using the Split Bregman technique. Results: Experiments were carried out on reconstructing and classifying electrocardiogram (ECG) (arrhythmia classification) and EEG (seizure classification) signals. Conclusion: Our proposed tool is capable of operating in a semi-supervised fashion. We show that our proposed method is better in reconstruction and more than an order magnitude faster than CS based methods; it is capable of real-time operation. Our method also yields better results than recently proposed classification methods. Significance: This is the first study offering an alternative to CS-based reconstruction. It also shows that the representation learning approach can yield better results than traditional methods that use hand-crafted features for signal analysis.",
"title": ""
},
{
"docid": "c9394a05e7f18eece53d082e346605bc",
"text": "Machine learning (ML) is one of the intelligent methodologies that have shown promising results in the domains of classification and prediction. One of the expanding areas necessitating good predictive accuracy is sport prediction, due to the large monetary amounts involved in betting. In addition, club managers and owners are striving for classification models so that they can understand and formulate strategies needed to win matches. These models are based on numerous factors involved in the games, such as the results of historical matches, player performance indicators, and opposition information. This paper provides a critical analysis of the literature in ML, focusing on the application of Artificial Neural Network (ANN) to sport results prediction. In doing so, we identify the learning methodologies utilised, data sources, appropriate means of model evaluation, and specific challenges of predicting sport results. This then leads us to propose a novel sport prediction framework through which ML can be used as a learning strategy. Our research will hopefully be informative and of use to those performing future research in this application area. 2017 The Authors. Production and hosting by Elsevier B.V. on behalf of King Saud University. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).",
"title": ""
},
{
"docid": "75fa00064a01ee22546622adc206f5a5",
"text": "Generative adversarial networks (GANs) have achieved significant success in generating realvalued data. However, the discrete nature of text hinders the application of GAN to textgeneration tasks. Instead of using the standard GAN objective, we propose to improve textgeneration GAN via a novel approach inspired by optimal transport. Specifically, we consider matching the latent feature distributions of real and synthetic sentences using a novel metric, termed the feature-mover’s distance (FMD). This formulation leads to a highly discriminative critic and easy-to-optimize objective, overcoming the mode-collapsing and brittle-training problems in existing methods.",
"title": ""
},
{
"docid": "8123ab525ce663e44b104db2cacd59a9",
"text": "Extractive summarization is the strategy of concatenating extracts taken from a corpus into a summary, while abstractive summarization involves paraphrasing the corpus using novel sentences. We define a novel measure of corpus controversiality of opinions contained in evaluative text, and report the results of a user study comparing extractive and NLG-based abstractive summarization at different levels of controversiality. While the abstractive summarizer performs better overall, the results suggest that the margin by which abstraction outperforms extraction is greater when controversiality is high, providing aion outperforms extraction is greater when controversiality is high, providing a context in which the need for generationbased methods is especially great.",
"title": ""
},
{
"docid": "f3da7375193c6a5480646323772bcff6",
"text": "About the Book is textbook explores the di erent aspects of data mining from the fundamentals to the complex data types and their applications, capturing the wide diversity of problem domains for data mining issues. It goes beyond the traditional focus on data mining problems to introduce advanced data types such as text, time series, discrete sequences, spatial data, graph data, and social networks. Until now, no single book has addressed all these topics in a comprehensive and integrated way. e chapters of this book fall into one of three categories:",
"title": ""
}
] |
scidocsrr
|
ef2497f84a317607161437625943f4ab
|
LiveEye : Driver Attention Monitoring System
|
[
{
"docid": "b1c62a59a8ce3dd57ab2c00f7657cfef",
"text": "We developed a new method for estimation of vigilance level by using both EEG and EMG signals recorded during transition from wakefulness to sleep. Previous studies used only EEG signals for estimating the vigilance levels. In this study, it was aimed to estimate vigilance level by using both EEG and EMG signals for increasing the accuracy of the estimation rate. In our work, EEG and EMG signals were obtained from 30 subjects. In data preparation stage, EEG signals were separated to its subbands using wavelet transform for efficient discrimination, and chin EMG was used to verify and eliminate the movement artifacts. The changes in EEG and EMG were diagnosed while transition from wakefulness to sleep by using developed artificial neural network (ANN). Training and testing data sets consist of the subbanded components of EEG and power density of EMG signals were applied to the ANN for training and testing the system which gives three situations for the vigilance level of the subject: awake, drowsy, and sleep. The accuracy of estimation was about 98–99% while the accuracy of the previous study, which uses only EEG, was 95–96%.",
"title": ""
},
{
"docid": "b3fd58901706f7cb3ed653572e634c78",
"text": "This paper presents visual analysis of eye state and head pose (HP) for continuous monitoring of alertness of a vehicle driver. Most existing approaches to visual detection of nonalert driving patterns rely either on eye closure or head nodding angles to determine the driver drowsiness or distraction level. The proposed scheme uses visual features such as eye index (EI), pupil activity (PA), and HP to extract critical information on nonalertness of a vehicle driver. EI determines if the eye is open, half closed, or closed from the ratio of pupil height and eye height. PA measures the rate of deviation of the pupil center from the eye center over a time period. HP finds the amount of the driver's head movements by counting the number of video segments that involve a large deviation of three Euler angles of HP, i.e., nodding, shaking, and tilting, from its normal driving position. HP provides useful information on the lack of attention, particularly when the driver's eyes are not visible due to occlusion caused by large head movements. A support vector machine (SVM) classifies a sequence of video segments into alert or nonalert driving events. Experimental results show that the proposed scheme offers high classification accuracy with acceptably low errors and false alarms for people of various ethnicity and gender in real road driving conditions.",
"title": ""
},
{
"docid": "46a55d7a3349f7228acb226ed7875dc9",
"text": "Previous research on driver drowsiness detection has focused primarily on lane deviation metrics and high levels of fatigue. The present research sought to develop a method for detecting driver drowsiness at more moderate levels of fatigue, well before accident risk is imminent. Eighty-seven different driver drowsiness detection metrics proposed in the literature were evaluated in two simulated shift work studies with high-fidelity simulator driving in a controlled laboratory environment. Twenty-nine participants were subjected to a night shift condition, which resulted in moderate levels of fatigue; 12 participants were in a day shift condition, which served as control. Ten simulated work days in the study design each included four 30-min driving sessions, during which participants drove a standardized scenario of rural highways. Ten straight and uneventful road segments in each driving session were designated to extract the 87 different driving metrics being evaluated. The dimensionality of the overall data set across all participants, all driving sessions and all road segments was reduced with principal component analysis, which revealed that there were two dominant dimensions: measures of steering wheel variability and measures of lateral lane position variability. The latter correlated most with an independent measure of fatigue, namely performance on a psychomotor vigilance test administered prior to each drive. We replicated our findings across eight curved road segments used for validation in each driving session. Furthermore, we showed that lateral lane position variability could be derived from measured changes in steering wheel angle through a transfer function, reflecting how steering wheel movements change vehicle heading in accordance with the forces acting on the vehicle and the road. This is important given that traditional video-based lane tracking technology is prone to data loss when lane markers are missing, when weather conditions are bad, or in darkness. Our research findings indicated that steering wheel variability provides a basis for developing a cost-effective and easy-to-install alternative technology for in-vehicle driver drowsiness detection at moderate levels of fatigue.",
"title": ""
},
{
"docid": "9096c5bfe44df6dc32641b8f5370d8d0",
"text": "This paper presents a nonintrusive prototype computer vision system for monitoring a driver's vigilance in real time. It is based on a hardware system for the real-time acquisition of a driver's images using an active IR illuminator and the software implementation for monitoring some visual behaviors that characterize a driver's level of vigilance. Six parameters are calculated: Percent eye closure (PERCLOS), eye closure duration, blink frequency, nodding frequency, face position, and fixed gaze. These parameters are combined using a fuzzy classifier to infer the level of inattentiveness of the driver. The use of multiple visual parameters and the fusion of these parameters yield a more robust and accurate inattention characterization than by using a single parameter. The system has been tested with different sequences recorded in night and day driving conditions in a motorway and with different users. Some experimental results and conclusions about the performance of the system are presented",
"title": ""
}
] |
[
{
"docid": "fc40a4af9411d0e9f494b13cbb916eac",
"text": "P (P2P) file sharing networks are an important medium for the distribution of information goods. However, there is little empirical research into the optimal design of these networks under real-world conditions. Early speculation about the behavior of P2P networks has focused on the role that positive network externalities play in improving performance as the network grows. However, negative network externalities also arise in P2P networks because of the consumption of scarce network resources or an increased propensity of users to free ride in larger networks, and the impact of these negative network externalities—while potentially important—has received far less attention. Our research addresses this gap in understanding by measuring the impact of both positive and negative network externalities on the optimal size of P2P networks. Our research uses a unique dataset collected from the six most popular OpenNap P2P networks between December 19, 2000, and April 22, 2001. We find that users contribute additional value to the network at a decreasing rate and impose costs on the network at an increasing rate, while the network increases in size. Our results also suggest that users are less likely to contribute resources to the network as the network size increases. Together, these results suggest that the optimal size of these centralized P2P networks is bounded—At some point the costs that a marginal user imposes on the network will exceed the value they provide to the network. This finding is in contrast to early predictions that larger P2P networks would always provide more value to users than smaller networks. Finally, these results also highlight the importance of considering user incentives—an important determinant of resource sharing in P2P networks—in network design.",
"title": ""
},
{
"docid": "4f1d8af5643fe2115a356938ce2f5953",
"text": "This paper is dedicated to the overview of the firefighting robots' control systems. The main goal of this paper is to show the variety of different firefighting robots and to analyze their advantages and imperfections.",
"title": ""
},
{
"docid": "894f5289293a72084647e07f8e7423f7",
"text": "Convolutional Neural Networks (CNNs) have been widely adopted for many imaging applications. For image aesthetics prediction, state-of-the-art algorithms train CNNs on a recently-published large-scale dataset, AVA. However, the distribution of the aesthetic scores on this dataset is extremely unbalanced, which limits the prediction capability of existing methods. We overcome such limitation by using weighted CNNs. We train a regression model that improves the prediction accuracy of the aesthetic scores over state-of-the-art algorithms. In addition, we propose a novel histogram prediction model that not only predicts the aesthetic score, but also estimates the difficulty of performing aesthetics assessment for an input image. We further show an image enhancement application where we obtain an aesthetically pleasing crop of an input image using our regression model.",
"title": ""
},
{
"docid": "952e529c5b2b5746ac726b1614139578",
"text": "We first observe a potential weakness of continuous vector representations of symbols in neural machine translation. That is, the continuous vector representation, or a word embedding vector, of a symbol encodes multiple dimensions of similarity, equivalent to encoding more than one meaning of the word. This has the consequence that the encoder and decoder recurrent networks in neural machine translation need to spend substantial amount of their capacity in disambiguating source and target words based on the context which is defined by a source sentence. Based on this observation, in this paper we propose to contextualize the word embedding vectors using a nonlinear bag-of-words representation of the source sentence. Additionally, we propose to represent special tokens (such as numbers, proper nouns and acronyms) with typed symbols to facilitate translating those words that are not well-suited to be translated via continuous vectors. The experiments on En-Fr and En-De reveal that the proposed approaches of contextualization and symbolization improves the translation quality of neural machine translation systems significantly.",
"title": ""
},
{
"docid": "7e7651261be84e2e05cde0ac9df69e6d",
"text": "Searching a large database to find a sequence that is most similar to a query can be prohibitively expensive, particularly if individual sequence comparisons involve complex operations such as warping. To achieve scalability, \"pruning\" heuristics are typically employed to minimize the portion of the database that must be searched with more complex matching. We present an approximate pruning technique which involves embedding sequences in a Euclidean space. Sequences are embedded using a convolutional network with a form of attention that integrates over time, trained on matching and non-matching pairs of sequences. By using fixed-length embeddings, our pruning method effectively runs in constant time, making it many orders of magnitude faster than full dynamic time warping-based matching for large datasets. We demonstrate our approach on a large-scale musical score-to-audio recording retrieval task.",
"title": ""
},
{
"docid": "5d80ce0bffd5bc2016aac657669a98de",
"text": "Information and Communication Technology (ICT) has a great impact on social wellbeing, economic growth and national security in todays world. Generally, ICT includes computers, mobile communication devices and networks. ICT is also embraced by a group of people with malicious intent, also known as network intruders, cyber criminals, etc. Confronting these detrimental cyber activities is one of the international priorities and important research area. Anomaly detection is an important data analysis task which is useful for identifying the network intrusions. This paper presents an in-depth analysis of four major categories of anomaly detection techniques which include classification, statistical, information theory and clustering. The paper also discusses research challenges with the datasets used for network intrusion detection. & 2015 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "aa10bf4f41ca866c8d59d9d703321bd2",
"text": "This paper argues against the moral Turing test (MTT) as a framework for evaluating the moral performance of autonomous systems. Though the term has been carefully introduced, considered, and cautioned about in previous discussions (Allen et al. in J Exp Theor Artif Intell 12(3):251–261, 2000; Allen and Wallach 2009), it has lingered on as a touchstone for developing computational approaches to moral reasoning (Gerdes and Øhrstrøm in J Inf Commun Ethics Soc 13(2):98–109, 2015). While these efforts have not led to the detailed development of an MTT, they nonetheless retain the idea to discuss what kinds of action and reasoning should be demanded of autonomous systems. We explore the flawed basis of an MTT in imitation, even one based on scenarios of morally accountable actions. MTT-based evaluations are vulnerable to deception, inadequate reasoning, and inferior moral performance vis a vis a system’s capabilities. We propose verification—which demands the design of transparent, accountable processes of reasoning that reliably prefigure the performance of autonomous systems—serves as a superior framework for both designer and system alike. As autonomous social robots in particular take on an increasing range of critical roles within society, we conclude that verification offers an essential, albeit challenging, moral measure of their design and performance.",
"title": ""
},
{
"docid": "d066c07fc64cf91f32be6ccf83761789",
"text": "This study tests the hypothesis that chewing gum leads to cognitive benefits through improved delivery of glucose to the brain, by comparing the cognitive performance effects of gum and glucose administered separately and together. Participants completed a battery of cognitive tests in a fully related 2 x 2 design, where one factor was Chewing Gum (gum vs. mint sweet) and the other factor was Glucose Co-administration (consuming a 25 g glucose drink vs. consuming water). For four tests (AVLT Immediate Recall, Digit Span, Spatial Span and Grammatical Transformation), beneficial effects of chewing and glucose were found, supporting the study hypothesis. However, on AVLT Delayed Recall, enhancement due to chewing gum was not paralleled by glucose enhancement, suggesting an alternative mechanism. The glucose delivery model is supported with respect to the cognitive domains: working memory, immediate episodic long-term memory and language-based attention and processing speed. However, some other mechanism is more likely to underlie the facilitatory effect of chewing gum on delayed episodic long-term memory.",
"title": ""
},
{
"docid": "ae937be677ca7c0714bde707816171ff",
"text": "The authors examined how time orientation and morningness-eveningness relate to 2 forms of procrastination: indecision and avoidant forms. Participants were 509 adults (M age = 49.78 years, SD = 6.14) who completed measures of time orientation, morningness-eveningness, decisional procrastination (i.e., indecision), and avoidant procrastination. Results showed that morningness was negatively related to avoidant procrastination but not decisional procrastination. Overall, the results indicated different temporal profiles for indecision and avoidant procrastinations. Avoidant procrastination related to low future time orientation and low morningness, whereas indecision related to both (a) high negative and high positive past orientations and (b) low present-hedonistic and low future time orientations. The authors inferred that distinct forms of procrastination seem different on the basis of dimensions of time.",
"title": ""
},
{
"docid": "07dceb2855985074b989dd5dc0b65808",
"text": "The significant increase in energy consumption and the rapid development of renewable energy, such as solar power and wind power, have brought huge challenges to energy security and the environment, which, in the meantime, stimulate the development of energy networks toward a more intelligent direction. Smart meters are the most fundamental components in the intelligent energy networks (IENs). In addition to measuring energy flows, smart energy meters can exchange the information on energy consumption and the status of energy networks between utility companies and consumers. Furthermore, smart energy meters can also be used to monitor and control home appliances and other devices according to the individual consumer's instruction. This paper systematically reviews the development and deployment of smart energy meters, including smart electricity meters, smart heat meters, and smart gas meters. By examining various functions and applications of smart energy meters, as well as associated benefits and costs, this paper provides insights and guidelines regarding the future development of smart meters.",
"title": ""
},
{
"docid": "034ace838fa1478ffbb9d25b405c4c21",
"text": "In recent years, DNA exoneration cases have shed light on the problem of false confessions and the wrongful convictions that result. Drawing on basic psychological principles and methods, an extensive body of research has focused on the psychology of confessions. This article describes the processes of interrogation by which police assess whether a suspect is lying or telling the truth and the techniques used to elicit confessions from those deemed deceptive. The problem of false confessions emphasizes personal and situational factors that put innocent people at risk in the interrogation room. Turning from the causes of false confessions to their consequences, research shows that confession evidence can bias juries, judges, lay witnesses, and forensic examiners. Finally, empirically based proposals for the reform of policy and practice include a call for the mandatory video recording of interrogations, blind testing in forensic crime labs, and use of confession experts in court.",
"title": ""
},
{
"docid": "4d11fb2e8043e4f7cce009e0af65af86",
"text": "Various hand-crafted features and metric learning methods prevail in the field of person re-identification. Compared to these methods, this paper proposes a more general way that can learn a similarity metric from image pixels directly. By using a “siamese” deep neural network, the proposed method can jointly learn the color feature, texture feature and metric in a unified framework. The network has a symmetry structure with two sub-networks which are connected by Cosine function. To deal with the big variations of person images, binomial deviance is used to evaluate the cost between similarities and labels, which is proved to be robust to outliers. Compared to existing researches, a more practical setting is studied in the experiments that is training and test on different datasets (cross dataset person re-identification). Both in “intra dataset” and “cross dataset” settings, the superiorities of the proposed method are illustrated on VIPeR and PRID.",
"title": ""
},
{
"docid": "346ee5be7c74b28f7090c909861d66ac",
"text": "This paper introduces a new framework to construct fast and efficient sensing matrices for practical compressive sensing, called Structurally Random Matrix (SRM). In the proposed framework, we prerandomize the sensing signal by scrambling its sample locations or flipping its sample signs and then fast-transform the randomized samples and finally, subsample the resulting transform coefficients to obtain the final sensing measurements. SRM is highly relevant for large-scale, real-time compressive sensing applications as it has fast computation and supports block-based processing. In addition, we can show that SRM has theoretical sensing performance comparable to that of completely random sensing matrices. Numerical simulation results verify the validity of the theory and illustrate the promising potentials of the proposed sensing framework.",
"title": ""
},
{
"docid": "693dd8eb0370259c4ee5f8553de58443",
"text": "Most research in Interactive Storytelling (IS) has sought inspiration in narrative theories issued from contemporary narratology to either identify fundamental concepts or derive formalisms for their implementation. In the former case, the theoretical approach gives raise to empirical solutions, while the latter develops Interactive Storytelling as some form of “computational narratology”, modeled on computational linguistics. In this paper, we review the most frequently cited theories from the perspective of IS research. We discuss in particular the extent to which they can actually inspire IS technologies and highlight key issues for the effective use of narratology in IS.",
"title": ""
},
{
"docid": "1e3e56f6c57e39a73c7e0bc8cf6d306a",
"text": "BACKGROUND AND PURPOSE\nOnly a few investigators have described the involvement of the perineal muscles in the process of human erection. The aim of this research was to evaluate a re-education program for men with erection problems of different etiologies.\n\n\nSUBJECTS AND METHODS\nFifty-one patients with erectile dysfunction were treated with pelvic-floor exercises, biofeedback, and electrical stimulation.\n\n\nRESULTS\nThe results of the interventions can be summarized as follows: 24 patients (47%) regained a normal erection, 12 patients (24%) improved, and 6 patients (12%) did not make any progress. Nine patients (18%) did not complete the therapy. On the basis of several variables, a prediction equation was generated to determine the factors that would predict the effect of the interventions. The outcome was most favorable in men with venous-occlusive dysfunction.\n\n\nDISCUSSION AND CONCLUSION\nComparison of the results of the physical therapy protocol reported here with those obtained for other interventions reported in the literature shows that a pelvic-floor muscle program may be a noninvasive alternative for the treatment of patients with erectile dysfunction caused by venous occlusion.",
"title": ""
},
{
"docid": "1ffe0a1612214af88315a5a751d3bb4f",
"text": "In recent years, it is getting attention for renewable energy sources such as solar energy, fuel cells, batteries or ultracapacitors for distributed power generation systems. This paper proposes a general mathematical model of solar cells and Matlab/Simulink software based simulation of this model has been visually programmed. Proposed model can be used with other hybrid systems to develop solar cell simulations. Also, all equations are performed by using Matlab/Simulink programming.",
"title": ""
},
{
"docid": "d59d1ac7b3833ee1e60f7179a4a9af99",
"text": "s Cloud computing moved away from personal computers and the individual enterprise application server to services provided by the cloud of computers. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years. Currently IT industry needs Cloud computing services to provide best opportunities to real world. Cloud computing is in initial stages, with many issues still to be addressed. The objective of this paper is to explore the different issues of cloud computing and identify important research opportunities in this increasingly important area. We present different design challenges categorized under security challenges, Data Challenges, Performance challenges and other Design Challenges. GJCST Classification : C.1.4, C.2.1 Research Issues in Cloud Computing Strictly as per the compliance and regulations of: Research Issues in Cloud Computing V. Krishna Reddy , B. Thirumala Rao , Dr. L.S.S. Reddy , P. Sai Kiran ABSTRACT : Cloud computing moved away from personal computers and the individual enterprise application server to services provided by the cloud of computers. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years. Currently IT industry needs Cloud computing services to provide best opportunities to real world. Cloud computing is in initial stages, with many issues still to be addressed. The objective of this paper is to explore the different issues of cloud computing and identify important research opportunities in this increasingly important area. We present different design challenges categorized under security challenges, Data Challenges, Performance challenges and other Design Challenges. Cloud computing moved away from personal computers and the individual enterprise application server to services provided by the cloud of computers. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years. Currently IT industry needs Cloud computing services to provide best opportunities to real world. Cloud computing is in initial stages, with many issues still to be addressed. The objective of this paper is to explore the different issues of cloud computing and identify important research opportunities in this increasingly important area. We present different design challenges categorized under security challenges, Data Challenges, Performance challenges and other Design Challenges.",
"title": ""
},
{
"docid": "bb42d7baa5a16f8da8230a47f766183b",
"text": "It has been known that using different representations of either queries or documents, or different retrieval techniques retrieves different sets of documents. Recent work suggests that significant improvements in retrieval performance can be achieved by combining multiple representations or multiple retrieval techniques. In this paper we propose a simple method for retrieving different documents within a single query representation, a single document representation and a single retrieval technique. We classify the types of documents, and describe the properties of weighting schemes. Then, we explain that different properties of weighting schemes may retrieve different types of documents. Experimental results show that significant improvements can be obtained by combining the retrieval results from different properties of weighting schemes.",
"title": ""
},
{
"docid": "c10a6f61d7202184785cf68150ecce80",
"text": "This paper gives a comprehensive analysis of security with respect to NFC. It is not limited to a certain application of NFC, but it uses a systematic approach to analyze the various aspects of security whenever an NFC interface is used. The authors want to clear up many misconceptions about security and NFC in various applications. The paper lists the threats, which are applicable to NFC, and describes solutions to protect against these threats. All of this is given in the context of currently available NFC hardware, NFC applications and possible future developments of NFC.",
"title": ""
}
] |
scidocsrr
|
604e9ba750dbdf4b5681ca568a415a62
|
Compact Quarter-Wave Resonator and Its Applications to Miniaturized Diplexer and Triplexer
|
[
{
"docid": "e4aeb9f472b9e81691472c17da95e9df",
"text": "A novel high-gain active composite right/left-handed (CRLH) metamaterial leaky-wave antenna (LWA) is presented. This antenna, which is designed to operate at broadside, is constituted by passive CRLH leaky-wave sections interconnected by amplifiers, which regenerate the power progressively leaked out of the structure in the radiation process in order to increase the effective aperture of the antenna and thereby its gain. The gain is further enhanced by a matching regeneration effect induced by the quasi-unilateral nature of the amplifiers. Both the cases of quasi-uniform and binomial field distributions, corresponding to maximum directivity and minimum side-lobe level, respectively, have been described. An active LWA prototype is demonstrated in transmission mode with a gain enhancement of 8.9 dB compared to its passive counterpart. The proposed antenna can attain an arbitrarily high gain by simple increase of the length of the structure, without penalty in terms of return loss and without requiring a complicated feeding network like conventional array antennas",
"title": ""
},
{
"docid": "fe014ab328ff093deadca25eab9d965f",
"text": "Since conventional microstrip hairpin filter and diplexer are inherently formed by coupled-line resonators, spurious response and poor isolation performance are unavoidable. This letter presents a simple technique that is suitable for an inhomogeneous structure such as microstrip to cure such poor performances. The technique is based on the stepped impedance coupled-line resonator and is verified by the experimental results of the designed 0.9GHz/1.8GHz microstrip hairpin diplexer.",
"title": ""
}
] |
[
{
"docid": "1693a1bd7874e3df97e4656801d7d52a",
"text": "Many enterprise applications require the use of object-oriented middleware and message-oriented middleware in combination. Middleware-mediated transactions have been proposed as a transaction model to address reliability of such applications; they extend distributed object transactions to include messageoriented transactions. In this paper, we present three message queuing patterns that we have found useful for implementing middleware-mediated transactions. We discuss and show how the patterns can be applied to support guaranteed compensation in the engineering of transactional enterprise applications.",
"title": ""
},
{
"docid": "f805fe7a0c1ac413a254d6e48ceb00f8",
"text": "BACKGROUND\nAlcohol-induced blackouts, or memory loss for all or portions of events that occurred during a drinking episode, are reported by approximately 50% of drinkers and are associated with a wide range of negative consequences, including injury and death. As such, identifying the factors that contribute to and result from alcohol-induced blackouts is critical in developing effective prevention programs. Here, we provide an updated review (2010 to 2015) of clinical research focused on alcohol-induced blackouts, outline practical and clinical implications, and provide recommendations for future research.\n\n\nMETHODS\nA comprehensive, systematic literature review was conducted to examine all articles published between January 2010 through August 2015 that focused on vulnerabilities, consequences, and possible mechanisms for alcohol-induced blackouts.\n\n\nRESULTS\nTwenty-six studies reported on alcohol-induced blackouts. Fifteen studies examined prevalence and/or predictors of alcohol-induced blackouts. Six publications described the consequences of alcohol-induced blackouts, and 5 studies explored potential cognitive and neurobiological mechanisms underlying alcohol-induced blackouts.\n\n\nCONCLUSIONS\nRecent research on alcohol-induced blackouts suggests that individual differences, not just alcohol consumption, increase the likelihood of experiencing an alcohol-induced blackout, and the consequences of alcohol-induced blackouts extend beyond the consequences related to the drinking episode to include psychiatric symptoms and neurobiological abnormalities. Prospective studies and a standardized assessment of alcohol-induced blackouts are needed to fully characterize factors associated with alcohol-induced blackouts and to improve prevention strategies.",
"title": ""
},
{
"docid": "de50bb6d1f1d09ddc6a3da3de79d12d2",
"text": "This paper is to describe an intelligent motorized wheel chair for handicapped person using voice and touch screen technology. It enables a disabled person to move around independently using a touch screen and a voice recognition application which is interfaced with motors through microcontroller. When we want to change the direction, the touch screen sensor is modeled to direct the user to required destination using direction keys on the screen and that values are given to microcontroller. Depending on the direction selected on the touch screen, microcontroller controls the wheel chair directions. This can also be controlled through simple voice commands using voice controller. The speech recognition system is easy to use programmable speech recognition circuit that is the system to be trained the words (or vocal utterances) the user wants the circuit to recognize. The speed controller works by varying the average voltage sent to the motor. This is done by switching the motors supply on and off very quickly using PWM technique. The methodology adopted is based on grouping a microcontroller with a speech recognition system and touch screen. Keywords— Speech recognition system, Touch Screen sensor,",
"title": ""
},
{
"docid": "90b21e8edcb993f472fe516dff22ae84",
"text": "Urticaria is a kind of skin rash that sometimes caused by allergic reactions. Acute viral infection, stress, pressure, exercise and sunlight are some other causes of urticaria. However, chronic urticaria and angioedema could be either idiopathic or caused by autoimmune reaction. They last more than six weeks and could even persist for a very long time. It is thought that the level of C-reactive protein CRP increases and the level of Erythrocyte sedimentation rate (ESR) decreases in patients with chronic urticaria. Thirty four patients with chronic or recurrent urticaria were selected for the treatment with wet cupping. Six of them, because of having a history of recent infection/cold urticaria, were eliminated and the remaining 28 were chosen for this study. ESR and CRP were measured in these patients aged 21-59, comprising 12 females and 16 males, ranged from 5-24 mm/h for ESR with a median 11 mm/h and 3.3-31.2 mg/L with a median of 11.95 mg/L for CRP before and after phlebotomy (250-450mL) which was performed as a control for wet cupping therapy. Three weeks after phlebotomy, wet cupping was performed on the back of these patients between two shoulders and the levels of ESR and CRP were measured again three weeks after wet cupping. The changes were observed in the level of CRP and ESR after phlebotomy being negligible. However, the level of CRP with a median 11.95 before wet cupping dramatically dropped to 1.1 after wet cupping. The level ESR also with a median 11 before wet cupping rose to 15.5 after wet cupping therapy. The clear correlation between the urticaria/angioedema and the rise of CRP was observed as was anticipated. No recurrence has been observed on twenty five of these patients and three of them are still recovering from the lesions.",
"title": ""
},
{
"docid": "c697ce69b5ba77cce6dce93adaba7ee0",
"text": "Online social networks play a major role in modern societies, and they have shaped the way social relationships evolve. Link prediction in social networks has many potential applications such as recommending new items to users, friendship suggestion and discovering spurious connections. Many real social networks evolve the connections in multiple layers (e.g. multiple social networking platforms). In this article, we study the link prediction problem in multiplex networks. As an example, we consider a multiplex network of Twitter (as a microblogging service) and Foursquare (as a location-based social network). We consider social networks of the same users in these two platforms and develop a meta-path-based algorithm for predicting the links. The connectivity information of the two layers is used to predict the links in Foursquare network. Three classical classifiers (naive Bayes, support vector machines (SVM) and K-nearest neighbour) are used for the classification task. Although the networks are not highly correlated in the layers, our experiments show that including the cross-layer information significantly improves the prediction performance. The SVM classifier results in the best performance with an average accuracy of 89%.",
"title": ""
},
{
"docid": "b6e15d3931080de9a8f92d5b6e4c19e0",
"text": "A low-profile, electrically small antenna with omnidirectional vertically polarized radiation similar to a short monopole antenna is presented. The antenna features less than lambda/40 dimension in height and lambda/10 or smaller in lateral dimension. The antenna is matched to a 50 Omega coaxial line without the need for external matching. The geometry of the antenna is derived from a quarter-wave transmission line resonator fed at an appropriate location to maximize current through the short-circuited end. To improve radiation from the vertical short-circuited pin, the geometry is further modified through superposition of additional resonators placed in a parallel arrangement. The lateral dimension of the antenna is miniaturized by meandering and turning the microstrip lines into form of a multi-arm spiral. The meandering between the short-circuited end and the feed point also facilitates the impedance matching. Through this technique, spurious horizontally polarized radiation is also minimized and a radiation pattern similar to a short dipole is achieved. The antenna is designed, fabricated and measured. Parametric studies are performed to explore further size reduction and performance improvements. Based on the studies, a dual-band antenna with enhanced gain is realized. The measurements verify that the proposed fabricated antennas feature excellent impedance match, omnidirectional radiation in the horizontal plane and low levels of cross-polarization.",
"title": ""
},
{
"docid": "7292ceb6718d0892a154d294f6434415",
"text": "This article illustrates the application of a nonlinear system identification technique to the problem of STLF. Five NARX models are estimated using fixed-size LS-SVM, and two of the models are later modified into AR-NARX structures following the exploration of the residuals. The forecasting performance, assessed for different load series, is satisfactory. The MSE levels on the test data are below 3% in most cases. The models estimated with fixed-size LS-SVM give better results than a linear model estimated with the same variables and also better than a standard LS-SVM in dual space estimated using only the last 1000 data points. Furthermore, the good performance of the fixed-size LS-SVM is obtained based on a subset of M = 1000 initial support vectors, representing a small fraction of the available sample. Further research on a more dedicated definition of the initial input variables (for example, incorporation of external variables to reflect industrial activity, use of explicit seasonal information) might lead to further improvements and the extension toward other types of load series.",
"title": ""
},
{
"docid": "609bbd3b066cf7a56d11ea545c0b0e71",
"text": "Subgingival margins are often required for biologic, mechanical, or esthetic reasons. Several investigations have demonstrated that their use is associated with adverse periodontal reactions, such as inflammation or recession. The purpose of this prospective randomized clinical study was to determine if two different subgingival margin designs influence the periodontal parameters and patient perception. Deep chamfer and feather-edge preparations were compared on 58 patients with 6 months follow-up. Statistically significant differences were present for bleeding on probing, gingival recession, and patient satisfaction. Feather-edge preparation was associated with increased bleeding on probing and deep chamfer with increased recession; improved patient comfort was registered with chamfer margin design. Subgingival margins are technique sensitive, especially when feather-edge design is selected. This margin design may facilitate soft tissue stability but can expose the patient to an increased risk of gingival inflammation.",
"title": ""
},
{
"docid": "c19396e701c117d6bae2f35ce8138f7c",
"text": "This paper presents the design results of the multi-band, multi-mode software-defined radar (SDR) system. The SDR platform consists of a multi-band RF modules of S, X, K-bands, and a multi-mode digital modules with a waveform generator for CW, Pulse, FMCW, and LFM Chirp as well as reconfigurable SDR-GUI software module for user interface. This platform can be used for various applications such as security monitoring, collision avoidance, traffic monitoring, and a radar imaging.",
"title": ""
},
{
"docid": "2bdf19ecf701eae1b3e9c3f9cf81387d",
"text": "Log file correlation is related to two distinct activities: Intrusion Detection and Network Forensics. It is more important than ever that these two disciplines work together in a mutualistic relationship in order to avoid Points of Failure. This paper, intended as a tutorial for those dealing with such issues, presents an overview of log analysis and correlation, with special emphasis on the tools and techniques for managing them within a network forensics context. In particular it will cover the most important parts of Log Analysis and correlation, starting from the Acquisition Process until the analysis.",
"title": ""
},
{
"docid": "e579e6761bc7fa50e76d0141fe848892",
"text": "Vehicular Ad-hoc Network (VANET) is an infrastructure less network. It provides enhancement in safety related techniques and comfort while driving. It enables vehicles to share information regarding safety and traffic analysis. The scope of VANET application has increased with the recent advances in technology and development of smart cities across the world. VANET provide a self aware system that has major impact in enhancement of traffic services and in reducing road accidents. Information shared in this system is time sensitive and requires robust and quick forming network connections. VANET, being a wireless ad hoc network, serves this purpose completely but is prone to security attacks. Highly dynamic connections, sensitive information sharing and time sensitivity of this network, make it an eye-catching field for attackers. This paper represents a literature survey on VANET with primary concern of the security issues and challenges with it. Features of VANET, architecture, security requisites, attacker type and possible attacks in VANET are considered in this survey paper.",
"title": ""
},
{
"docid": "0bc6bffc7ec4cbc1c31615e5143c9d52",
"text": "In this paper we describe and evaluate methods to perform ensemble prediction in neural machine translation (NMT). We compare two methods of ensemble set induction: sampling parameter initializations for an NMT system, which is a relatively established method in NMT (Sutskever et al., 2014), and NMT systems translating from different source languages into the same target language, i.e., multi-source ensembles, a method recently introduced by Firat et al. (2016). We are motivated by the observation that for different language pairs systems make different types of mistakes. We propose several methods with different degrees of parameterization to combine individual predictions of NMT systems so that they mutually compensate for each other’s mistakes and improve overall performance. We find that the biggest improvements can be obtained from a context-dependent weighting scheme for multi-source ensembles. This result offers stronger support for the linguistic motivation of using multi-source ensembles than previous approaches. Evaluation is carried out for German and French into English translation. The best multi-source ensemble method achieves an improvement of up to 2.2 BLEU points over the strongest singlesource ensemble baseline, and a 2 BLEU improvement over a multi-source ensemble baseline.",
"title": ""
},
{
"docid": "e869a183764dc93f778ac2ce2a7608c8",
"text": "We describe a new software framework for fast training of generalized linear models. The framework, named Snap Machine Learning (Snap ML), combines recent advances in machine learning systems and algorithms in a nested manner to reflect the hierarchical architecture of modern computing systems. We prove theoretically that such a hierarchical system can accelerate training in distributed environments where intra-node communication is cheaper than inter-node communication. Additionally, we provide a review of the implementation of Snap ML in terms of GPU acceleration, pipelining, communication patterns and software architecture, highlighting aspects that were critical for achieving high performance. We evaluate the performance of Snap ML in both single-node and multi-node environments, quantifying the benefit of the hierarchical scheme and the data streaming functionality, and comparing with other widely-used machine learning software frameworks. Finally, we present a logistic regression benchmark on the Criteo Terabyte Click Logs dataset and show that Snap ML achieves the same test loss an order of magnitude faster than any of the previously reported results, including those obtained using TensorFlow and scikit-learn.",
"title": ""
},
{
"docid": "4dd59c743d7f4ae1f6a05f20a4bd6935",
"text": "Self-attentive feed-forward sequence models have been shown to achieve impressive results on sequence modeling tasks including machine translation [31], image generation [30] and constituency parsing [18], thereby presenting a compelling alternative to recurrent neural networks (RNNs) which has remained the de-facto standard architecture for many sequence modeling problems to date. Despite these successes, however, feed-forward sequence models like the Transformer [31] fail to generalize in many tasks that recurrent models handle with ease (e.g. copying strings or even simple logical inference when the string or formula lengths exceed those observed at training time [28]). Moreover, and in contrast to RNNs, the Transformer model is not computationally universal, limiting its theoretical expressivity. In this paper we propose the Universal Transformer which addresses these practical and theoretical shortcomings and we show that it leads to improved performance on several tasks. Instead of recurring over the individual symbols of sequences like RNNs, the Universal Transformer repeatedly revises its representations of all symbols in the sequence with each recurrent step. In order to combine information from different parts of a sequence, it employs a self-attention mechanism in every recurrent step. Assuming sufficient memory, its recurrence makes the Universal Transformer computationally universal. We further employ an adaptive computation time (ACT) mechanism to allow the model to dynamically adjust the number of times the representation of each position in a sequence is revised. Beyond saving computation, we show that ACT can improve the accuracy of the model. Our experiments show that on various algorithmic tasks and a diverse set of large-scale language understanding tasks the Universal Transformer generalizes significantly better and outperforms both a vanilla Transformer and an LSTM in machine translation, and achieves a new state of the art on the bAbI linguistic reasoning task and the challenging LAMBADA language modeling task.",
"title": ""
},
{
"docid": "af0a1a8af70423ec09e0bb1e47f2e3f6",
"text": "Infants are experts at playing, with an amazing ability to generate novel structured behaviors in unstructured environments that lack clear extrinsic reward signals. We seek to replicate some of these abilities with a neural network that implements curiosity-driven intrinsic motivation. Using a simple but ecologically naturalistic simulated environment in which the agent can move and interact with objects it sees, the agent learns a world model predicting the dynamic consequences of its actions. Simultaneously, the agent learns to take actions that adversarially challenge the developing world model, pushing the agent to explore novel and informative interactions with its environment. We demonstrate that this policy leads to the self-supervised emergence of a spectrum of complex behaviors, including ego motion prediction, object attention, and object gathering. Moreover, the world model that the agent learns supports improved performance on object dynamics prediction and localization tasks. Our results are a proof-of-principle that computational models of intrinsic motivation might account for key features of developmental visuomotor learning in infants.",
"title": ""
},
{
"docid": "23959bc6c0075decbd879bf29383589b",
"text": "Human action recognition from RGB-D (Red, Green, Blue and Depth) data has attracted increasing attention since the first work reported in 2010. Over this period, many benchmark datasets have been created to facilitate the development and evaluation of new algorithms. This raises the question of which dataset to select and how to use it in providing a fair and objective comparative evaluation against state-of-the-art methods. To address this issue, this paper provides a comprehensive review of the most commonly used action recognition related RGB-D video datasets, including 27 single-view datasets, 10 multi-view datasets, and 7 multi-person datasets. The detailed information and analysis of these datasets is a useful resource in guiding insightful selection of datasets for future research. In addition, the issues with current algorithm evaluation vis-á-vis limitations of the available datasets and evaluation protocols are also highlighted; resulting in a number of recommendations for collection of new datasets and use of evaluation protocols.",
"title": ""
},
{
"docid": "dfaccd0aa36efbafe5cb1101f9d4f93e",
"text": "At present, the modern manufacturing and management concepts such as digitalization, networking and intellectualization have been popularized in the industry, and the degree of industrial automation and information has been improved unprecedentedly. Industrial products are everywhere in the world. They are involved in design, manufacture, operation, maintenance and recycling. The whole life cycle involves huge amounts of data. Improving data quality is very important for data mining and data analysis. To solve the problem of data inconsistency is a very important part of improving data quality.",
"title": ""
},
{
"docid": "f93dac471e3d7fa79c740b35fbde0558",
"text": "In settings where only unlabeled speech data is available, speech technology needs to be developed without transcriptions, pronunciation dictionaries, or language modelling text. A similar problem is faced when modeling infant language acquisition. In these cases, categorical linguistic structure needs to be discovered directly from speech audio. We present a novel unsu-pervised Bayesian model that segments unlabeled speech and clusters the segments into hypothesized word groupings. The result is a complete unsupervised tokenization of the input speech in terms of discovered word types. In our approach, a potential word segment (of arbitrary length) is embedded in a fixed-dimensional acoustic vector space. The model, implemented as a Gibbs sampler, then builds a whole-word acoustic model in this space while jointly performing segmentation. We report word error rates in a small-vocabulary connected digit recognition task by mapping the unsupervised decoded output to ground truth transcriptions. The model achieves around 20% error rate, outperforming a previous HMM-based system by about 10% absolute. Moreover, in contrast to the baseline, our model does not require a pre-specified vocabulary size.",
"title": ""
},
{
"docid": "2080ae8b4318a29649ebc0c45019083e",
"text": "For reliable driving assistance or automated driving, pedestrian detection must be robust and performed in real time. In pedestrian detection, a linear support vector machine (linSVM) is popularly used as a classifier but exhibits degraded performance due to the multipostures of pedestrians. Kernel SVM (KSVM) could be a better choice for pedestrian detection, but it has a disadvantage in that it requires too much more computation than linSVM. In this paper, the cascade implementation of the additive KSVM (AKSVM) is proposed for the application of pedestrian detection. AKSVM avoids kernel expansion by using lookup tables, and it is implemented in cascade form, thereby speeding up pedestrian detection. The cascade implementation is trained by a genetic algorithm such that the computation time is minimized, whereas the detection accuracy is maximized. In experiments, the proposed method is tested with the INRIA dataset. The experimental results indicate that the proposed method has better detection accuracy and reduced computation time compared with conventional methods.",
"title": ""
},
{
"docid": "714b94a5a88874ea3546fca13c702c26",
"text": "We investigate the problem of producing structured graph representations of visual scenes. Our work analyzes the role of motifs: regularly appearing substructures in scene graphs. We present new quantitative insights on such repeated structures in the Visual Genome dataset. Our analysis shows that object labels are highly predictive of relation labels but not vice-versa. We also find that there are recurring patterns even in larger subgraphs: more than 50% of graphs contain motifs involving at least two relations. Our analysis motivates a new baseline: given object detections, predict the most frequent relation between object pairs with the given labels, as seen in the training set. This baseline improves on the previous state-of-the-art by an average of 3.6% relative improvement across evaluation settings. We then introduce Stacked Motif Networks, a new architecture designed to capture higher order motifs in scene graphs that further improves over our strong baseline by an average 7.1% relative gain. Our code is available at github.com/rowanz/neural-motifs.",
"title": ""
}
] |
scidocsrr
|
9632143dff7a9b0ff776d5ce7a1d8b4f
|
Acing the IOC Game: Toward Automatic Discovery and Analysis of Open-Source Cyber Threat Intelligence
|
[
{
"docid": "2ab6b91f6e5e01b3bb8c8e5c0fbdcf24",
"text": "Application markets such as Apple’s App Store and Google’s Play Store have played an important role in the popularity of smartphones and mobile devices. However, keeping malware out of application markets is an ongoing challenge. While recent work has developed various techniques to determine what applications do, no work has provided a technical approach to answer, what do users expect? In this paper, we present the first step in addressing this challenge. Specifically, we focus on permissions for a given application and examine whether the application description provides any indication for why the application needs a permission. We present WHYPER, a framework using Natural Language Processing (NLP) techniques to identify sentences that describe the need for a given permission in an application description. WHYPER achieves an average precision of 82.8%, and an average recall of 81.5% for three permissions (address book, calendar, and record audio) that protect frequentlyused security and privacy sensitive resources. These results demonstrate great promise in using NLP techniques to bridge the semantic gap between user expectations and application functionality, further aiding the risk assessment of mobile applications.",
"title": ""
}
] |
[
{
"docid": "deba3a2c56f32f15aa0b41e9ff16d2e3",
"text": "This article reviews what is currently known about how men and women respond to the presentation of visual sexual stimuli. While the assumption that men respond more to visual sexual stimuli is generally empirically supported, previous reports of sex differences are confounded by the variable content of the stimuli presented and measurement techniques. We propose that the cognitive processing stage of responding to sexual stimuli is the first stage in which sex differences occur. The divergence between men and women is proposed to occur at this time, reflected in differences in neural activation, and contribute to previously reported sex differences in downstream peripheral physiological responses and subjective reports of sexual arousal. Additionally, this review discusses factors that may contribute to the variability in sex differences observed in response to visual sexual stimuli. Factors include participant variables, such as hormonal state and socialized sexual attitudes, as well as variables specific to the content presented in the stimuli. Based on the literature reviewed, we conclude that content characteristics may differentially produce higher levels of sexual arousal in men and women. Specifically, men appear more influenced by the sex of the actors depicted in the stimuli while women's response may differ with the context presented. Sexual motivation, perceived gender role expectations, and sexual attitudes are possible influences. These differences are of practical importance to future research on sexual arousal that aims to use experimental stimuli comparably appealing to men and women and also for general understanding of cognitive sex differences.",
"title": ""
},
{
"docid": "06e3d228e9fac29dab7180e56f087b45",
"text": "Curiosity is thought to be an intrinsically motivated driving force for seeking information. Thus, the opportunity for an information gain (IG) should instil curiosity in humans and result in information gathering actions. To investigate if, and how, information acts as an intrinsic reward, a search task was set in a context of blurred background images which could be revealed by iterative clicking. The search task was designed such that it prevented efficient IG about the underlying images. Participants therefore had to trade between clicking regions with high search target probability or high expected image content information. Image content IG was established from “information-maps” based on participants exploration with the intention of understanding (1) the main theme of the image and (2) how interesting the image might appear to others. Note that IG is in this thesis not identical with the information theoretic concept of information gain, the quantities are however probably related. It was hypothesised that participants would be distracted by visually informative regions and that images independently rated as more interesting would yield higher image based IG. It was also hypothesised that image based IG would increase as a function of time. Results show that participants sometimes explored images driven by curiosity, and that there was considerable individual variation in which images participants were curious about. Independent interest ratings did not account for image based IG. The level of IG increased over trials, interestingly without affecting participants’ performance on the visual search task designed to prevent IG. Results support that IG is rewarding as participants learned to optimize IG over trials without compromising performance on the extrinsically motivated search; managing to both keep the cake and eat it.",
"title": ""
},
{
"docid": "853703c46af2dda7735e7783b56cba44",
"text": "PURPOSE\nWe compared the efficacy and safety of sodium hyaluronate (SH) and carboxymethylcellulose (CMC) in treating mild to moderate dry eye.\n\n\nMETHODS\nSixty-seven patients with mild to moderate dry eye were enrolled in this prospective, randomized, blinded study. They were treated 6 times a day with preservative-free unit dose formula eyedrops containing 0.1% SH or 0.5% CMC for 8 weeks. Corneal and conjunctival staining with fluorescein, tear film breakup time, subjective symptoms, and adverse reactions were assessed at baseline, 4 weeks, and 8 weeks after treatment initiation.\n\n\nRESULTS\nThirty-two patients were randomly assigned to the SH group and 33 were randomly assigned to the CMC group. Both the SH and CMC groups showed statistically significant improvements in corneal and conjunctival staining sum scores, tear film breakup time, and dry eye symptom score at 4 and 8 weeks after treatment initiation. However, there were no statistically significant differences in any of the indices between the 2 treatment groups. There were no significant adverse reactions observed during follow-up.\n\n\nCONCLUSIONS\nThe efficacies of SH and CMC were equivalent in treating mild to moderate dry eye. SH and CMC preservative-free artificial tear formulations appropriately manage dry eye sign and symptoms and show safety and efficacy when frequently administered in a unit dose formula.",
"title": ""
},
{
"docid": "8f70026ff59ed1ae54ab5b6dadd2a3da",
"text": "Exoskeleton suit is a kind of human-machine robot, which combines the humans intelligence with the powerful energy of mechanism. It can help people to carry heavy load, walking on kinds of terrains and have a broadly apply area. Though many exoskeleton suits has been developed, there need many complex sensors between the pilot and the exoskeleton system, which decrease the comfort of the pilot. Sensitivity amplification control (SAC) is a method applied in exoskeleton system without any sensors between the pilot and the exoskeleton. In this paper simulation research was made to verify the feasibility of SAC include a simple 1-dof model and a swing phase model of 3-dof. A PID controller was taken to describe the human-machine interface model. Simulation results show the human only need to exert a scale-down version torque compared with the actuator and decrease the power consumes of the pilot.",
"title": ""
},
{
"docid": "2b00c07248c468447e12aff67c52a192",
"text": "Video fluoroscopy is commonly used in the study of swallowing kinematics. However, various procedures used in linear measurements obtained from video fluoroscopy may contribute to increased variability or measurement error. This study evaluated the influence of calibration referent and image rotation on measurement variability for hyoid and laryngeal displacement during swallowing. Inter- and intrarater reliabilities were also estimated for hyoid and laryngeal displacement measurements across conditions. The use of different calibration referents did not contribute significantly to variability in measures of hyoid and laryngeal displacement but image rotation affected horizontal measures for both structures. Inter- and intrarater reliabilities were high. Using the 95% confidence interval as the error index, measurement error was estimated to range from 2.48 to 3.06 mm. These results address procedural decisions for measuring hyoid and laryngeal displacement in video fluoroscopic swallowing studies.",
"title": ""
},
{
"docid": "296120e8ac6a03c8079fe343058f26ff",
"text": "OBJECTIVE\nDegenerative ataxias in children present a rare condition where effective treatments are lacking. Intensive coordinative training based on physiotherapeutic exercises improves degenerative ataxia in adults, but such exercises have drawbacks for children, often including a lack of motivation for high-frequent physiotherapy. Recently developed whole-body controlled video game technology might present a novel treatment strategy for highly interactive and motivational coordinative training for children with degenerative ataxias.\n\n\nMETHODS\nWe examined the effectiveness of an 8-week coordinative training for 10 children with progressive spinocerebellar ataxia. Training was based on 3 Microsoft Xbox Kinect video games particularly suitable to exercise whole-body coordination and dynamic balance. Training was started with a laboratory-based 2-week training phase and followed by 6 weeks training in children's home environment. Rater-blinded assessments were performed 2 weeks before laboratory-based training, immediately prior to and after the laboratory-based training period, as well as after home training. These assessments allowed for an intraindividual control design, where performance changes with and without training were compared.\n\n\nRESULTS\nAtaxia symptoms were significantly reduced (decrease in Scale for the Assessment and Rating of Ataxia score, p = 0.0078) and balance capacities improved (dynamic gait index, p = 0.04) after intervention. Quantitative movement analysis revealed improvements in gait (lateral sway: p = 0.01; step length variability: p = 0.01) and in goal-directed leg placement (p = 0.03).\n\n\nCONCLUSIONS\nDespite progressive cerebellar degeneration, children are able to improve motor performance by intensive coordination training. Directed training of whole-body controlled video games might present a highly motivational, cost-efficient, and home-based rehabilitation strategy to train dynamic balance and interaction with dynamic environments in a large variety of young-onset neurologic conditions.\n\n\nCLASSIFICATION OF EVIDENCE\nThis study provides Class III evidence that directed training with Xbox Kinect video games can improve several signs of ataxia in adolescents with progressive ataxia as measured by SARA score, Dynamic Gait Index, and Activity-specific Balance Confidence Scale at 8 weeks of training.",
"title": ""
},
{
"docid": "b8de76afab03ad223fb4713b214e3fec",
"text": "Companies facing new requirements for governance are scrambling to buttress financial-reporting systems, overhaul board structures--whatever it takes to comply. But there are limits to how much good governance can be imposed from the outside. Boards know what they ought to be: seats of challenge and inquiry that add value without meddling and make CEOs more effective but not all-powerful. A board can reach that goal only if it functions as a high-performance team, one that is competent, coordinated, collegial, and focused on an unambiguous goal. Such entities don't just evolve; they must be constructed to an exacting blueprint--what the author calls board building. In this article, Nadler offers an agenda and a set of tools that boards can use to define and achieve their objectives. It's important for a board to conduct regular self-assessments and to pay attention to the results of those analyses. As a first step, the directors and the CEO should agree on which of the following common board models best fits the company: passive, certifying, engaged, intervening, or operating. The directors and the CEO should then analyze which business tasks are most important and allot sufficient time and resources to them. Next, the board should take inventory of each director's strengths to ensure that the group as a whole possesses the skills necessary to do its work. Directors must exert more influence over meeting agendas and make sure they have the right information at the right time and in the right format to perform their duties. Finally, the board needs to foster an engaged culture characterized by candor and a willingness to challenge. An ambitious board-building process, devised and endorsed both by directors and by management, can potentially turn a good board into a great one.",
"title": ""
},
{
"docid": "d6587e4d37742c25355296da3a718c41",
"text": "Vehicular Ad hoc Networks (VANETs) are classified as an application of Mobile Ad-hoc Networks (MANETs) that has the potential in improving road safety and providing Intelligent Transportation System (ITS). Vehicular communication system facilitates communication devices for exchange of information among vehicles and vehicles and Road Side Units (RSUs).The era of vehicular adhoc networks is now gaining attention and momentum. Researchers and developers have built VANET simulation tools to allow the study and evaluation of various routing protocols, various emergency warning protocols and others VANET applications. Simulation of VANET routing protocols and its applications is fundamentally different from MANETs simulation because in VANETs, vehicular environment impose new issues and requirements, such as multi-path fading, roadside obstacles, trip models, traffic flow models, traffic lights, traffic congestion, vehicular speed and mobility, drivers behaviour etc. This paper presents a comparative study of various publicly available VANET simulation tools. Currently, there are network simulators, VANET mobility generators and VANET simulators are publicly available. In particular, this paper contrast their software characteristics, graphical user interface, accuracy of simulation, ease of use, popularity, input requirements, output visualization capabilities etc. Keywords-Ad-hoc network, ITS (Intelligent Transportation System), MANET, Simulation, VANET.",
"title": ""
},
{
"docid": "0a4f5a46948310cfce44a8749cd479df",
"text": "This paper presents a tutorial introduction to contemporary cryptography. The basic information theoretic and computational properties of classical and modern cryptographic systems are presented, followed by cryptanalytic examination of several important systems and an examination of the application of cryptography to the security of timesharing systems and computer networks. The paper concludes with a guide to the cryptographic literature.",
"title": ""
},
{
"docid": "be017adea5e5c5f183fd35ac2ff6b614",
"text": "In nationally representative yearly surveys of United States 8th, 10th, and 12th graders 1991-2016 (N = 1.1 million), psychological well-being (measured by self-esteem, life satisfaction, and happiness) suddenly decreased after 2012. Adolescents who spent more time on electronic communication and screens (e.g., social media, the Internet, texting, gaming) and less time on nonscreen activities (e.g., in-person social interaction, sports/exercise, homework, attending religious services) had lower psychological well-being. Adolescents spending a small amount of time on electronic communication were the happiest. Psychological well-being was lower in years when adolescents spent more time on screens and higher in years when they spent more time on nonscreen activities, with changes in activities generally preceding declines in well-being. Cyclical economic indicators such as unemployment were not significantly correlated with well-being, suggesting that the Great Recession was not the cause of the decrease in psychological well-being, which may instead be at least partially due to the rapid adoption of smartphones and the subsequent shift in adolescents' time use. (PsycINFO Database Record",
"title": ""
},
{
"docid": "dffe5305558e10a0ceba499f3a01f4d8",
"text": "A simple framework Probabilistic Multi-view Graph Embedding (PMvGE) is proposed for multi-view feature learning with many-to-many associations so that it generalizes various existing multi-view methods. PMvGE is a probabilistic model for predicting new associations via graph embedding of the nodes of data vectors with links of their associations. Multi-view data vectors with many-to-many associations are transformed by neural networks to feature vectors in a shared space, and the probability of new association between two data vectors is modeled by the inner product of their feature vectors. While existing multi-view feature learning techniques can treat only either of many-to-many association or non-linear transformation, PMvGE can treat both simultaneously. By combining Mercer’s theorem and the universal approximation theorem, we prove that PMvGE learns a wide class of similarity measures across views. Our likelihoodbased estimator enables efficient computation of non-linear transformations of data vectors in largescale datasets by minibatch SGD, and numerical experiments illustrate that PMvGE outperforms existing multi-view methods.",
"title": ""
},
{
"docid": "1a9086eb63bffa5a36fde268fb74c7a6",
"text": "This brief presents a simple reference circuit with channel-length modulation compensation to generate a reference voltage of 221 mV using subthreshold of MOSFETs at supply voltage of 0.85 V with power consumption of 3.3 muW at room temperature using TSMC 0.18-mum technology. The proposed circuit occupied in less than 0.0238 mm 2 achieves the reference voltage variation of 2 mV/V for supply voltage from 0.9 to 2.5V and about 6 mV of temperature variation in the range from -20degC to 120 degC. The agreement of simulation and measurement data is demonstrated",
"title": ""
},
{
"docid": "226582e50ef3e91b8325b140efea6a8e",
"text": "This special issue focuses on the theme of sensory processing dysfunction in schizophrenia. For more than 50 years, from approximately the time of Bleuler until the early 1960s, sensory function was considered one of the few preserved functions in schizophrenia (Javitt1). Fortunately, the last several decades have brought a renewed and accelerating interest in this topic. The articles included in the issue range from those addressing fundamental bases of sensory dysfunction (Brenner, Yoon, and Turetsky) to those that examine how elementary deficits in sensory processing affect the sensory experience of individuals with schizophrenia (Butler, Kantrowitz, and Coleman) to the question of how sensory-based treatments may lead to improvement in remediation strategies (Adcock). Although addressing only a small portion of the current complex and burgeoning literature on sensory impairments across modalities, the present articles provide a cross-section of the issues currently under investigation. These studies also underscore the severe challenges that individuals with schizophrenia face when trying to decode the complex world around them.",
"title": ""
},
{
"docid": "76a7b28b225781bc15b887569cd3181b",
"text": "Mangroves are defined by the presence of trees that mainly occur in the intertidal zone, between land and sea, in the (sub) tropics. The intertidal zone is characterised by highly variable environmental factors, such as temperature, sedimentation and tidal currents. The aerial roots of mangroves partly stabilise this environment and provide a substratum on which many species of plants and animals live. Above the water, the mangrove trees and canopy provide important habitat for a wide range of species. These include birds, insects, mammals and reptiles. Below the water, the mangrove roots are overgrown by epibionts such as tunicates, sponges, algae, and bivalves. The soft substratum in the mangroves forms habitat for various infaunal and epifaunal species, while the space between roots provides shelter and food for motile fauna such as prawns, crabs and fishes. Mangrove litter is transformed into detritus, which partly supports the mangrove food web. Plankton, epiphytic algae and microphytobenthos also form an important basis for the mangrove food web. Due to the high abundance of food and shelter, and low predation pressure, mangroves form an ideal habitat for a variety of animal species, during part or all of their life cycles. As such, mangroves may function as nursery habitats for (commercially important) crab, prawn and fish species, and support offshore fish populations and fisheries. Evidence for linkages between mangroves and offshore habitats by animal migrations is still scarce, but highly needed for management and conservation purposes. Here, we firstly reviewed the habitat function of mangroves by common taxa of terrestrial and marine animals. Secondly, we reviewed the literature with regard to the degree of interlinkage between mangroves and adjacent habitats, a research area which has received increasing attention in the last decade. Finally, we reviewed current insights into the degree to which mangrove litter fuels the mangrove food web, since this has been the subject of longstanding debate. # 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c35608f769b7844adc482ff9f7a79278",
"text": "Video annotation is an effective way to facilitate content-based analysis for videos. Automatic machine learning methods are commonly used to accomplish this task. Among these, active learning is one of the most effective methods, especially when the training data cost a great deal to obtain. One of the most challenging problems in active learning is the sample selection. Various sampling strategies can be used, such as uncertainty, density, and diversity, but it is difficult to strike a balance among them. In this paper, we provide a visualization-based batch mode sampling method to handle such a problem. An iso-contour-based scatterplot is used to provide intuitive clues for the representativeness and informativeness of samples and assist users in sample selection. A semisupervised metric learning method is incorporated to help generate an effective scatterplot reflecting the high-level semantic similarity for visual sample selection. Moreover, both quantitative and qualitative evaluations are provided to show that the visualization-based method can effectively enhance sample selection in active learning.",
"title": ""
},
{
"docid": "5bdbf3fa515da2c49c99740f3f6b420e",
"text": "Bearing failure is one of the foremost causes of breakdowns in rotating machinery and such failure can be catastrophic, resulting in costly downtime. One of the key issues in bearing prognostics is to detect the defect at its incipient stage and alert the operator before it develops into a catastrophic failure. Signal de-noising and extraction of the weak signature are crucial to bearing prognostics since the inherent deficiency of the measuring mechanism often introduces a great amount of noise to the signal. In addition, the signature of a defective bearing is spread across a wide frequency band and hence can easily become masked by noise and low frequency effects. As a result, robust methods are needed to provide more evident information for bearing performance assessment and prognostics. This paper introduces enhanced and robust prognostic methods for rolling element bearing including a wavelet filter based method for weak signature enhancement for fault identification and Self Organizing Map (SOM) based method for performance degradation assessment. The experimental results demonstrate that the bearing defects can be detected at an early stage of development when both optimal wavelet filter and SOM method are used. q 2004 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "638373cda30d5f08976a5d796283ed3e",
"text": "A coax-feed wideband dual-polarized patch antenna with low cross polarization and high port isolation is presented in this letter. The proposed antenna contains two pairs of T-shaped slots on the two bowtie-shaped patches separately. This structure changes the path of the current and keeps the cross polarization under -40 dB. By introducing two short pins, the isolation between the two ports remains more than 38 dB in the whole bandwidth with the front-to-back ratio better than 19 dB. Moreover, the proposed antenna achieving a 10-dB return loss bandwidth of 1.70-2.73 GHz has a compact structure, thus making it easy to be extended to form an array, which can be used as a base station antenna for PCS, UMTS, and WLAN/WiMAX applications.",
"title": ""
},
{
"docid": "a2d38448513e69f514f88eb852e76292",
"text": "It is cost-efficient for a tenant with a limited budget to establish a virtual MapReduce cluster by renting multiple virtual private servers (VPSs) from a VPS provider. To provide an appropriate scheduling scheme for this type of computing environment, we propose in this paper a hybrid job-driven scheduling scheme (JoSS for short) from a tenant's perspective. JoSS provides not only job-level scheduling, but also map-task level scheduling and reduce-task level scheduling. JoSS classifies MapReduce jobs based on job scale and job type and designs an appropriate scheduling policy to schedule each class of jobs. The goal is to improve data locality for both map tasks and reduce tasks, avoid job starvation, and improve job execution performance. Two variations of JoSS are further introduced to separately achieve a better map-data locality and a faster task assignment. We conduct extensive experiments to evaluate and compare the two variations with current scheduling algorithms supported by Hadoop. The results show that the two variations outperform the other tested algorithms in terms of map-data locality, reduce-data locality, and network overhead without incurring significant overhead. In addition, the two variations are separately suitable for different MapReduce-workload scenarios and provide the best job performance among all tested algorithms.",
"title": ""
},
{
"docid": "f590eac54deff0c65732cf9922db3b93",
"text": "Lichen planus (LP) is a common chronic inflammatory condition that can affect skin and mucous membranes, including the oral mucosa. Because of the anatomic, physiologic and functional peculiarities of the oral cavity, the oral variant of LP (OLP) requires specific evaluations in terms of diagnosis and management. In this comprehensive review, we discuss the current developments in the understanding of the etiopathogenesis, clinical-pathologic presentation, and treatment of OLP, and provide follow-up recommendations informed by recent data on the malignant potential of the disease as well as health economics evaluations.",
"title": ""
},
{
"docid": "0c509f98c65a48c31d32c0c510b4c13f",
"text": "An EM based straight forward design and pattern synthesis technique for series fed microstrip patch array antennas is proposed. An optimization of each antenna element (λ/4-transmission line, λ/2-patch, λ/4-transmission line) of the array is performed separately. By introducing an equivalent circuit along with an EM parameter extraction method, each antenna element can be optimized for its resonance frequency and taper amplitude, so to shape the aperture distribution for the cascaded elements. It will be shown that the array design based on the multiplication of element factor and array factor fails in case of patch width tapering, due to the inconsistency of the element patterns. To overcome this problem a line width tapering is suggested which keeps the element patterns nearly constant while still providing a broad amplitude taper range. A symmetric 10 element antenna array with a Chebyshev tapering (-20dB side lobe level) operating at 5.8 GHz has been designed, compared for the two tapering methods and validated with measurement.",
"title": ""
}
] |
scidocsrr
|
9f684a8271b2b7257280b3b104d49f50
|
New global optimization methods for ship design problems
|
[
{
"docid": "51b36c7d660d723fad2ee1911ab44295",
"text": "This paper presents an overview of our most recent results concerning the Particle Swarm Optimization (PSO) method. Techniques for the alleviation of local minima, and for detecting multiple minimizers are described. Moreover, results on the ability of the PSO in tackling Multiobjective, Minimax, Integer Programming and ℓ1 errors-in-variables problems, as well as problems in noisy and continuously changing environments, are reported. Finally, a Composite PSO, in which the heuristic parameters of PSO are controlled by a Differential Evolution algorithm during the optimization, is described, and results for many well-known and widely used test functions are given.",
"title": ""
},
{
"docid": "3293e4e0d7dd2e29505db0af6fbb13d1",
"text": "A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.",
"title": ""
}
] |
[
{
"docid": "956d46e88c40f772decafcbf1dc8e912",
"text": "Optogenetics is a powerful neuromodulatory tool with many unique advantages to explore functions of neuronal circuits in physiology and diseases. Yet, interpretation of cellular and behavioral responses following in vivo optogenetic manipulation of brain activities in experimental animals often necessitates identification of photoactivated neurons with high spatial resolution. Although tracing expression of immediate early genes (IEGs) provides a convenient approach, neuronal activation is not always followed by specific induction of widely used neuronal activity markers like c-fos, Egr1 and Arc. In this study we performed unilateral optogenetic stimulation of the striatum in freely moving transgenic mice that expressed a channelrhodopsin-2 (ChR2) variant ChR2(C128S) in striatal medium spiny neurons (MSNs). We found that in vivo blue light stimulation significantly altered electrophysiological activity of striatal neurons and animal behaviors. To identify photoactivated neurons we then analyzed IEG expression patterns using in situ hybridization. Upon light illumination an induction of c-fos was not apparent whereas another neuronal IEG Npas4 was robustly induced in MSNs ipsilaterally. Our results demonstrate that tracing Npas4 mRNA expression following in vivo optogenetic modulation can be an effective tool for reliable and sensitive identification of activated MSNs in the mouse striatum.",
"title": ""
},
{
"docid": "bd1fdbfcc0116dcdc5114065f32a883e",
"text": "Thousands of operations are annually guided with computer assisted surgery (CAS) technologies. As the use of these devices is rapidly increasing, the reliability of the devices becomes ever more critical. The problem of accuracy assessment of the devices has thus become relevant. During the past five years, over 200 hazardous situations have been documented in the MAUDE database during operations using these devices in the field of neurosurgery alone. Had the accuracy of these devices been periodically assessed pre-operatively, many of them might have been prevented. The technical accuracy of a commercial navigator enabling the use of both optical (OTS) and electromagnetic (EMTS) tracking systems was assessed in the hospital setting using accuracy assessment tools and methods developed by the authors of this paper. The technical accuracy was obtained by comparing the positions of the navigated tool tip with the phantom accuracy assessment points. Each assessment contained a total of 51 points and a region of surgical interest (ROSI) volume of 120x120x100 mm roughly mimicking the size of the human head. The error analysis provided a comprehensive understanding of the trend of accuracy of the surgical navigator modalities. This study showed that the technical accuracies of OTS and EMTS over the pre-determined ROSI were nearly equal. However, the placement of the particular modality hardware needs to be optimized for the surgical procedure. New applications of EMTS, which does not require rigid immobilization of the surgical area, are suggested.",
"title": ""
},
{
"docid": "d35cac8677052d0371d2863d54a59597",
"text": "A high-power short-pulse generator based on the diode step recovery phenomenon and high repetition rate discharges in a two-electrode gas discharge tube is presented. The proposed circuit is simple and low cost and driven by a low-power source. A full analysis of this generator is presented which, considering the nonlinear behavior of the gas tube, predicts the waveform of the output pulse. The proposed method has been shown to work properly by implementation of a kW-range prototype. Experimental measurements of the output pulse characteristics showed a rise time of 3.5 ns, with pulse repetition rate of 2.3 kHz for a 47- $\\Omega $ load. The input peak power was 2.4 W, which translated to about 0.65-kW output, showing more than 270 times increase in the pulse peak power. The efficiency of the prototype was 57%. The overall price of the employed components in the prototype was less than U.S. $2.0. An excellent agreement between the analytical and experimental test results was established. The analysis predicts that the proposed circuit can generate nanosecond pulses with more than 100-kW peak powers by using a subkW power supply.",
"title": ""
},
{
"docid": "8698c9a18ed9173b132d122237294963",
"text": "We propose Deep Feature Interpolation (DFI), a new data-driven baseline for automatic high-resolution image transformation. As the name suggests, DFI relies only on simple linear interpolation of deep convolutional features from pre-trained convnets. We show that despite its simplicity, DFI can perform high-level semantic transformations like make older/younger, make bespectacled, add smile, among others, surprisingly well–sometimes even matching or outperforming the state-of-the-art. This is particularly unexpected as DFI requires no specialized network architecture or even any deep network to be trained for these tasks. DFI therefore can be used as a new baseline to evaluate more complex algorithms and provides a practical answer to the question of which image transformation tasks are still challenging after the advent of deep learning.",
"title": ""
},
{
"docid": "8e92ade2f4096cbfabd51e018138c2f6",
"text": "Recent results by Martin et al. (2014) showed in 3D SPH simulations that tilted discs in binary systems can be unstable to the development of global, damped Kozai–Lidov (KL) oscillations in which the discs exchange tilt for eccentricity. We investigate the linear stability of KL modes for tilted inviscid discs under the approximations that the disc eccentricity is small and the disc remains flat. By using 1D equations, we are able to probe regimes of large ratios of outer to inner disc edge radii that are realistic for binary systems of hundreds of AU separations and are not easily probed by multidimensional simulations. For order unity binary mass ratios, KL instability is possible for a window of disc aspect ratios H/r in the outer parts of a disc that roughly scale as (nb/n) 2 < ∼ H/r< ∼ nb/n, for binary orbital frequency nb and orbital frequency n at the disc outer edge. We present a framework for understanding the zones of instability based on the determination of branches of marginally unstable modes. In general, multiple growing eccentric KL modes can be present in a disc. Coplanar apsidal-nodal precession resonances delineate instability branches. We determine the range of tilt angles for unstable modes as a function of disc aspect ratio. Unlike the KL instability for free particles that involves a critical (minimum) tilt angle, disc instability is possible for any nonzero tilt angle depending on the disc aspect ratio.",
"title": ""
},
{
"docid": "10832dce0cf5d242f32d72da35e0b1c1",
"text": "Object detection in high resolution remote sensing images is a fundamental and challenging problem in the field of remote sensing imagery analysis for civil and military application due to the complex neighboring environments, which can cause the recognition algorithms to mistake irrelevant ground objects for target objects. Deep Convolution Neural Network(DCNN) is the hotspot in object detection for its powerful ability of feature extraction and has achieved state-of-the-art results in Computer Vision. Common pipeline of object detection based on DCNN consists of region proposal, CNN feature extraction, region classification and post processing. YOLO model frames object detection as a regression problem, using a single CNN predicts bounding boxes and class probabilities in an end-to-end way and make the predict faster. In this paper, a YOLO based model is used for object detection in high resolution sensing images. The experiments on NWPU VHR-10 dataset and our airport/airplane dataset gain from GoogleEarth show that, compare with the common pipeline, the proposed model speeds up the detection process and have good accuracy.",
"title": ""
},
{
"docid": "4f29effabf3e7c166b29eec240ac556a",
"text": "The training algorithm of classical twin support vector regression (TSVR) can be attributed to the solution of a pair of quadratic programming problems (QPPs) with inequality constraints in the dual space. However, this solution is affected by time and memory constraints when dealing with large datasets. In this paper, we present a least squares version for TSVR in the primal space, termed primal least squares TSVR (PLSTSVR). By introducing the least squares method, the inequality constraints of TSVR are transformed into equality constraints. Furthermore, we attempt to directly solve the two QPPs with equality constraints in the primal space instead of the dual space; thus, we need only to solve two systems of linear equations instead of two QPPs. Experimental results on artificial and benchmark datasets show that PLSTSVR has comparable accuracy to TSVR but with considerably less computational time. We further investigate its validity in predicting the opening price of stock.",
"title": ""
},
{
"docid": "787979d6c1786f110ff7a47f09b82907",
"text": "Imbalance settlement markets are managed by the system operators and provide a mechanism for settling the inevitable discrepancies between contractual agreements and physical delivery. In European power markets, settlements schemes are mainly based on heuristic penalties. These arrangements have disadvantages: First, they do not provide transparency about the cost of the reserve capacity that the system operator may have obtained ahead of time, nor about the cost of the balancing energy that is actually deployed. Second, they can be gamed if market participants use the imbalance settlement as an opportunity for market arbitrage, for example if market participants use balancing energy to avoid higher costs through regular trade on illiquid energy markets. Third, current practice hinders the market-based integration of renewable energy and the provision of financial incentives for demand response through rigid penalty rules. In this paper we try to remedy these disadvantages by proposing an imbalance settlement procedure with an incentive compatible cost allocation scheme for reserve capacity and deployed energy. Incentive compatible means that market participants voluntarily and truthfully state their valuation of ancillary services. We show that this approach guarantees revenue sufficiency for the system operator and provides financial incentives for balance responsible parties to keep imbalances close to zero.",
"title": ""
},
{
"docid": "38396b59bb055eae4048f47da90e5676",
"text": "In this contribution, a novel un-differenced (UD) (PPP-RTK) concept, i.e. a synthesis of Precise Point Positioning and Network-based Real-Time Kinematic concept, is introduced. In the first step of our PPP-RTK approach, the UD GNSS observations from a regional reference network are processed based upon re-parameterised observation equations, corrections for satellite clocks, phase biases and (interpolated) atmospheric delays are calculated and provided to users. In the second step, these network-based corrections are used at the user site to restore the integer nature of his UD phase ambiguities, which makes rapid and high accuracy user positioning possible. The proposed PPP-RTK approach was tested using two GPS CORS networks with inter-station distances ranging from 60 to 100 km. The first test network is the northern China CORS network and the second is the Australian Perth CORS network. In the test of the first network, a dual-frequency PPP-RTK user receiver was used, while in the test of the second network, a low-cost, single-frequency PPP-RTK user receiver was used. The performance of fast ambiguity resolution and the high accuracy positioning of the PPP-RTK results are demonstrated.",
"title": ""
},
{
"docid": "f157b3fb65d4ce1df6d6bb549b020fa0",
"text": "We have developed a reversible method to convert color graphics and pictures to gray images. The method is based on mapping colors to low-visibility high-frequency textures that are applied onto the gray image. After receiving a monochrome textured image, the decoder can identify the textures and recover the color information. More specifically, the image is textured by carrying a subband (wavelet) transform and replacing bandpass subbands by the chrominance signals. The low-pass subband is the same as that of the luminance signal. The decoder performs a wavelet transform on the received gray image and recovers the chrominance channels. The intent is to print color images with black and white printers and to be able to recover the color information afterwards. Registration problems are discussed and examples are presented.",
"title": ""
},
{
"docid": "605d2fed747be856d0ae47ddb559d177",
"text": "Leukemia is a malignant neoplasm of the blood or bone marrow that affects both children and adults and remains a leading cause of death around the world. Acute lymphoblastic leukemia (ALL) is the most common type of leukemia and is more common among children and young adults. ALL diagnosis through microscopic examination of the peripheral blood and bone marrow tissue samples is performed by hematologists and has been an indispensable technique long since. However, such visual examinations of blood samples are often slow and are also limited by subjective interpretations and less accurate diagnosis. The objective of this work is to improve the ALL diagnostic accuracy by analyzing morphological and textural features from the blood image using image processing. This paper aims at proposing a quantitative microscopic approach toward the discrimination of lymphoblasts (malignant) from lymphocytes (normal) in stained blood smear and bone marrow samples and to assist in the development of a computer-aided screening of ALL. Automated recognition of lymphoblasts is accomplished using image segmentation, feature extraction, and classification over light microscopic images of stained blood films. Accurate and authentic diagnosis of ALL is obtained with the use of improved segmentation methodology, prominent features, and an ensemble classifier, facilitating rapid screening of patients. Experimental results are obtained and compared over the available image data set. It is observed that an ensemble of classifiers leads to 99 % accuracy in comparison with other standard classifiers, i.e., naive Bayesian (NB), K-nearest neighbor (KNN), multilayer perceptron (MLP), radial basis functional network (RBFN), and support vector machines (SVM).",
"title": ""
},
{
"docid": "4afa66aeaf18fae2b29a0d4c855746dd",
"text": "In this work, we propose a technique that utilizes a fully convolutional network (FCN) to localize image splicing attacks. We first evaluated a single-task FCN (SFCN) trained only on the surface label. Although the SFCN is shown to provide superior performance over existing methods, it still provides a coarse localization output in certain cases. Therefore, we propose the use of a multi-task FCN (MFCN) that utilizes two output branches for multi-task learning. One branch is used to learn the surface label, while the other branch is used to learn the edge or boundary of the spliced region. We trained the networks using the CASIA v2.0 dataset, and tested the trained models on the CASIA v1.0, Columbia Uncompressed, Carvalho, and the DARPA/NIST Nimble Challenge 2016 SCI datasets. Experiments show that the SFCN and MFCN outperform existing splicing localization algorithms, and that the MFCN can achieve finer localization than the SFCN.",
"title": ""
},
{
"docid": "e735ddafd0dc48ea48e6ccb85ff96129",
"text": "Convolutional Neural Networks (CNNs) have been successfully used for many computer vision applications. It would be beneficial to these applications if the computational workload of CNNs could be reduced. In this work we analyze the linear algebraic properties of CNNs and propose an algorithmic modification to reduce their computational workload. An up to a 47% reduction can be achieved without any change in the image recognition results or the addition of any hardware accelerators.",
"title": ""
},
{
"docid": "7da0d66b512c79ebc00d676cac04eefc",
"text": "Social psychologists have often followed other scientists in treating religiosity primarily as a set of beliefs held by individuals. But, beliefs are only one facet of this complex and multidimensional construct. The authors argue that social psychology can best contribute to scholarship on religion by being relentlessly social. They begin with a social-functionalist approach in which beliefs, rituals, and other aspects of religious practice are best understood as means of creating a moral community. They discuss the ways that religion is intertwined with five moral foundations, in particular the group-focused \"binding\" foundations of Ingroup/loyalty, Authority/respect, Purity/sanctity. The authors use this theoretical perspective to address three mysteries about religiosity, including why religious people are happier, why they are more charitable, and why most people in the world are religious.",
"title": ""
},
{
"docid": "4a72f9b04ba1515c0d01df0bc9b60ed7",
"text": "Distributed generators (DGs) sometimes provide the lowest cost solution to handling low-voltage or overload problems. In conjunction with handling such problems, a DG can be placed for optimum efficiency or optimum reliability. Such optimum placements of DGs are investigated. The concept of segments, which has been applied in previous reliability studies, is used in the DG placement. The optimum locations are sought for time-varying load patterns. It is shown that the circuit reliability is a function of the loading level. The difference of DG placement between optimum efficiency and optimum reliability varies under different load conditions. Observations and recommendations concerning DG placement for optimum reliability and efficiency are provided in this paper. Economic considerations are also addressed.",
"title": ""
},
{
"docid": "ae5fef3ebb145761efe9bca44a9cc154",
"text": "Social media has become an integral part of people’s lives. People share their daily activities, experiences, interests, and opinions on social networking websites, opening the floodgates of information that can be analyzed by marketers as well as consumers. However, low barriers to publication and easy-to-use interactive interfaces have contributed to various information quality (IQ) problems in the social media that has made obtaining timely, accurate and relevant information a challenge. Approaches such as data mining and machine learning have only begun to address these challenges. Social media has its own distinct characteristics that warrant specialized approaches. In this paper, we study the unique characteristics of social media and address how existing methods fall short in mitigating the IQ issues it faces. Despite being extensively studied, IQ theories have yet to be embraced in tackling IQ challenges in social media. We redefine social media challenges as IQ challenges. We propose an IQ and Total Data Quality Management (TDQM) approach to the Social media challenges. We map the IQ dimensions, social media categories, social media challenges, and IQ tools in order to bridge the gap between the IQ framework and its application in addressing IQ challenges in social media.",
"title": ""
},
{
"docid": "11d1a8d8cd9fdabfbdc77d4a0accf007",
"text": "Blockchain technology like Bitcoin is a rapidly growing field of research which has found a wide array of applications. However, the power consumption of the mining process in the Bitcoin blockchain alone is estimated to be at least as high as the electricity consumption of Ireland which constitutes a serious liability to the widespread adoption of blockchain technology. We propose a novel instantiation of a proof of human-work which is a cryptographic proof that an amount of human work has been exercised, and show its use in the mining process of a blockchain. Next to our instantiation there is only one other instantiation known which relies on indistinguishability obfuscation, a cryptographic primitive whose existence is only conjectured. In contrast, our construction is based on the cryptographic principle of multiparty computation (which we use in a black box manner) and thus is the first known feasible proof of human-work scheme. Our blockchain mining algorithm called uMine, can be regarded as an alternative energy-efficient approach to mining.",
"title": ""
},
{
"docid": "147f7f8f80fbf898fb7f0ead044fa5ca",
"text": "Mirjalili in 2015, proposed a new nature-inspired meta-heuristic Moth Flame Optimization (MFO). It is inspired by the characteristics of a moth in the dark night to either fly straight towards the moon or fly in a spiral path to arrive at a nearby artificial light source. It aims to reach a brighter destination which is treated as a global solution for an optimization problem. In this paper, the original MFO is suitably modified to handle multi-objective optimization problems termed as MOMFO. Typically concepts like the introduction of archive grid, coordinate based distance for sorting, non-dominance of solutions make the proposed approach different from the original single objective MFO. The performance of proposed MOMFO is demonstrated on six benchmark mathematical function optimization problems regarding superior accuracy and lower computational time achieved compared to Non-dominated sorting genetic algorithm-II (NSGA-II) and Multi-objective particle swarm optimization (MOPSO).",
"title": ""
},
{
"docid": "73f6ba4ad9559cd3c6f7a88223e4b556",
"text": "A recurring problem faced when training neural networks is that there is typically not enough data to maximize the generalization capability of deep neural networks. There are many techniques to address this, including data augmentation, dropout, and transfer learning. In this paper, we introduce an additional method, which we call smart augmentation and we show how to use it to increase the accuracy and reduce over fitting on a target network. Smart augmentation works, by creating a network that learns how to generate augmented data during the training process of a target network in a way that reduces that networks loss. This allows us to learn augmentations that minimize the error of that network. Smart augmentation has shown the potential to increase accuracy by demonstrably significant measures on all data sets tested. In addition, it has shown potential to achieve similar or improved performance levels with significantly smaller network sizes in a number of tested cases.",
"title": ""
},
{
"docid": "cefe0f801917dfacae48a947199256a7",
"text": "Web search has emerged as one of the most important applications on the internet, with several search engines available to the users. There is a common practice among these search engines to log and analyse the user queries, which leads to serious privacy implications. One well known solution to search privacy involves issuing the queries via an anonymizing network, such as Tor, thereby hiding one's identity from the search engine. A fundamental problem with this solution, however, is that user queries are still obviously revealed to the search engine, although they are \"mixed\" among the queries issued by other users of the same anonymization service.\n In this paper, we consider the problem of identifying the queries of a user of interest (UOI) within a pool of queries received by a search engine over an anonymizing network. We demonstrate that an adversarial search engine can extract the UOI's queries, when it is equipped with only a short-term user search query history, by utilizing only the query content information and off-the-shelf machine learning classifiers. More specifically, by treating a selected set of 60 users --- from the publicly-available AOL search logs --- as the users of interest performing web search over an anonymizing network, we show that each user's queries can be identified with 25.95% average accuracy, when mixed with queries of 99 other users of the anonymization service. This average accuracy drops to 18.95% when queries of 999 other users of the anonymization service are mixed together. Though the average accuracies are not so high, our results indicate that few users of interest could be identified with accuracies as high as 80--98%, even when their queries are mixed among queries of 999 other users. Our results cast serious doubts on the effectiveness of anonymizing web search queries by means of anonymizing networks.",
"title": ""
}
] |
scidocsrr
|
e8f7ea82049f1d52c4b99239d3a193f0
|
Geometric modeling using octree encoding
|
[
{
"docid": "d004f3eb6dad2276a8754612ef977ccc",
"text": "Most results in the field of algorithm design are single algorithms that solve single problems. In this paper we discuss multidimensional divide-and-conquer, an algorithmic paradigm that can be instantiated in many different ways to yield a number of algorithms and data structures for multidimensional problems. We use this paradigm to give best-known solutions to such problems as the ECDF, maxima, range searching, closest pair, and all nearest neighbor problems. The contributions of the paper are on two levels. On the first level are the particular algorithms and data structures given by applying the paradigm. On the second level is the more novel contribution of this paper: a detailed study of an algorithmic paradigm that is specific enough to be described precisely yet general enough to solve a wide variety of problems.",
"title": ""
}
] |
[
{
"docid": "7c9cd59a4bb14f678c57ad438f1add12",
"text": "This paper proposes a new ensemble method built upon a deep neural network architecture. We use a set of meteorological models for rain forecast as base predictors. Each meteorological model is provided to a channel of the network and, through a convolution operator, the prediction models are weighted and combined. As a result, the predicted value produced by the ensemble depends on both the spatial neighborhood and the temporal pattern. We conduct some computational experiments in order to compare our approach to other ensemble methods widely used for daily rainfall prediction. The results show that our architecture based on ConvLSTM networks is a strong candidate to solve the problem of combining predictions in a spatiotemporal context.",
"title": ""
},
{
"docid": "a162d5e622bb7fa8f281e7c9b5943346",
"text": "The Legionellae are Gram-negative bacteria able to survive and replicate in a wide range of protozoan hosts in natural environments, but they also occur in man-made aquatic systems, which are the major source of infection. After transmission to humans via aerosols, Legionella spp. can cause pneumonia (Legionnaires’ disease) or influenza-like respiratory infections (Pontiac fever). In children, Legionnaires’ disease is uncommon and is mainly diagnosed in children with immunosuppression. The clinical picture of Legionella pneumonia does not allow differentiation from pneumonia caused by others pathogens. The key to diagnosis is performing appropriate microbiological testing. The clinical presentation and the natural course of Legionnaires’ disease in children are not clear due to an insufficient number of samples, but morbidity and mortality caused by this infection are extremely high. The mortality rate for legionellosis depends on the promptness of an appropriate antibiotic therapy. Fluoroquinolones are the most efficacious drugs against Legionella. A combination of these drugs with macrolides seems to be promising in the treatment of immunosuppressed patients and individuals with severe legionellosis. Although all Legionella species are considered potentially pathogenic for humans, Legionella pneumophila is the etiological agent responsible for most reported cases of community-acquired and nosocomial legionellosis.",
"title": ""
},
{
"docid": "ba58ba95879516c00d91cf75754eb131",
"text": "In order to assess the current knowledge on the therapeutic potential of cannabinoids, a meta-analysis was performed through Medline and PubMed up to July 1, 2005. The key words used were cannabis, marijuana, marihuana, hashish, hashich, haschich, cannabinoids, tetrahydrocannabinol, THC, dronabinol, nabilone, levonantradol, randomised, randomized, double-blind, simple blind, placebo-controlled, and human. The research also included the reports and reviews published in English, French and Spanish. For the final selection, only properly controlled clinical trials were retained, thus open-label studies were excluded. Seventy-two controlled studies evaluating the therapeutic effects of cannabinoids were identified. For each clinical trial, the country where the project was held, the number of patients assessed, the type of study and comparisons done, the products and the dosages used, their efficacy and their adverse effects are described. Cannabinoids present an interesting therapeutic potential as antiemetics, appetite stimulants in debilitating diseases (cancer and AIDS), analgesics, and in the treatment of multiple sclerosis, spinal cord injuries, Tourette's syndrome, epilepsy and glaucoma.",
"title": ""
},
{
"docid": "cd549297cb4644aaf24c28b5bbdadb24",
"text": "This study identifies the difference in the perceptions of academic stress and reaction to stressors based on gender among first year university students in Nigeria. Student Academic Stress Scale (SASS) was the instrument used to collect data from 2,520 first year university students chosen through systematic random sampling from Universities in the six geo-political zones of Nigeria. To determine gender differences among the respondents, independent samples t-test was used via SPSS version 15.0. The results of research showed that male and female respondents differed significantly in their perceptions of frustrations, financials, conflicts and selfexpectations stressors but did not significantly differ in their perceptions of pressures and changesrelated stressors. Generally, no significant difference was found between male and female respondents in their perceptions of academic stressors, however using the mean scores as basis, female respondents scored higher compared to male respondents. Regarding reaction to stressors, male and female respondents differ significantly in their perceptions of emotional and cognitive reactions but did not differ significantly in their perceptions of physiological and behavioural reaction to stressors.",
"title": ""
},
{
"docid": "8dadd14f0de2a17ca5066703a19f1aff",
"text": "Human gait provides a way of locomotion by combined efforts of the brain, nerves, and muscles. Conventionally, the human gait has been considered subjectively through visual observations but now with advanced technology, human gait analysis can be done objectively and empirically for the better quality of life. In this paper, the literature of the past survey on gait analysis has been discussed. This is followed by discussion on gait analysis methods. Vision-based human motion analysis has the potential to provide an inexpensive, non-obtrusive solution for the estimation of body poses. Data parameters for gait analysis have been discussed followed by preprocessing steps. Then the implemented machine learning techniques have been discussed in detail. The objective of this survey paper is to present a comprehensive analysis of contemporary gait analysis. This paper presents a framework (parameters, techniques, available database, machine learning techniques, etc.) for researchers in identifying the infertile areas of gait analysis. The authors expect that the overview presented in this paper will help advance the research in the field of gait analysis. Introduction to basic taxonomies of human gait is presented. Applications in clinical diagnosis, geriatric care, sports, biometrics, rehabilitation, and industrial area are summarized separately. Available machine learning techniques are also presented with available datasets for gait analysis. Future prospective in gait analysis are discussed in the end.",
"title": ""
},
{
"docid": "e473e6b4c5d825582f3a5afe00a005de",
"text": "This paper explores and quantifies garbage collection behavior for three whole heap collectors and generational counterparts: copying semi-space, mark-sweep, and reference counting, the canonical algorithms from which essentially all other collection algorithms are derived. Efficient implementations in MMTk, a Java memory management toolkit, in IBM's Jikes RVM share all common mechanisms to provide a clean experimental platform. Instrumentation separates collector and program behavior, and performance counters measure timing and memory behavior on three architectures.Our experimental design reveals key algorithmic features and how they match program characteristics to explain the direct and indirect costs of garbage collection as a function of heap size on the SPEC JVM benchmarks. For example, we find that the contiguous allocation of copying collectors attains significant locality benefits over free-list allocators. The reduced collection costs of the generational algorithms together with the locality benefit of contiguous allocation motivates a copying nursery for newly allocated objects. These benefits dominate the overheads of generational collectors compared with non-generational and no collection, disputing the myth that \"no garbage collection is good garbage collection.\" Performance is less sensitive to the mature space collection algorithm in our benchmarks. However the locality and pointer mutation characteristics for a given program occasionally prefer copying or mark-sweep. This study is unique in its breadth of garbage collection algorithms and its depth of analysis.",
"title": ""
},
{
"docid": "ef2996a04c819777cc4b88c47f502c21",
"text": "Bioprinting is an emerging technology for constructing and fabricating artificial tissue and organ constructs. This technology surpasses the traditional scaffold fabrication approach in tissue engineering (TE). Currently, there is a plethora of research being done on bioprinting technology and its potential as a future source for implants and full organ transplantation. This review paper overviews the current state of the art in bioprinting technology, describing the broad range of bioprinters and bioink used in preclinical studies. Distinctions between laser-, extrusion-, and inkjet-based bioprinting technologies along with appropriate and recommended bioinks are discussed. In addition, the current state of the art in bioprinter technology is reviewed with a focus on the commercial point of view. Current challenges and limitations are highlighted, and future directions for next-generation bioprinting technology are also presented. [DOI: 10.1115/1.4028512]",
"title": ""
},
{
"docid": "9e7e7a3c4ec5db247cfe3f61b1dbceaa",
"text": "Digital information displays are becoming more common in public spaces such as museums, galleries, and libraries. However, the public nature of these locations requires special considerations concerning the design of information visualization in terms of visual representations and interaction techniques. We discuss the potential for, and challenges of, information visualization in the museum context based on our practical experience with EMDialog, an interactive information presentation that was part of the Emily Carr exhibition at the Glenbow Museum in Calgary. EMDialog visualizes the diverse and multi-faceted discourse about this Canadian artist with the goal to both inform and provoke discussion. It provides a visual exploration environment that offers interplay between two integrated visualizations, one for information access along temporal, and the other along contextual dimensions. We describe the results of an observational study we conducted at the museum that revealed the different ways visitors approached and interacted with EMDialog, as well as how they perceived this form of information presentation in the museum context. Our results include the need to present information in a manner sufficiently attractive to draw attention and the importance of rewarding passive observation as well as both short- and longer term information exploration.",
"title": ""
},
{
"docid": "6c504c7a69dba18e8cbc6a3678ab4b09",
"text": "This letter presents a compact model for flexible analog/RF circuits design with amorphous indium-gallium-zinc oxide thin-film transistors (TFTs). The model is based on the MOSFET LEVEL=3 SPICE model template, where parameters are fitted to measurements for both dc and ac characteristics. The proposed TFT compact model shows good scalability of the drain current for device channel lengths ranging from 50 to 3.6 μm. The compact model is validated by comparing measurements and simulations of various TFT amplifier circuits. These include a two-stage cascode amplifier showing 10 dB of voltage gain and 2.9 MHz of bandwidth.",
"title": ""
},
{
"docid": "e519d705cd52b4eb24e4e936b849b3ce",
"text": "Computer manufacturers spend a huge amount of time, resources, and money in designing new systems and newer configurations, and their ability to reduce costs, charge competitive prices and gain market share depends on how good these systems perform. In this work, we develop predictive models for estimating the performance of systems by using performance numbers from only a small fraction of the overall design space. Specifically, we first develop three models, two based on artificial neural networks and another based on linear regression. Using these models, we analyze the published Standard Performance Evaluation Corporation (SPEC) benchmark results and show that by using the performance numbers of only 2% and 5% of the machines in the design space, we can estimate the performance of all the systems within 9.1% and 4.6% on average, respectively. Then, we show that the performance of future systems can be estimated with less than 2.2% error rate on average by using the data of systems from a previous year. We believe that these tools can accelerate the design space exploration significantly and aid in reducing the corresponding research/development cost and time-to-market.",
"title": ""
},
{
"docid": "4f355aa038e56b9449181eb780e05484",
"text": "Composite indices or pooled indices are useful tools for the evaluation of disease activity in patients with rheumatoid arthritis (RA). They allow the integration of various aspects of the disease into a single numerical value, and may therefore facilitate consistent patient care and improve patient compliance, which both can lead to improved outcomes. The Simplified Disease Activity Index (SDAI) and the Clinical Disease Activity Index (CDAI) are two new tools for the evaluation of disease activity in RA. They have been developed to provide physicians and patients with simple and more comprehensible instruments. Moreover, the CDAI is the only composite index that does not incorporate an acute phase response and can therefore be used to conduct a disease activity evaluation essentially anytime and anywhere. These two new tools have not been developed to replace currently available instruments such as the DAS28, but rather to provide options for different environments. The comparative construct, content, and discriminant validity of all three indices--the DAS28, the SDAI, and the CDAI--allow physicians to base their choice of instrument on their infrastructure and their needs, and all of them can also be used in clinical trials.",
"title": ""
},
{
"docid": "70fac5e4b287e8f47a4eec44f5c36373",
"text": "In this paper, we present a learning based approach to depth fusion, i.e., dense 3D reconstruction from multiple depth images. The most common approach to depth fusion is based on averaging truncated signed distance functions, which was originally proposed by Curless and Levoy in 1996. While this method is simple and provides great results, it is not able to reconstruct (partially) occluded surfaces and requires a large number frames to filter out sensor noise and outliers. Motivated by the availability of large 3D model repositories and recent advances in deep learning, we present a novel 3D CNN architecture that learns to predict an implicit surface representation from the input depth maps. Our learning based method significantly outperforms the traditional volumetric fusion approach in terms of noise reduction and outlier suppression. By learning the structure of real world 3D objects and scenes, our approach is further able to reconstruct occluded regions and to fill in gaps in the reconstruction. We demonstrate that our learning based approach outperforms both vanilla TSDF fusion as well as TV-L1 fusion on the task of volumetric fusion. Further, we demonstrate state-of-the-art 3D shape completion results.",
"title": ""
},
{
"docid": "83d788ffb340b89c482965b96d6803c2",
"text": "A dead-time compensation method in voltage-source inverters (VSIs) is proposed. The method is based on a feedforward approach which produces compensating signals obtained from those of the I/sub d/-I/sub q/ current and primary angular frequency references in a rotating reference (d-q) frame. The method features excellent inverter output voltage distortion correction for both fundamental and harmonic components. The correction is not affected by the magnitude of the inverter output voltage or current distortions. Since this dead-time compensation method allows current loop calculations in the d-q frame at a slower sampling rate with a conventional microprocessor than calculations in a stationary reference frame, a fully digital, vector-controlled speed regulator with just a current component loop is realized for PWM (pulsewidth modulation) VSIs. Test results obtained for the compression method are described.<<ETX>>",
"title": ""
},
{
"docid": "aa7fe787492aa8aa3d50f748b2df17cb",
"text": "Smart Contracts sind rechtliche Vereinbarungen, die sich IT-Technologien bedienen, um die eigene Durchsetzbarkeit sicherzustellen. Es werden durch Smart Contracts autonom Handlungen initiiert, die zuvor vertraglich vereinbart wurden. Beispielsweise können vereinbarte Zahlungen von Geldbeträgen selbsttätig veranlasst werden. Basieren Smart Contracts auf Blockchains, ergeben sich per se vertrauenswürdige Transaktionen. Eine dritte Instanz zur Sicherstellung einer korrekten Transaktion, beispielsweise eine Bank oder ein virtueller Marktplatz, wird nicht benötigt. Echte Peer-to-Peer-Verträge sind möglich. Ein weiterer Anwendungsfall von Smart Contracts ist denkbar. Smart Contracts könnten statt Vereinbarungen von Vertragsparteien gesetzliche Regelungen ausführen. Beispielsweise die Regelungen des Patentgesetzes könnten durch einen Smart Contract implementiert werden. Die Verwaltung von IPRs (Intellectual Property Rights) entsprechend den gesetzlichen Regelungen würde dadurch sichergestellt werden. Bislang werden Spezialisten, beispielsweise Patentanwälte, benötigt, um eine akkurate Administration von Schutzrechten zu gewährleisten. Smart Contracts könnten die Dienstleistungen dieser Spezialisten auf dem Gebiet des geistigen Eigentums obsolet werden lassen.",
"title": ""
},
{
"docid": "b045e59c52ff1d555f79831f96309d5c",
"text": "In this paper, we show that for several clustering problems one can extract a small set of points, so that using those core-sets enable us to perform approximate clustering efficiently. The surprising property of those core-sets is that their size is independent of the dimension.Using those, we present a (1+ ε)-approximation algorithms for the k-center clustering and k-median clustering problems in Euclidean space. The running time of the new algorithms has linear or near linear dependency on the number of points and the dimension, and exponential dependency on 1/ε and k. As such, our results are a substantial improvement over what was previously known.We also present some other clustering results including (1+ ε)-approximate 1-cylinder clustering, and k-center clustering with outliers.",
"title": ""
},
{
"docid": "235899b940c658316693d0a481e2d954",
"text": "BACKGROUND\nImmunohistochemical markers are often used to classify breast cancer into subtypes that are biologically distinct and behave differently. The aim of this study was to estimate mortality for patients with the major subtypes of breast cancer as classified using five immunohistochemical markers, to investigate patterns of mortality over time, and to test for heterogeneity by subtype.\n\n\nMETHODS AND FINDINGS\nWe pooled data from more than 10,000 cases of invasive breast cancer from 12 studies that had collected information on hormone receptor status, human epidermal growth factor receptor-2 (HER2) status, and at least one basal marker (cytokeratin [CK]5/6 or epidermal growth factor receptor [EGFR]) together with survival time data. Tumours were classified as luminal and nonluminal tumours according to hormone receptor expression. These two groups were further subdivided according to expression of HER2, and finally, the luminal and nonluminal HER2-negative tumours were categorised according to expression of basal markers. Changes in mortality rates over time differed by subtype. In women with luminal HER2-negative subtypes, mortality rates were constant over time, whereas mortality rates associated with the luminal HER2-positive and nonluminal subtypes tended to peak within 5 y of diagnosis and then decline over time. In the first 5 y after diagnosis the nonluminal tumours were associated with a poorer prognosis, but over longer follow-up times the prognosis was poorer in the luminal subtypes, with the worst prognosis at 15 y being in the luminal HER2-positive tumours. Basal marker expression distinguished the HER2-negative luminal and nonluminal tumours into different subtypes. These patterns were independent of any systemic adjuvant therapy.\n\n\nCONCLUSIONS\nThe six subtypes of breast cancer defined by expression of five markers show distinct behaviours with important differences in short term and long term prognosis. Application of these markers in the clinical setting could have the potential to improve the targeting of adjuvant chemotherapy to those most likely to benefit. The different patterns of mortality over time also suggest important biological differences between the subtypes that may result in differences in response to specific therapies, and that stratification of breast cancers by clinically relevant subtypes in clinical trials is urgently required.",
"title": ""
},
{
"docid": "7098df58dc9f86c9b462610f03bd97a6",
"text": "The advent of the computer and computer science, and in particular virtual reality, offers new experiment possibilities with numerical simulations and introduces a new type of investigation for the complex systems study : the in virtuo experiment. This work lies on the framework of multi-agent systems. We propose a generic model for systems biology based on reification of the interactions, on a concept of organization and on a multi-model approach. By ``reification'' we understand that interactions are considered as autonomous agents. The aim has been to combine the systemic paradigm and the virtual reality to provide an application able to collect, simulate, experiment and understand the knowledge owned by different biologists working around an interdisciplinary subject. In that case, we have been focused on the urticaria disease understanding. The method permits to integrate different natures of model. We have modeled biochemical reactions, molecular diffusion, cell organisations and mechanical interactions. It also permits to embed different expert system modeling methods like fuzzy cognitive maps.",
"title": ""
},
{
"docid": "c91fe61e7ef90867377940644b566d93",
"text": "The adoption of Learning Management Systems to create virtual learning communities is a unstructured form of allowing collaboration that is rapidly growing. Compared to other systems that structure interactions, these environments provide data of the interaction performed at a very low level. For assessment purposes, this fact poses some difficulties to derive higher lever indicators of collaboration. In this paper we propose to shape the analysis problem as a data mining task. We suggest that the typical data mining cycle bears many resemblances with proposed models for collaboration management. We present some preliminary experiments using clustering to discover patterns reflecting user behaviors. Results are very encouraging and suggest several research directions.",
"title": ""
},
{
"docid": "56206ddb152c3a09f3e28a6ffa703cd6",
"text": "This chapter introduces the operation and control of a Doubly-fed Induction Generator (DFIG) system. The DFIG is currently the system of choice for multi-MW wind turbines. The aerodynamic system must be capable of operating over a wide wind speed range in order to achieve optimum aerodynamic efficiency by tracking the optimum tip-speed ratio. Therefore, the generator’s rotor must be able to operate at a variable rotational speed. The DFIG system therefore operates in both suband super-synchronous modes with a rotor speed range around the synchronous speed. The stator circuit is directly connected to the grid while the rotor winding is connected via slip-rings to a three-phase converter. For variable-speed systems where the speed range requirements are small, for example ±30% of synchronous speed, the DFIG offers adequate performance and is sufficient for the speed range required to exploit typical wind resources. An AC-DC-AC converter is included in the induction generator rotor circuit. The power electronic converters need only be rated to handle a fraction of the total power – the rotor power – typically about 30% nominal generator power. Therefore, the losses in the power electronic converter can be reduced, compared to a system where the converter has to handle the entire power, and the system cost is lower due to the partially-rated power electronics. This chapter will introduce the basic features and normal operation of DFIG systems for wind power applications basing the description on the standard induction generator. Different aspects that will be described include their variable-speed feature, power converters and their associated control systems, and application issues.",
"title": ""
},
{
"docid": "004743271b82054bae970bd0d17c1bd3",
"text": "In 1934, Jordan et al. gave a necessary algebraic condition, the Jordan identity, for a sensible theory of quantum mechanics. All but one of the algebras that satisfy this condition can be described by Hermitian matrices over the complexes or quaternions. The remaining, exceptional Jordan algebra can be described by 3 × 3 Hermitian matrices over the octonions. We first review properties of the octonions and the exceptional Jordan algebra, including our previous work on the octonionic Jordan eigenvalue problem. We then examine a particular real, noncompact form of the Lie group E6, which preserves determinants in the exceptional Jordan algebra. Finally, we describe a possible symmetry-breaking scenario within E6: first choose one of the octonionic directions to be special, then choose one of the 2× 2 submatrices inside the 3× 3 matrices to be special. Making only these two choices, we are able to describe many properties of leptons in a natural way. We further speculate on the ways in which quarks might be similarly encoded.",
"title": ""
}
] |
scidocsrr
|
5dc422d341c24e0e150d2f58755292e9
|
Attribute extraction and scoring: A probabilistic approach
|
[
{
"docid": "b1a08b10ea79a250a62030a2987b67a6",
"text": "Most text mining tasks, including clustering and topic detection, are based on statistical methods that treat text as bags of words. Semantics in the text is largely ignored in the mining process, and mining results often have low interpretability. One particular challenge faced by such approaches lies in short text understanding, as short texts lack enough content from which statistical conclusions can be drawn easily. In this paper, we improve text understanding by using a probabilistic knowledgebase that is as rich as our mental world in terms of the concepts (of worldly facts) it contains. We then develop a Bayesian inference mechanism to conceptualize words and short text. We conducted comprehensive experiments on conceptualizing textual terms, and clustering short pieces of text such as Twitter messages. Compared to purely statistical methods such as latent semantic topic modeling or methods that use existing knowledgebases (e.g., WordNet, Freebase and Wikipedia), our approach brings significant improvements in short text understanding as reflected by the clustering accuracy.",
"title": ""
}
] |
[
{
"docid": "bebead03e8645e35a304a425dc34e038",
"text": "Given the potential importance of technology parks, their complexity in terms of the scope of required investment and the growing interest of governments to use them as tools for creating sustainable development there is a pressing need for a better understanding of the critical success factors of these entities. However, Briggs and watt (2001) argued that the goal of many technology parks and the factors driving innovation success are still a mystery. In addition, it is argued that the problem with analyzing technology parks and cluster building is that recent studies analyze “the most celebrated case studies... to ‘explain’ their success” (Holbrook and Wolfe, 2002). This study uses intensive interviewing of technology parks’ managers and managers of tenant firms in the technology park to explore critical success factors of four of Australia’s' technology parks. The study identified the following critical success factors: a culture of risk-taking “entrepreneurism”, an autonomous park management that is independent of university officials and government bureaucrats, an enabling environment, a critical mass of companies that allows for synergies within the technology park, the presence of internationally renounced innovative companies, and finally a shared vision among the technology park stakeholders.",
"title": ""
},
{
"docid": "c8edb6b8ed8176368faf591161718b95",
"text": "A new 4-group model of attachment styles in adulthood is proposed. Four prototypic attachment patterns are defined using combinations of a person's self-image (positive or negative) and image of others (positive or negative). In Study 1, an interview was developed to yield continuous and categorical ratings of the 4 attachment styles. Intercorrelations of the attachment ratings were consistent with the proposed model. Attachment ratings were validated by self-report measures of self-concept and interpersonal functioning. Each style was associated with a distinct profile of interpersonal problems, according to both self- and friend-reports. In Study 2, attachment styles within the family of origin and with peers were assessed independently. Results of Study 1 were replicated. The proposed model was shown to be applicable to representations of family relations; Ss' attachment styles with peers were correlated with family attachment ratings.",
"title": ""
},
{
"docid": "4ab971e837286b95ebbdd1f99c6749c0",
"text": "In this paper we demonstrate results of a technique for synchronizing clocks and estimating ranges between a pair of RF transceivers. The technique uses a periodic exchange of ranging waveforms between two transceivers along with sophisticated delay estimation and tracking. The technique was implemented on wireless testbed transceivers with independent clocks and tested over-the-air in stationary and moving configurations. The technique achieved ~10ps synchronization accuracy and 2.1mm range deviation, using A two-channel oscilloscope and tape measure as truth sources. The timing resolution attained is three orders of magnitude better than the inverse signal bandwidth of the ranging waveform (50MHz⇒ 6m resolution), and is within a small fraction of the carrier wavelength (915MHz⇒ 327mm wavelength). We discuss how this result is consistent with the Weiss-Weinstein bound and cite new applications enabled by this technique.",
"title": ""
},
{
"docid": "29e1a872da2b6432b30d4620a9cd692b",
"text": "Fibromyalgia and depression might represent two manifestations of affective spectrum disorder. They share similar pathophysiology and are largely targeted by the same drugs with dual action on serotoninergic and noradrenergic systems. Here, we review evidence for genetic and environmental factors that predispose, precipitate, and perpetuate fibromyalgia and depression and include laboratory findings on the role of depression in fibromyalgia. Further, we comment on several aspects of fibromyalgia which support the development of reactive depression, substantially more so than in other chronic pain syndromes. However, while sharing many features with depression, fibromyalgia is associated with somatic comorbidities and absolutely defined by fluctuating spontaneous widespread pain. Fibromyalgia may, therefore, be more appropriately grouped together with other functional pain disorders, while psychologically distressed subgroups grouped additionally or solely with affective spectrum disorders.",
"title": ""
},
{
"docid": "232eabfb63f0b908ef3a64d0731ba358",
"text": "This paper reviews the potential of spin-transfer torque devices as an alternative to complementary metal-oxide-semiconductor for non-von Neumann and non-Boolean computing. Recent experiments on spin-transfer torque devices have demonstrated high-speed magnetization switching of nanoscale magnets with small current densities. Coupled with other properties, such as nonvolatility, zero leakage current, high integration density, we discuss that the spin-transfer torque devices can be inherently suitable for some unconventional computing models for information processing. We review several spintronic devices in which magnetization can be manipulated by current induced spin transfer torque and explore their applications in neuromorphic computing and reconfigurable memory-based computing.",
"title": ""
},
{
"docid": "a77c113c691a61101cba1136aaf4b90c",
"text": "This paper describes the acquisition, preparation and properties of a corpus extracted from the official documents of the United Nations (UN). This corpus is available in all 6 official languages of the UN, consisting of around 300 million words per language. We describe the methods we used for crawling, document formatting, and sentence alignment. This corpus also includes a common test set for machine translation. We present the results of a French-Chinese machine translation experiment performed on this corpus.",
"title": ""
},
{
"docid": "2a00d77cb75767b3e4516ced59ea84f6",
"text": "Men and women living in a rural community in Bakossiland, Cameroon were asked to rate the attractiveness of images of male or female figures manipulated to vary in somatotype, waist-to-hip ratio (WHR), secondary sexual traits, and other features. In Study 1, women rated mesomorphic (muscular) and average male somatotypes as most attractive, followed by ectomorphic (slim) and endomorphic (heavily built) figures. In Study 2, amount and distribution of masculine trunk (chest and abdominal) hair was altered progressively in a series of front-posed male figures. A significant preference for one of these images was found, but the most hirsute figure was not judged as most attractive. Study 3 assessed attractiveness of front-posed male figures which varied only in length of the non-erect penis. Extremes of penile size (smallest and largest of five images) were rated as significantly less attractive than three intermediate sizes. In Study 4, Bakossi men rated the attractiveness of back-posed female images varying in WHR (from 0.5-1.0). The 0.8 WHR figure was rated markedly more attractive than others. Study 5 rated the attractiveness of female skin color. Men expressed no consistent preference for either lighter or darker female figures. These results are the first of their kind reported for a Central African community and provide a useful cross-cultural perspective to published accounts on sexual selection, human morphology and attractiveness in the U.S., Europe, and elsewhere.",
"title": ""
},
{
"docid": "3d335bfc7236ea3596083d8cae4f29e3",
"text": "OBJECTIVE\nTo summarise the applications and appropriate use of Dietary Reference Intakes (DRIs) as guidance for nutrition and health research professionals in the dietary assessment of groups and individuals.\n\n\nDESIGN\nKey points from the Institute of Medicine report, Dietary Reference Intakes: Applications in Dietary Assessment, are summarised in this paper. The different approaches for using DRIs to evaluate the intakes of groups vs. the intakes of individuals are highlighted.\n\n\nRESULTS\nEach of the new DRIs is defined and its role in the dietary assessment of groups and individuals is described. Two methods of group assessment and a new method for quantitative assessment of individuals are described. Illustrations are provided on appropriate use of the Estimated Average Requirement (EAR), the Adequate Intake (AI) and the Tolerable Upper Intake Level (UL) in dietary assessment.\n\n\nCONCLUSIONS\nDietary assessment of groups or individuals must be based on estimates of usual (long-term) intake. The EAR is the appropriate DRI to use in assessing groups and individuals. The AI is of limited value in assessing nutrient adequacy, and cannot be used to assess the prevalence of inadequacy. The UL is the appropriate DRI to use in assessing the proportion of a group at risk of adverse health effects. It is inappropriate to use the Recommended Dietary Allowance (RDA) or a group mean intake to assess the nutrient adequacy of groups.",
"title": ""
},
{
"docid": "2c04fd272c90a8c0a74a16980fcb5b03",
"text": "We propose a multimodal, decomposable model for articulated human pose estimation in monocular images. A typical approach to this problem is to use a linear structured model, which struggles to capture the wide range of appearance present in realistic, unconstrained images. In this paper, we instead propose a model of human pose that explicitly captures a variety of pose modes. Unlike other multimodal models, our approach includes both global and local pose cues and uses a convex objective and joint training for mode selection and pose estimation. We also employ a cascaded mode selection step which controls the trade-off between speed and accuracy, yielding a 5x speedup in inference and learning. Our model outperforms state-of-the-art approaches across the accuracy-speed trade-off curve for several pose datasets. This includes our newly-collected dataset of people in movies, FLIC, which contains an order of magnitude more labeled data for training and testing than existing datasets.",
"title": ""
},
{
"docid": "c8a16019564d99007efd88ca23d44d30",
"text": "Cardiac masses are rare entities that can be broadly categorized as either neoplastic or non-neoplastic. Neoplastic masses include benign and malignant tumors. In the heart, metastatic tumors are more common than primary malignant tumors. Whether incidentally found or diagnosed as a result of patients' symptoms, cardiac masses can be identified and further characterized by a range of cardiovascular imaging options. While echocardiography remains the first-line imaging modality, cardiac computed tomography (cardiac CT) has become an increasingly utilized modality for the assessment of cardiac masses, especially when other imaging modalities are non-diagnostic or contraindicated. With high isotropic spatial and temporal resolution, fast acquisition times, and multiplanar image reconstruction capabilities, cardiac CT offers an alternative to cardiovascular magnetic resonance imaging in many patients. Additionally, cardiac masses may be incidentally discovered during cardiac CT for other reasons, requiring imagers to understand the unique features of a diverse range of cardiac masses. Herein, we define the characteristic imaging features of commonly encountered and selected cardiac masses and define the role of cardiac CT among noninvasive imaging options.",
"title": ""
},
{
"docid": "28152cab5f477d9620edaab440467de2",
"text": "The ever-increasing density in cloud computing parties, i.e. users, services, providers and data centres, has led to a significant exponential growth in: data produced and transferred among the cloud computing parties; network traffic; and the energy consumed by the cloud computing massive infrastructure, which is required to respond quickly and effectively to users requests. Transferring big data volume among the aforementioned parties requires a high bandwidth connection, which consumes larger amounts of energy than just processing and storing big data on cloud data centres, and hence producing high carbon dioxide emissions. This power consumption is highly significant when transferring big data into a data centre located relatively far from the users geographical location. Thus, it became high-necessity to locate the lowest energy consumption route between the user and the designated data centre, while making sure the users requirements, e.g. response time, are met. The main contribution of this paper is GreeDi, a network-based routing algorithm to find the most energy efficient path to the cloud data centre for processing and storing big data. The algorithm is, first, formalised by the situation calculus. The linear, goal and dynamic programming approaches used to model the algorithm. The algorithm is then evaluated against the baseline shortest path algorithm with minimum number of nodes traversed, using a real Italian ISP physical network topology.",
"title": ""
},
{
"docid": "69e4bb63a9041b3c95fba1a903bc0e5c",
"text": "Compressed sensing is a novel research area, which was introduced in 2006, and since then has already become a key concept in various areas of applied mathematics, computer science, and electrical engineering. It surprisingly predicts that high-dimensional signals, which allow a sparse representation by a suitable basis or, more generally, a frame, can be recovered from what was previously considered highly incomplete linear measurements by using efficient algorithms. This article shall serve as an introduction to and a survey about compressed sensing.",
"title": ""
},
{
"docid": "51e6db842735ae89419612bf831fce54",
"text": "In this work, we focus on automatically recognizing social conversational strategies that in human conversation contribute to building, maintaining or sometimes destroying a budding relationship. These conversational strategies include self-disclosure, reference to shared experience, praise and violation of social norms. By including rich contextual features drawn from verbal, visual and vocal modalities of the speaker and interlocutor in the current and previous turn, we can successfully recognize these dialog phenomena with an accuracy of over 80% and kappa ranging from 60-80%. Our findings have been successfully integrated into an end-to-end socially aware dialog system, with implications for virtual agents that can use rapport between user and system to improve task-oriented assistance.",
"title": ""
},
{
"docid": "18e248fd8cb1520f9e42353291c15870",
"text": "This paper addresses the problem of automatically estimating the relative pose between a push-broom LIDAR and a camera without the need for artificial calibration targets or other human intervention. Further we do not require the sensors to have an overlapping field of view, it is enough that they observe the same scene but at different times from a moving platform. Matching between sensor modalities is achieved without feature extraction. We present results from field trials which suggest that this new approach achieves an extrinsic calibration accuracy of millimeters in translation and deci-degrees in rotation.",
"title": ""
},
{
"docid": "284fbe98ebbca22efe3edd9b700ba053",
"text": "In this paper, we present a new adaptive dynamic programming approach by integrating a reference network that provides an internal goal representation to help the systems learning and optimization. Specifically, we build the reference network on top of the critic network to form a dual critic network design that contains the detailed internal goal representation to help approximate the value function. This internal goal signal, working as the reinforcement signal for the critic network in our design, is adaptively generated by the reference network and can also be adjusted automatically. In this way, we provide an alternative choice rather than crafting the reinforcement signal manually from prior knowledge. In this paper, we adopt the online action-dependent heuristic dynamic programming (ADHDP) design and provide the detailed design of the dual critic network structure. Detailed Lyapunov stability analysis for our proposed approach is presented to support the proposed structure from a theoretical point of view. Furthermore, we also develop a virtual reality platform to demonstrate the real-time simulation of our approach under different disturbance situations. The overall adaptive learning performance has been tested on two tracking control benchmarks with a tracking filter. For comparative studies, we also present the tracking performance with the typical ADHDP, and the simulation results justify the improved performance with our approach.",
"title": ""
},
{
"docid": "941ee166d7fdeff90ae9815d427f1cf1",
"text": "PURPOSE\nTo estimate the risk of nerve injuries and assess outcomes after sodium tetradecyl sulfate (STS) sclerotherapy of venous malformations (VMs) in children.\n\n\nMATERIALS AND METHODS\nSclerotherapy is the treatment of choice for most VMs, but all sclerotherapy agents are associated with the risk of complications. Neuropathy is considered a rare but potentially serious complication of venous sclerotherapy. The institutional review board waived ethical approval for this retrospective review, in which 647 sclerotherapy procedures were performed in 204 patients (104 female and 100 male patients; mean age, 9 years 6 months [range, 6 months to 17 years 11 months]) as treatment for symptomatic VMs. Technical and clinical success of the treatment was evaluated. Complications were reviewed with a particular focus on nerve injury. Informed consent, specifying the risk of neuropathy, as well as pain, swelling, infection, risks of anesthesia, skin injury, nonresolution or worsening of symptoms, and possible need for further or multiple procedures, was obtained for all patients. Standard sclerotherapy techniques were used. Technical details of all procedures were recorded prospectively. Follow-up included immediate postprocedural assessment and outpatient clinic review. All nerve injuries were recorded. Patients were monitored and treated according to clinical need. Confidence intervals were calculated by using the Wilson method, without correction for continuity.\n\n\nRESULTS\nTreatment was technically successful in 197 of 204 patients (96.6%), and clinical success was achieved in 174 of 204 (85.3%). Thirty-seven of the 647 procedures (5.7%) resulted in a complication, including 11 cases of excessive swelling, nine cases of skin injury, two patients with infection, and two with pain. Motor and/or sensory nerve injuries occurred after seven procedures (1.1%). Five of the seven children had undergone at least one previous sclerotherapy procedure. Neuropathy resolved spontaneously in four patients and partially recovered in three, of whom two underwent surgery. Surgery included debridement of necrotic tissue, carpal tunnel decompression, and external neurolysis.\n\n\nCONCLUSION\nNerve injury is an unusual but not rare complication of STS sclerotherapy. A degree of recovery, which may be complete, can be expected in most patients.",
"title": ""
},
{
"docid": "89aa60cefe11758e539f45c5cba6f48a",
"text": "For undergraduate or advanced undergraduate courses in Classical Natural Language Processing, Statistical Natural Language Processing, Speech Recognition, Computational Linguistics, and Human Language Processing. An explosion of Web-based language techniques, merging of distinct fields, availability of phone-based dialogue systems, and much more make this an exciting time in speech and language processing. The first of its kind to thoroughly cover language technology at all levels and with all modern technologies this text takes an empirical approach to the subject, based on applying statistical and other machine-learning algorithms to large corporations. The authors cover areas that traditionally are taught in different courses, to describe a unified vision of speech and language processing. Emphasis is on practical applications and scientific evaluation. An accompanying Website contains teaching materials for instructors, with pointers to language processing resources on the Web. The Second Edition offers a significant amount of new and extended material. Supplements: Click on the \"Resources\" tab to View Downloadable Files:Solutions Power Point Lecture Slides Chapters 1-5, 8-10, 12-13 and 24 Now Available! For additional resourcse visit the author website: http://www.cs.colorado.edu/~martin/slp.html",
"title": ""
},
{
"docid": "af7f83599c163d0f519f1e2636ae8d44",
"text": "There is a set of characterological attributes thought to be associated with developing success at critical thinking (CT). This paper explores the disposition toward CT theoretically, and then as it appears to be manifest in college students. Factor analytic research grounded in a consensus-based conceptual analysis of CT described seven aspects of the overall disposition toward CT: truth-seeking, open-mindedness, analyticity, systematicity, CTconfidence, inquisitiveness, and cognitive maturity. The California Critical Thinking Disposition Inventory (CCTDI), developed in 1992, was used to sample college students at two comprehensive universities. Entering college freshman students showed strengths in openmindedness and inquisitiveness, weaknesses in systematicity and opposition to truth-seeking. Additional research indicates the disposition toward CT is highly correlated with the psychological constructs of absorption and openness to experience, and strongly predictive of ego-resiliency. A preliminary study explores the interesting and potentially complex interrelationship between the disposition toward CT and CT abilities. In addition to the significance of this work for psychological studies of human development, empirical research on the disposition toward CT promises important implications for all levels of education. 1 This essay appeared as Facione, PA, Sánchez, (Giancarlo) CA, Facione, NC & Gainen, J., (1995). The disposition toward critical thinking. Journal of General Education. Volume 44, Number(1). 1-25.",
"title": ""
},
{
"docid": "5995a2775a6a10cf4f2bd74a2959935d",
"text": "Artemisinin-based combination therapy is recommended to treat Plasmodium falciparum worldwide, but observations of longer artemisinin (ART) parasite clearance times (PCTs) in Southeast Asia are widely interpreted as a sign of potential ART resistance. In search of an in vitro correlate of in vivo PCT after ART treatment, a ring-stage survival assay (RSA) of 0–3 h parasites was developed and linked to polymorphisms in the Kelch propeller protein (K13). However, RSA remains a laborious process, involving heparin, Percoll gradient, and sorbitol treatments to obtain rings in the 0–3 h window. Here two alternative RSA protocols are presented and compared to the standard Percoll-based method, one highly stage-specific and one streamlined for laboratory application. For all protocols, P. falciparum cultures were synchronized with 5 % sorbitol treatment twice over two intra-erythrocytic cycles. For a filtration-based RSA, late-stage schizonts were passed through a 1.2 μm filter to isolate merozoites, which were incubated with uninfected erythrocytes for 45 min. The erythrocytes were then washed to remove lysis products and further incubated until 3 h post-filtration. Parasites were pulsed with either 0.1 % dimethyl sulfoxide (DMSO) or 700 nM dihydroartemisinin in 0.1 % DMSO for 6 h, washed twice in drug-free media, and incubated for 66–90 h, when survival was assessed by microscopy. For a sorbitol-only RSA, synchronized young (0–3 h) rings were treated with 5 % sorbitol once more prior to the assay and adjusted to 1 % parasitaemia. The drug pulse, incubation, and survival assessment were as described above. Ring-stage survival of P. falciparum parasites containing either the K13 C580 or C580Y polymorphism (associated with low and high RSA survival, respectively) were assessed by the described filtration and sorbitol-only methods and produced comparable results to the reported Percoll gradient RSA. Advantages of both new methods include: fewer reagents, decreased time investment, and fewer procedural steps, with enhanced stage-specificity conferred by the filtration method. Assessing P. falciparum ART sensitivity in vitro via RSA can be streamlined and accurately evaluated in the laboratory by filtration or sorbitol synchronization methods, thus increasing the accessibility of the assay to research groups.",
"title": ""
},
{
"docid": "795b64fb3ebead2b565f66558a7be063",
"text": "Agent-based computing represents an exciting new synthesis both for Artificial Intelligence (AI) and, more generally, Computer Science. It has the potential to significantly improve the theory and the practice of modeling, designing, and implementing computer systems. Yet, to date, there has been little systematic analysis of what makes the agent-based approach such an appealing and powerful computational model. Moreover, even less effort has been devoted to discussing the inherent disadvantages that stem from adopting an agent-oriented view. Here both sets of issues are explored. The standpoint of this analysis is the role of agent-based software in solving complex, real-world problems. In particular, it will be argued that the development of robust and scalable software systems requires autonomous agents that can complete their objectives while situated in a dynamic and uncertain environment, that can engage in rich, high-level social interactions, and that can operate within flexible organisational structures. 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
4f1b229c9c6c024684909af800d52432
|
Glowworm swarm optimization for simultaneous capture of multiple local optima of multimodal functions
|
[
{
"docid": "3824a61e476fa359a104d03f7a99262c",
"text": "We describe an artificial ant colony capable of solving the travelling salesman problem (TSP). Ants of the artificial colony are able to generate successively shorter feasible tours by using information accumulated in the form of a pheromone trail deposited on the edges of the TSP graph. Computer simulations demonstrate that the artificial ant colony is capable of generating good solutions to both symmetric and asymmetric instances of the TSP. The method is an example, like simulated annealing, neural networks and evolutionary computation, of the successful use of a natural metaphor to design an optimization algorithm.",
"title": ""
}
] |
[
{
"docid": "4dc5aee7d80e2204cc8b2e9305149cca",
"text": "MapReduce offers an ease-of-use programming paradigm for processing large data sets, making it an attractive model for distributed volunteer computing systems. However, unlike on dedicated resources, where MapReduce has mostly been deployed, such volunteer computing systems have significantly higher rates of node unavailability. Furthermore, nodes are not fully controlled by the MapReduce framework. Consequently, we found the data and task replication scheme adopted by existing MapReduce implementations woefully inadequate for resources with high unavailability.\n To address this, we propose MOON, short for MapReduce On Opportunistic eNvironments. MOON extends Hadoop, an open-source implementation of MapReduce, with adaptive task and data scheduling algorithms in order to offer reliable MapReduce services on a hybrid resource architecture, where volunteer computing systems are supplemented by a small set of dedicated nodes. Our tests on an emulated volunteer computing system, which uses a 60-node cluster where each node possesses a similar hardware configuration to a typical computer in a student lab, demonstrate that MOON can deliver a three-fold performance improvement to Hadoop in volatile, volunteer computing environments.",
"title": ""
},
{
"docid": "180672be0e49be493d9af3ef7b558804",
"text": "Causality is a very intuitive notion that is difficult to make precise without lapsing into tautology. Two ingredients are central to any definition: (1) a set of possible outcomes (counterfactuals) generated by a function of a set of ‘‘factors’’ or ‘‘determinants’’ and (2) a manipulation where one (or more) of the ‘‘factors’’ or ‘‘determinants’’ is changed. An effect is realized as a change in the argument of a stable function that produces the same change in the outcome for a class of interventions that change the ‘‘factors’’ by the same amount. The outcomes are compared at different levels of the factors or generating variables. Holding all factors save one at a constant level, the change in the outcome associated with manipulation of the varied factor is called a causal effect of the manipulated factor. This definition, or some version of it, goes back to Mill (1848) and Marshall (1890). Haavelmo’s (1943) made it more precise within the context of linear equations models. The phrase ‘ceteris paribus’ (everything else held constant) is a mainstay of economic analysis",
"title": ""
},
{
"docid": "0375f63836b083e64a2914dbbf420b17",
"text": "The objective of the present case report is to punctuate the importance of individualized therapy procedures and the accurate diagnosis of the muscles involved in oromandibular dystonia and underline the role of electromyography (EMG). We report a woman who presented sustained jaw movement towards the left, severe difficulty in jaw opening and jaw protrusion. The patient was treated with injections of botulinum A toxin in lateral pterygoid, masseter, platysma, sternoclidomastoid, temporalis muscles with EMG guidance. She experienced an 80% reduction of her symptoms after the first injection. In jaw deviation dystonia symptoms impressively respond to botulinum toxin treatment of the pterygoid muscle. Individualized therapy procedures are necessitated.",
"title": ""
},
{
"docid": "8e764dc66f3e460018638ff633b81184",
"text": "Cloud computing is an emerging technology in distributed computing which facilitates pay per model as per user demand and requirement. Cloud consist of a collection of virtual machine which includes both computational and storage facility. The primary aim of cloud computing is to provide efficient access to remote and geographically distributed resources. Cloud is developing day by day and faces many challenges, one of them is scheduling. Scheduling refers to a set of policies to control the order of work to be performed by a computer system. A good scheduler adapts its scheduling strategy according to the changing environment and the type of task. In this research paper we presented a Generalized Priority algorithm for efficient execution of task and comparison with FCFS and Round Robin Scheduling. Algorithm should be tested in cloud Sim toolkit and result shows that it gives better performance compared to other traditional scheduling algorithm.",
"title": ""
},
{
"docid": "cf5f21e8f0d2ba075f2061c7a69b622a",
"text": "This article presents guiding principles for the assessment of competence developed by the members of the American Psychological Association’s Task Force on Assessment of Competence in Professional Psychology. These principles are applicable to the education, training, and credentialing of professional psychologists, and to practicing psychologists across the professional life span. The principles are built upon a review of competency assessment models, including practices in both psychology and other professions. These principles will help to ensure that psychologists reinforce the importance of a culture of competence. The implications of the principles for professional psychology also are highlighted.",
"title": ""
},
{
"docid": "547423c409d466bcb537a7b0ae0e1758",
"text": "Sequential Bayesian estimation fornonlinear dynamic state-space models involves recursive estimation of filtering and predictive distributions of unobserved time varying signals based on noisy observations. This paper introduces a new filter called the Gaussian particle filter1. It is based on the particle filtering concept, and it approximates the posterior distributions by single Gaussians, similar to Gaussian filters like the extended Kalman filter and its variants. It is shown that under the Gaussianity assumption, the Gaussian particle filter is asymptotically optimal in the number of particles and, hence, has much-improved performance and versatility over other Gaussian filters, especially when nontrivial nonlinearities are present. Simulation results are presented to demonstrate the versatility and improved performance of the Gaussian particle filter over conventional Gaussian filters and the lower complexity than known particle filters. The use of the Gaussian particle filter as a building block of more complex filters is addressed in a companion paper.",
"title": ""
},
{
"docid": "cfc27935a5d53d5c2c92847f4e200a9b",
"text": "Li Gao, Jia Wu, Hong Yang, Zhi Qiao, Chuan Zhou, Yue Hu Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China Quantum Computation & Intelligent Systems Centre, University of Technology Sydney, Australia MathWorks, Beijing, China Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China {gaoli, huyue}@iie.ac.cn, zhiqiao.ict@gmail.com, hong.yang@mathworks.cn, jia.wu@uts.edu.au",
"title": ""
},
{
"docid": "2e9b98fbb1fa15020b374dbd48fb5adc",
"text": "Recently, bipolar fuzzy sets have been studied and applied a bit enthusiastically and a bit increasingly. In this paper we prove that bipolar fuzzy sets and [0,1](2)-sets (which have been deeply studied) are actually cryptomorphic mathematical notions. Since researches or modelings on real world problems often involve multi-agent, multi-attribute, multi-object, multi-index, multi-polar information, uncertainty, or/and limit process, we put forward (or highlight) the notion of m-polar fuzzy set (actually, [0,1] (m)-set which can be seen as a generalization of bipolar fuzzy set, where m is an arbitrary ordinal number) and illustrate how many concepts have been defined based on bipolar fuzzy sets and many results which are related to these concepts can be generalized to the case of m-polar fuzzy sets. We also give examples to show how to apply m-polar fuzzy sets in real world problems.",
"title": ""
},
{
"docid": "e3977392317f51a7cd1742e93a48bea2",
"text": "There is increasing amount of evidence pointing toward a high prevalence of psychiatric conditions among individuals with hypermobile type of Ehlers-Danlos syndrome (JHS/hEDS). A literature review confirms a strong association between anxiety disorders and JHSh/hEDS, and there is also limited but growing evidence that JHSh/hEDS is also associated with depression, eating, and neuro-developmental disorders as well as alcohol and tobacco misuse. The underlying mechanisms behind this association include genetic risks, autonomic nervous system dysfunction, increased exteroceptive and interoceptive mechanisms and decreased proprioception. Recent neuroimaging studies have also shown an increase response in emotion processing brain areas which could explain the high affective reactivity seen in JHS/hEDS. Management of these patients should include psychiatric and psychological approaches, not only to relieve the clinical conditions but also to improve abilities to cope through proper drug treatment, psychotherapy, and psychological rehabilitation adequately coupled with modern physiotherapy. A multidimensional approach to this \"neuroconnective phenotype\" should be implemented to ensure proper assessment and to guide for more specific treatments. Future lines of research should further explore the full dimension of the psychopathology associated with JHS/hEDS to define the nature of the relationship. © 2017 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "f31cbd5b8594e27b9aea23bdb2074a24",
"text": "The hyphenation algorithm of OpenOffice.org 2.0.2 is a generalization of TEX’s hyphenation algorithm that allows automatic non-standard hyphenation by competing standard and non-standard hyphenation patterns. With the suggested integration of linguistic tools for compound decomposition and word sense disambiguation, this algorithm would be able to do also more precise non-standard and standard hyphenation for several languages.",
"title": ""
},
{
"docid": "dc33e4c6352c885fb27e08fa1c310fb3",
"text": "Association rule mining algorithm is used to extract relevant information from database and transmit into simple and easiest form. Association rule mining is used in large set of data. It is used for mining frequent item sets in the database or in data warehouse. It is also one type of data mining procedure. In this paper some of the association rule mining algorithms such as apriori, partition, FP-growth, genetic algorithm etc., can be analyzed for generating frequent itemset in an effective manner. These association rule mining algorithms may differ depend upon their performance and effective pattern generation. So, this paper may concentrate on some of the algorithms used to generate efficient frequent itemset using some of association rule mining algorithms.",
"title": ""
},
{
"docid": "4ebdb9b35a9e70c04357f461efa6953f",
"text": "Bulk acoustic wave (BAW) filters operating at center frequency of 3.7GHz, comprising of BAW resonators utilizing single crystal aluminum nitride (AlN) piezoelectric films epitaxially grown on silicon carbide (SiC) substrates, are reported. Metal-organic chemical vapor deposition (MOCVD) growth was used to obtain single crystal AlN films on 150-mm diameter c-plane semi-insulating SiC substrates with (0004) X-ray diffraction (XRD) rocking curve full-width half-maximum (FWHM) of 0.025°. The fabricated filters (1.25×0.9 sq.mm) had a center frequency of 3.71GHz and a 3dB bandwidth of 100MHz, an insertion loss of 2.0dB and narrow band rejection of 40dB and out-of-band rejection in excess of 37dB to 8GHz. Individual resonators on the same wafer show an electro-mechanical coupling as high as 7.63% and maximum quality-factors up to 1572. Insertion loss of 5ohm resonators configured as individual 2-port devices changed by 0.15dB after high power survival test at 10W. This is the first demonstration of single crystal AlN-on-SiC based BAW resonator and filter technology at 3.7GHz and illustrates the potential of a single crystal AlN-on-SiC based BAW technology platform enabling compact, high power and high performance filter solutions for high frequency mobile, Wi-Fi and infrastructure applications.",
"title": ""
},
{
"docid": "76f033087b24fdb7494dd7271adbb346",
"text": "Navigation research is attracting renewed interest with the advent of learning-based methods. However, this new line of work is largely disconnected from well-established classic navigation approaches. In this paper, we take a step towards coordinating these two directions of research. We set up classic and learning-based navigation systems in common simulated environments and thoroughly evaluate them in indoor spaces of varying complexity, with access to different sensory modalities. Additionally, we measure human performance in the same environments. We find that a classic pipeline, when properly tuned, can perform very well in complex cluttered environments. On the other hand, learned systems can operate more robustly with a limited sensor suite. Both approaches are still far from human-level performance.",
"title": ""
},
{
"docid": "43ff7d61119cc7b467c58c9c2e063196",
"text": "Financial engineering such as trading decision is an emerging research area and also has great commercial potentials. A successful stock buying/selling generally occurs near price trend turning point. Traditional technical analysis relies on some statistics (i.e. technical indicators) to predict turning point of the trend. However, these indicators can not guarantee the accuracy of prediction in chaotic domain. In this paper, we propose an intelligent financial trading system through a new approach: learn trading strategy by probabilistic model from high-level representation of time series – turning points and technical indicators. The main contributions of this paper are two-fold. First, we utilize high-level representation (turning point and technical indicators). High-level representation has several advantages such as insensitive to noise and intuitive to human being. However, it is rarely used in past research. Technical indicator is the knowledge from professional investors, which can generally characterize the market. Second, by combining high-level representation with probabilistic model, the randomness and uncertainty of chaotic system is further reduced. In this way, we achieve great results (comprehensive experiments on S&P500 components) in a chaotic domain in which the prediction is thought impossible in the past. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fabcb243bff004279cfb5d522a7bed4b",
"text": "Vein pattern is the network of blood vessels beneath person’s skin. Vein patterns are sufficiently different across individuals, and they are stable unaffected by ageing and no significant changed in adults by observing. It is believed that the patterns of blood vein are unique to every individual, even among twins. Finger vein authentication technology has several important features that set it apart from other forms of biometrics as a highly secure and convenient means of personal authentication. This paper presents a finger-vein image matching method based on minutiae extraction and curve analysis. This proposed system is implemented in MATLAB. Experimental results show that the proposed method performs well in improving finger-vein matching accuracy.",
"title": ""
},
{
"docid": "27dae6cf20fc07ec2db43a82f4c3a285",
"text": "Web service composition enables seamless and dynamic integration of business applications on the web. The performance of the composed application is determined by the performance of the involved web services. Therefore, non-functional, quality of service aspects are crucial for selecting the web services to take part in the composition. Identifying the best candidate web services from a set of functionally-equivalent services is a multi-criteria decision making problem. The selected services should optimize the overall QoS of the composed application, while satisfying all the constraints specified by the client on individual QoS parameters. In this paper, we propose an approach based on the notion of skyline to effectively and efficiently select services for composition, reducing the number of candidate services to be considered. We also discuss how a provider can improve its service to become more competitive and increase its potential of being included in composite applications. We evaluate our approach experimentally using both real and synthetically generated datasets.",
"title": ""
},
{
"docid": "10c861ca1bdd7133d05f659efc7c9874",
"text": "Based on land use and land cover (LULC) datasets in the late 1970s, the early 1990s, 2004 and 2012, we analyzed characteristics of LULC change in the headwaters of the Yangtze River and Yellow River over the past 30 years contrastively, using the transition matrix and LULC change index. The results showed that, in 2012, the LULC in the headwaters of the Yellow River were different compared to those of the headwaters of the Yangtze River, with more grassland and wetand marshland. In the past 30 years, the grassland and wetand marshland increasing at the expense of sand, gobi, and bare land and desert were the main LULC change types in the headwaters of the Yangtze River, with the macro-ecological situation experiencing a process of degeneration, slight melioration, and continuous melioration, in that order. In the headwaters of the Yellow River, severe reduction of grassland coverage, shrinkage of wetand marshland and the consequential expansion of sand, gobi and bare land were noticed. The macro-ecological situation experienced a process of degeneration, obvious degeneration, and slight melioration, in that order, and the overall change in magnitude was more dramatic than that in the headwaters of the Yangtze River. These different LULC change courses were jointly driven by climate change, grassland-grazing pressure, and the implementation of ecological construction projects.",
"title": ""
},
{
"docid": "bad6560c8c769484a9ce213d0933923e",
"text": "Online support groups have drawn considerable attention from scholars in the past decades. While prior research has explored the interactions and motivations of users, we know relatively little about how culture shapes the way people use and understand online support groups. Drawing on ethnographic research in a Chinese online depression community, we examine how online support groups function in the context of Chinese culture for people with depression. Through online observations and interviews, we uncover the unique interactions among users in this online support group, such as peer diagnosis, peer therapy, and public journaling. These activities were intertwined with Chinese cultural values and the scarcity of mental health resources in China. We also show that online support groups play an important role in fostering individual empowerment and improving public understanding of depression in China. This paper provides insights into the interweaving of culture and online health community use and contributes to a context-rich understanding of online support groups.",
"title": ""
},
{
"docid": "ea0481f2841c203e16b3a323133ba904",
"text": "With more and more household objects built on planned obsolescence and consumed by a fast-growing population, hazardous waste recycling has become a critical challenge. Given the large variability of household waste, current recycling platforms mostly rely on human operators to analyze the scene, typically composed of many object instances piled up in bulk. Helping them by robotizing the unitary extraction is a key challenge to speed up this tedious process. Whereas supervised deep learning has proven very efficient for such object-level scene understanding, e.g ., generic object detection and segmentation in everyday scenes, it however requires large sets of per-pixel labeled images, that are hardly available for numerous application contexts, including industrial robotics. We thus propose a step towards a practical interactive application for generating an object-oriented robotic grasp, requiring as inputs only one depth map of the scene and one user click on the next object to extract. More precisely, we address in this paper the middle issue of object segmentation in top views of piles of bulk objects given a pixel location, namely seed, provided interactively by a human operator. We propose a two-fold framework for generating edge-driven instance segments. First, we repurpose a state-of-the-art fully convolutional object contour detector for seed-based instance segmentation by introducing the notion of edge-mask duality with a novel patch-free and contour-oriented loss function. Second, we train one model using only synthetic scenes, instead of manually labeled training data. Our experimental results show that considering edge-mask duality for training an encoder-decoder network, as we suggest, outperforms a state-of-the-art patch-based network in the present application context.",
"title": ""
},
{
"docid": "da7c8d0643e4fadee91188497d97b52a",
"text": "In current systems, memory accesses to a DRAM chip must obey a set of minimum latency restrictions specified in the DRAM standard. Such timing parameters exist to guarantee reliable operation. When deciding the timing parameters, DRAM manufacturers incorporate a very large margin as a provision against two worst-case scenarios. First, due to process variation, some outlier chips are much slower than others and cannot be operated as fast. Second, chips become slower at higher temperatures, and all chips need to operate reliably at the highest supported (i.e., worst-case) DRAM temperature (85° C). In this paper, we show that typical DRAM chips operating at typical temperatures (e.g., 55° C) are capable of providing a much smaller access latency, but are nevertheless forced to operate at the largest latency of the worst-case. Our goal in this paper is to exploit the extra margin that is built into the DRAM timing parameters to improve performance. Using an FPGA-based testing platform, we first characterize the extra margin for 115 DRAM modules from three major manufacturers. Our results demonstrate that it is possible to reduce four of the most critical timing parameters by a minimum/maximum of 17.3%/54.8% at 55°C without sacrificing correctness. Based on this characterization, we propose Adaptive-Latency DRAM (AL-DRAM), a mechanism that adoptively reduces the timing parameters for DRAM modules based on the current operating condition. AL-DRAM does not require any changes to the DRAM chip or its interface. We evaluate AL-DRAM on a real system that allows us to reconfigure the timing parameters at runtime. We show that AL-DRAM improves the performance of memory-intensive workloads by an average of 14% without introducing any errors. We discuss and show why AL-DRAM does not compromise reliability. We conclude that dynamically optimizing the DRAM timing parameters can reliably improve system performance.",
"title": ""
}
] |
scidocsrr
|
e8bbe717500b0fb201be13a68456ecd4
|
Understanding the Digital Marketing Environment with KPIs and Web Analytics
|
[
{
"docid": "0994065c757a88373a4d97e5facfee85",
"text": "Scholarly literature suggests digital marketing skills gaps in industry, but these skills gaps are not clearly identified. The research aims to specify any digital marketing skills gaps encountered by professionals working in communication industries. In-depth interviews were undertaken with 20 communication industry professionals. A focus group followed, testing the rigour of the data. We find that a lack of specific technical skills; a need for best practice guidance on evaluation metrics, and a lack of intelligent futureproofing for dynamic technological change and development are skills gaps currently challenging the communication industry. However, the challenge of integrating digital marketing approaches with established marketing practice emerges as the key skills gap. Emerging from the key findings, a Digital Marketer Model was developed, highlighting the key competencies and skills needed by an excellent digital marketer. The research concludes that guidance on best practice, focusing upon evaluation metrics, futureproofing and strategic integration, needs to be developed for the communication industry. The Digital Marketing Model should be subject to further testing in industry and academia. Suggestions for further research are discussed.",
"title": ""
}
] |
[
{
"docid": "76efa42a492d8eb36b82397e09159c30",
"text": "attempt to foster AI and intelligent robotics research by providing a standard problem where a wide range of technologies can be integrated and examined. The first RoboCup competition will be held at the Fifteenth International Joint Conference on Artificial Intelligence in Nagoya, Japan. A robot team must actually perform a soccer game, incorporating various technologies, including design principles of autonomous agents, multiagent collaboration, strategy acquisition, real-time reasoning, robotics, and sensor fusion. RoboCup is a task for a team of multiple fast-moving robots under a dynamic environment. Although RoboCup’s final target is a world cup with real robots, RoboCup offers a software platform for research on the software aspects of RoboCup. This article describes technical challenges involved in RoboCup, rules, and the simulation environment.",
"title": ""
},
{
"docid": "1d26fc3a5f07e7ea678753e7171846c4",
"text": "Data uncertainty is an inherent property in various applications due to reasons such as outdated sources or imprecise measurement. When data mining techniques are applied to these data, their uncertainty has to be considered to obtain high quality results. We present UK-means clustering, an algorithm that enhances the K-means algorithm to handle data uncertainty. We apply UKmeans to the particular pattern of moving-object uncertainty. Experimental results show that by considering uncertainty, a clustering algorithm can produce more accurate results.",
"title": ""
},
{
"docid": "711daac04e27d0a413c99dd20f6f82e1",
"text": "The gesture recognition using motion capture data and depth sensors has recently drawn more attention in vision recognition. Currently most systems only classify dataset with a couple of dozens different actions. Moreover, feature extraction from the data is often computational complex. In this paper, we propose a novel system to recognize the actions from skeleton data with simple, but effective, features using deep neural networks. Features are extracted for each frame based on the relative positions of joints (PO), temporal differences (TD), and normalized trajectories of motion (NT). Given these features a hybrid multi-layer perceptron is trained, which simultaneously classifies and reconstructs input data. We use deep autoencoder to visualize learnt features. The experiments show that deep neural networks can capture more discriminative information than, for instance, principal component analysis can. We test our system on a public database with 65 classes and more than 2,000 motion sequences. We obtain an accuracy above 95% which is, to our knowledge, the state of the art result for such a large dataset.",
"title": ""
},
{
"docid": "b93455e6b023910bf7711d56d16f62a2",
"text": "Learning low-dimensional embeddings of knowledge graphs is a powerful approach used to predict unobserved or missing edges between entities. However, an open challenge in this area is developing techniques that can go beyond simple edge prediction and handle more complex logical queries, which might involve multiple unobserved edges, entities, and variables. For instance, given an incomplete biological knowledge graph, we might want to predict what drugs are likely to target proteins involved with both diseases X and Y?—a query that requires reasoning about all possible proteins that might interact with diseases X and Y. Here we introduce a framework to efficiently make predictions about conjunctive logical queries—a flexible but tractable subset of first-order logic—on incomplete knowledge graphs. In our approach, we embed graph nodes in a low-dimensional space and represent logical operators as learned geometric operations (e.g., translation, rotation) in this embedding space. By performing logical operations within a low-dimensional embedding space, our approach achieves a time complexity that is linear in the number of query variables, compared to the exponential complexity required by a naive enumeration-based approach. We demonstrate the utility of this framework in two application studies on real-world datasets with millions of relations: predicting logical relationships in a network of drug-gene-disease interactions and in a graph-based representation of social interactions derived from a popular web forum.",
"title": ""
},
{
"docid": "6a8afd6713425e7dc047da08d7c4c773",
"text": "We present the first linear time (1 + /spl epsiv/)-approximation algorithm for the k-means problem for fixed k and /spl epsiv/. Our algorithm runs in O(nd) time, which is linear in the size of the input. Another feature of our algorithm is its simplicity - the only technique involved is random sampling.",
"title": ""
},
{
"docid": "93133be6094bba6e939cef14a72fa610",
"text": "We systematically searched available databases. We reviewed 6,143 studies published from 1833 to 2017. Reports in English, French, German, Italian, and Spanish were considered, as were publications in other languages if definitive treatment and recurrence at specific follow-up times were described in an English abstract. We assessed data in the manner of a meta-analysis of RCTs; further we assessed non-RCTs in the manner of a merged data analysis. In the RCT analysis including 11,730 patients, Limberg & Dufourmentel operations were associated with low recurrence of 0.6% (95%CI 0.3–0.9%) 12 months and 1.8% (95%CI 1.1–2.4%) respectively 24 months postoperatively. Analysing 89,583 patients from RCTs and non-RCTs, the Karydakis & Bascom approaches were associated with recurrence of only 0.2% (95%CI 0.1–0.3%) 12 months and 0.6% (95%CI 0.5–0.8%) 24 months postoperatively. Primary midline closure exhibited long-term recurrence up to 67.9% (95%CI 53.3–82.4%) 240 months post-surgery. For most procedures, only a few RCTs without long term follow up data exist, but substitute data from numerous non-RCTs are available. Recurrence in PSD is highly dependent on surgical procedure and by follow-up time; both must be considered when drawing conclusions regarding the efficacy of a procedure.",
"title": ""
},
{
"docid": "3688c987419daade77c44912fbc72ecf",
"text": "We propose a visual food recognition framework that integrates the inherent semantic relationships among fine-grained classes. Our method learns semantics-aware features by formulating a multi-task loss function on top of a convolutional neural network (CNN) architecture. It then refines the CNN predictions using a random walk based smoothing procedure, which further exploits the rich semantic information. We evaluate our algorithm on a large \"food-in-the-wild\" benchmark, as well as a challenging dataset of restaurant food dishes with very few training images. The proposed method achieves higher classification accuracy than a baseline which directly fine-tunes a deep learning network on the target dataset. Furthermore, we analyze the consistency of the learned model with the inherent semantic relationships among food categories. Results show that the proposed approach provides more semantically meaningful results than the baseline method, even in cases of mispredictions.",
"title": ""
},
{
"docid": "566a2b2ff835d10e0660fb89fd6ae618",
"text": "We argue that an understanding of the faculty of language requires substantial interdisciplinary cooperation. We suggest how current developments in linguistics can be profitably wedded to work in evolutionary biology, anthropology, psychology, and neuroscience. We submit that a distinction should be made between the faculty of language in the broad sense (FLB) and in the narrow sense (FLN). FLB includes a sensory-motor system, a conceptual-intentional system, and the computational mechanisms for recursion, providing the capacity to generate an infinite range of expressions from a finite set of elements. We hypothesize that FLN only includes recursion and is the only uniquely human component of the faculty of language. We further argue that FLN may have evolved for reasons other than language, hence comparative studies might look for evidence of such computations outside of the domain of communication (for example, number, navigation, and social relations).",
"title": ""
},
{
"docid": "72345bf404d21d0f7aa1e54a5710674c",
"text": "Many real-world data sets exhibit skewed class distributions in which almost all cases are allotted to a class and far fewer cases to a smaller, usually more interesting class. A classifier induced from an imbalanced data set has, typically, a low error rate for the majority class and an unacceptable error rate for the minority class. This paper firstly provides a systematic study on the various methodologies that have tried to handle this problem. Finally, it presents an experimental study of these methodologies with a proposed mixture of expert agents and it concludes that such a framework can be a more effective solution to the problem. Our method seems to allow improved identification of difficult small classes in predictive analysis, while keeping the classification ability of the other classes in an acceptable level.",
"title": ""
},
{
"docid": "b23d73e29fc205df97f073eb571a2b47",
"text": "In this paper, we study two different trajectory planning problems for robotmanipulators. In the first case, the end-effector of the robot is constrained to move along a prescribed path in the workspace, whereas in the second case, the trajectory of the end-effector has to be determined in the presence of obstacles. Constraints of this type are called holonomic constraints. Both problems have been solved as optimal control problems. Given the dynamicmodel of the robotmanipulator, the initial state of the system, some specifications about the final state and a set of holonomic constraints, one has to find the trajectory and the actuator torques that minimize the energy consumption during the motion. The presence of holonomic constraints makes the optimal control problem particularly difficult to solve. Our method involves a numerical resolution of a reformulation of the constrained optimal control problem into an unconstrained calculus of variations problem in which the state space constraints and the dynamic equations, also regarded as constraints, are treated by means of special derivative multipliers. We solve the resulting calculus of variations problem using a numerical approach based on the Euler–Lagrange necessary condition in the integral form in which time is discretized and admissible variations for each variable are approximated using a linear combination of piecewise continuous basis functions of time. The use of the Euler–Lagrange necessary condition in integral form avoids the need for numerical corner conditions and thenecessity of patching together solutions between corners. In thisway, a generalmethod for the solution of constrained optimal control problems is obtained inwhich holonomic constraints can be easily treated. Numerical results of the application of thismethod to trajectory planning of planar horizontal robot manipulators with two revolute joints are reported. © 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5cd726f49dd0cb94fe7d2d724da9f215",
"text": "We implement pedestrian dead reckoning (PDR) for indoor localization. With a waist-mounted PDR based system on a smart-phone, we estimate the user's step length that utilizes the height change of the waist based on the Pythagorean Theorem. We propose a zero velocity update (ZUPT) method to address sensor drift error: Simple harmonic motion and a low-pass filtering mechanism combined with the analysis of gait characteristics. This method does not require training to develop the step length model. Exploiting the geometric similarity between the user trajectory and the floor map, our map matching algorithm includes three different filters to calibrate the direction errors from the gyro using building floor plans. A sliding-window-based algorithm detects corners. The system achieved 98% accuracy in estimating user walking distance with a waist-mounted phone and 97% accuracy when the phone is in the user's pocket. ZUPT improves sensor drift error (the accuracy drops from 98% to 84% without ZUPT) using 8 Hz as the cut-off frequency to filter out sensor noise. Corner length impacted the corner detection algorithm. In our experiments, the overall location error is about 0.48 meter.",
"title": ""
},
{
"docid": "dc18c0e5737b3d641418e5b33dd3f0e7",
"text": "Millimeter wave (mmWave) communications have recently attracted large research interest, since the huge available bandwidth can potentially lead to the rates of multiple gigabit per second per user. Though mmWave can be readily used in stationary scenarios, such as indoor hotspots or backhaul, it is challenging to use mmWave in mobile networks, where the transmitting/receiving nodes may be moving, channels may have a complicated structure, and the coordination among multiple nodes is difficult. To fully exploit the high potential rates of mmWave in mobile networks, lots of technical problems must be addressed. This paper presents a comprehensive survey of mmWave communications for future mobile networks (5G and beyond). We first summarize the recent channel measurement campaigns and modeling results. Then, we discuss in detail recent progresses in multiple input multiple output transceiver design for mmWave communications. After that, we provide an overview of the solution for multiple access and backhauling, followed by the analysis of coverage and connectivity. Finally, the progresses in the standardization and deployment of mmWave for mobile networks are discussed.",
"title": ""
},
{
"docid": "b5b8ae3b7b307810e1fe39630bc96937",
"text": "Up to this point in the text we have considered the use of the logistic regression model in settings where we observe a single dichotomous response for a sample of statistically independent subjects. However, there are settings where the assumption of independence of responses may not hold for a variety of reasons. For example, consider a study of asthma in children in which subjects are interviewed bi-monthly for 1 year. At each interview the date is recorded and the mother is asked whether, during the previous 2 months, her child had an asthma attack severe enough to require medical attention, whether the child had a chest cold, and how many smokers lived in the household. The child’s age and race are recorded at the first interview. The primary outcome is the occurrence of an asthma attack. What differs here is the lack of independence in the observations due to the fact that we have six measurements on each child. In this example, each child represents a cluster of correlated observations of the outcome. The measurements of the presence or absence of a chest cold and the number of smokers residing in the household can change from observation to observation and thus are called clusterspecific or time-varying covariates. The date changes in a systematic way and is recorded to model possible seasonal effects. The child’s age and race are constant for the duration of the study and are referred to as cluster-level or time-invariant covariates. The terms clusters, subjects, cluster-specific and cluster-level covariates are general enough to describe multiple measurements on a single subject or single measurements on different but related subjects. An example of the latter setting would be a study of all children in a household. Repeated measurements on the same subject or a subject clustered in some sort of unit (household, hospital, or physician) are the two most likely scenarios leading to correlated data.",
"title": ""
},
{
"docid": "3e7941e6d2e5c2991030950d2a13d48f",
"text": "Mobile edge cloud (MEC) is a model for enabling on-demand elastic access to, or an interaction with a shared pool of reconfigurable computing resources such as servers, storage, peer devices, applications, and services, at the edge of the wireless network in close proximity to mobile users. It overcomes some obstacles of traditional central clouds by offering wireless network information and local context awareness as well as low latency and bandwidth conservation. This paper presents a comprehensive survey of MEC systems, including the concept, architectures, and technical enablers. First, the MEC applications are explored and classified based on different criteria, the service models and deployment scenarios are reviewed and categorized, and the factors influencing the MEC system design are discussed. Then, the architectures and designs of MEC systems are surveyed, and the technical issues, existing solutions, and approaches are presented. The open challenges and future research directions of MEC are further discussed.",
"title": ""
},
{
"docid": "8c662416784ddaf8dae387926ba0b17c",
"text": "Autoimmune reactions to vaccinations may rarely be induced in predisposed individuals by molecular mimicry or bystander activation mechanisms. Autoimmune reactions reliably considered vaccine-associated, include Guillain-Barré syndrome after 1976 swine influenza vaccine, immune thrombocytopenic purpura after measles/mumps/rubella vaccine, and myopericarditis after smallpox vaccination, whereas the suspected association between hepatitis B vaccine and multiple sclerosis has not been further confirmed, even though it has been recently reconsidered, and the one between childhood immunization and type 1 diabetes seems by now to be definitively gone down. Larger epidemiological studies are needed to obtain more reliable data in most suggested associations.",
"title": ""
},
{
"docid": "9f40a57159a06ecd9d658b4d07a326b5",
"text": "_____________________________________________________________________________ The aim of the present study was to investigate a cytotoxic oxidative cell stress related and the antioxidant profile of kaempferol, quercetin, and isoquercitrin. The flavonol compounds were able to act as scavengers of superoxide anion (but not hydrogen peroxide), hypochlorous acid, chloramine and nitric oxide. Although flavonoids are widely described as antioxidants and this activity is generally related to beneficial effects on human health, here we show important cytotoxic actions of three well known flavonoids. They were able to promote hemolysis which one was exacerbated on the presence of hypochlorous acid but not by AAPH radical. Therefore, WWW.SCIELO.BR/EQ VOLUME 36, NÚMERO 2, 2011",
"title": ""
},
{
"docid": "4129d2906d3d3d96363ff0812c8be692",
"text": "In this paper, we propose a picture recommendation system built on Instagram, which facilitates users to query correlated pictures by keying in hashtags or clicking images. Users can access the value-added information (or pictures) on Instagram through the recommendation platform. In addition to collecting available hashtags using the Instagram API, the system also uses the Free Dictionary to build the relationships between all the hashtags in a knowledge base. Thus, two kinds of correlations can be provided for a query in the system; i.e., user-defined correlation and system-defined correlation. Finally, the experimental results show that users have good satisfaction degrees with both user-defined correlation and system-defined correlation methods.",
"title": ""
},
{
"docid": "8e28f1561b3a362b2892d7afa8f2164c",
"text": "Inference based techniques are one of the major approaches to analyze DNS data and detecting malicious domains. The key idea of inference techniques is to first define associations between domains based on features extracted from DNS data. Then, an inference algorithm is deployed to infer potential malicious domains based on their direct/indirect associations with known malicious ones. The way associations are defined is key to the effectiveness of an inference technique. It is desirable to be both accurate (i.e., avoid falsely associating domains with no meaningful connections) and with good coverage (i.e., identify all associations between domains with meaningful connections). Due to the limited scope of information provided by DNS data, it becomes a challenge to design an association scheme that achieves both high accuracy and good coverage. In this paper, we propose a new association scheme to identify domains controlled by the same entity. Our key idea is an indepth analysis of active DNS data to accurately separate public IPs from dedicated ones, which enables us to build high-quality associations between domains. Our scheme avoids the pitfall of naive approaches that rely on weak “co-IP” relationship of domains (i.e., two domains are resolved to the same IP) that results in low detection accuracy, and, meanwhile, identifies many meaningful connections between domains that are discarded by existing state-of-the-art approaches. Our experimental results show that the proposed association scheme not only significantly improves the domain coverage compared to existing approaches but also achieves better detection accuracy. Existing path-based inference algorithm is specifically designed for DNS data analysis. It is effective but computationally expensive. To further demonstrate the strength of our domain association scheme as well as improving inference efficiency, we investigate the effectiveness of combining our association scheme with the generic belief propagation algorithm. Through comprehensive experiments, we show that this approach offers significant efficiency and scalability improvement with only minor negative impact of detection accuracy, which suggests that such a combination could offer a good tradeoff for malicious domain detection in practice.",
"title": ""
},
{
"docid": "acfdfe2de61ec2697ef865b1e5a42721",
"text": "Artificial Immune System (AIS) algorithm is a novel and vibrant computational paradigm, enthused by the biological immune system. Over the last few years, the artificial immune system has been sprouting to solve numerous computational and combinatorial optimization problems. In this paper, we introduce the restricted MAX-kSAT as a constraint optimization problem that can be solved by a robust computational technique. Hence, we will implement the artificial immune system algorithm incorporated with the Hopfield neural network to solve the restricted MAX-kSAT problem. The proposed paradigm will be compared with the traditional method, Brute force search algorithm integrated with Hopfield neural network. The results demonstrate that the artificial immune system integrated with Hopfield network outperforms the conventional Hopfield network in solving restricted MAX-kSAT. All in all, the result has provided a concrete evidence of the effectiveness of our proposed paradigm to be applied in other constraint optimization problem. The work presented here has many profound implications for future studies to counter the variety of satisfiability problem.",
"title": ""
}
] |
scidocsrr
|
ddc98d695bb751038bf54ef276b3033d
|
A peek into the future: predicting the evolution of popularity in user generated content
|
[
{
"docid": "8ab791e9db930fd27f6459e72a1687e5",
"text": "The problem of indexing time series has attracted much interest. Most algorithms used to index time series utilize the Euclidean distance or some variation thereof. However, it has been forcefully shown that the Euclidean distance is a very brittle distance measure. Dynamic time warping (DTW) is a much more robust distance measure for time series, allowing similar shapes to match even if they are out of phase in the time axis. Because of this flexibility, DTW is widely used in science, medicine, industry and finance. Unfortunately, however, DTW does not obey the triangular inequality and thus has resisted attempts at exact indexing. Instead, many researchers have introduced approximate indexing techniques or abandoned the idea of indexing and concentrated on speeding up sequential searches. In this work, we introduce a novel technique for the exact indexing of DTW. We prove that our method guarantees no false dismissals and we demonstrate its vast superiority over all competing approaches in the largest and most comprehensive set of time series indexing experiments ever undertaken.",
"title": ""
}
] |
[
{
"docid": "5f3816b9b53c9b0d38027fc6a81c07de",
"text": "The accelerating rate of scientific publications makes it difficult to find relevant citations or related work. Context-aware citation recommendation aims to solve this problem by providing a curated list of high-quality candidates given a short passage of text. Existing literature adopts bag-of-word representations leading to the loss of valuable semantics and lacks the ability to integrate metadata or generalize to unseen manuscripts in the training set. We propose a flexible encoder-decoder architecture called Neural Citation Network (NCN), embodying a robust representation of the citation context with a max time delay neural network, further augmented with an attention mechanism and author networks. The recurrent neural network decoder consults this representation when determining the optimal paper to recommend based solely on its title. Quantitative results on the large-scale CiteSeer dataset reveal NCN cultivates a significant improvement over competitive baselines. Qualitative evidence highlights the effectiveness of the proposed end-to-end neural network revealing a promising research direction for citation recommendation.",
"title": ""
},
{
"docid": "9b1cd0c567ba1d93f2d0ac8c72f0be9a",
"text": "The complexities of pediatric brain imaging have precluded studies that trace the neural development of cognitive skills acquired during childhood. Using a task that isolates reading-related brain activity and minimizes confounding performance effects, we carried out a cross-sectional functional magnetic resonance imaging (fMRI) study using subjects whose ages ranged from 6 to 22 years. We found that learning to read is associated with two patterns of change in brain activity: increased activity in left-hemisphere middle temporal and inferior frontal gyri and decreased activity in right inferotemporal cortical areas. Activity in the left-posterior superior temporal sulcus of the youngest readers was associated with the maturation of their phonological processing abilities. These findings inform current reading models and provide strong support for Orton's 1925 theory of reading development.",
"title": ""
},
{
"docid": "8de530a30b8352e36b72f3436f47ffb2",
"text": "This paper presents a Bayesian optimization method with exponential convergencewithout the need of auxiliary optimization and without the δ-cover sampling. Most Bayesian optimization methods require auxiliary optimization: an additional non-convex global optimization problem, which can be time-consuming and hard to implement in practice. Also, the existing Bayesian optimization method with exponential convergence [ 1] requires access to the δ-cover sampling, which was considered to be impractical [ 1, 2]. Our approach eliminates both requirements and achieves an exponential convergence rate.",
"title": ""
},
{
"docid": "74dead8ad89ae4a55105fb7ae95d3e20",
"text": "Improved health is one of the many reasons people choose to adopt a vegetarian diet, and there is now a wealth of evidence to support the health benefi ts of a vegetarian diet. Abstract: There is now a significant amount of research that demonstrates the health benefits of vegetarian and plant-based diets, which have been associated with a reduced risk of obesity, diabetes, heart disease, and some types of cancer as well as increased longevity. Vegetarian diets are typically lower in fat, particularly saturated fat, and higher in dietary fiber. They are also likely to include more whole grains, legumes, nuts, and soy protein, and together with the absence of red meat, this type of eating plan may provide many benefits for the prevention and treatment of obesity and chronic health problems, including diabetes and cardiovascular disease. Although a well-planned vegetarian or vegan diet can meet all the nutritional needs of an individual, it may be necessary to pay particular attention to some nutrients to ensure an adequate intake, particularly if the person is on a vegan diet. This article will review the evidence for the health benefits of a vegetarian diet and also discuss strategies for meeting the nutritional needs of those following a vegetarian or plant-based eating pattern.",
"title": ""
},
{
"docid": "511cbf0bc1ddc925a2b7b6deaa752912",
"text": "This paper is about long term navigation in dynamic environments. In previous work we introduced a framework which stored distinct visual appearances of a workspace, known as experiences. These are used to improve localisation on future visits. In this work we introduce a new introspective process, executed between sorties, thats aims by careful discovery of the relationships between experiences, to further improve the performance of our system. We evaluate our new approach on 37km of stereo data captured over a three month period.",
"title": ""
},
{
"docid": "090887c325fa3bf3ed928011f6b14c72",
"text": "R apid advances in electronic networks and computerbased information systems have given us enormous capabilities to process, store, and transmit digital data in most business sectors. This has transformed the way we conduct trade, deliver government services, and provide health care. Changes in communication and information technologies and particularly their confluence has raised a number of concerns connected with the protection of organizational information assets. Achieving consensus regarding safeguards for an information system, among different stakeholders in an organization, has become more difficult than solving many technical problems that might arise. This “Technical Opinion” focuses on understanding the nature of information security in the next millennium. Based on this understanding it suggests a set of principles that would help in managing information security in the future.",
"title": ""
},
{
"docid": "fc9fe094b3e46a85b7564a89730347fd",
"text": "We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code. To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model's modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models.",
"title": ""
},
{
"docid": "868b55cc5b83ea6997000aa6aab84128",
"text": "Job boards and professional social networks heavily use recommender systems in order to better support users in exploring job advertisements. Detecting the similarity between job advertisements is important for job recommendation systems as it allows, for example, the application of item-to-item based recommendations. In this work, we research the usage of dense vector representations to enhance a large-scale job recommendation system and to rank German job advertisements regarding their similarity. We follow a two-folded evaluation scheme: (1) we exploit historic user interactions to automatically create a dataset of similar jobs that enables an offline evaluation. (2) In addition, we conduct an online A/B test and evaluate the best performing method on our platform reaching more than 1 million users. We achieve the best results by combining job titles with full-text job descriptions. In particular, this method builds dense document representation using words of the titles to weigh the importance of words of the full-text description. In the online evaluation, this approach allows us to increase the click-through rate on job recommendations for active users by 8.0%.",
"title": ""
},
{
"docid": "dd6b922a2cced45284cd1c67ad3be247",
"text": "Today’s interconnected socio-economic and environmental challenges require the combination and reuse of existing integrated modelling solutions. This paper contributes to this overall research area, by reviewing a wide range of currently available frameworks, systems and emerging technologies for integrated modelling in the environmental sciences. Based on a systematic review of the literature, we group related studies and papers into viewpoints and elaborate on shared and diverging characteristics. Our analysis shows that component-based modelling frameworks and scientific workflow systems have been traditionally used for solving technical integration challenges, but ultimately, the appropriate framework or system strongly depends on the particular environmental phenomenon under investigation. The study also shows that in general individual integrated modelling solutions do not benefit from components and models that are provided by others. It is this island (or silo) situation, which results in low levels of model reuse for multi-disciplinary settings. This seems mainly due to the fact that the field as such is highly complex and diverse. A unique integrated modelling solution, which is capable of dealing with any environmental scenario, seems to be unaffordable because of the great variety of data formats, models, environmental phenomena, stakeholder networks, user perspectives and social aspects. Nevertheless, we conclude that the combination of modelling tools, which address complementary viewpoints such as service-based combined with scientific workflow systems, or resource-modelling on top of virtual research environments could lead to sustainable information systems, which would advance model sharing, reuse and integration. Next steps for improving this form of multi-disciplinary interoperability are sketched.",
"title": ""
},
{
"docid": "60eec67cd3b60258a6b3179c33279a22",
"text": "We present a new efficient edge-preserving filter-“tree filter”-to achieve strong image smoothing. The proposed filter can smooth out high-contrast details while preserving major edges, which is not achievable for bilateral-filter-like techniques. Tree filter is a weighted-average filter, whose kernel is derived by viewing pixel affinity in a probabilistic framework simultaneously considering pixel spatial distance, color/intensity difference, as well as connectedness. Pixel connectedness is acquired by treating pixels as nodes in a minimum spanning tree (MST) extracted from the image. The fact that an MST makes all image pixels connected through the tree endues the filter with the power to smooth out high-contrast, fine-scale details while preserving major image structures, since pixels in small isolated region will be closely connected to surrounding majority pixels through the tree, while pixels inside large homogeneous region will be automatically dragged away from pixels outside the region. The tree filter can be separated into two other filters, both of which turn out to have fast algorithms. We also propose an efficient linear time MST extraction algorithm to further improve the whole filtering speed. The algorithms give tree filter a great advantage in low computational complexity (linear to number of image pixels) and fast speed: it can process a 1-megapixel 8-bit image at ~ 0.25 s on an Intel 3.4 GHz Core i7 CPU (including the construction of MST). The proposed tree filter is demonstrated on a variety of applications.",
"title": ""
},
{
"docid": "a299b0f58aaba6efff9361ff2b5a1e69",
"text": "The continuing growth of World Wide Web and on-line text collections makes a large volume of information available to users. Automatic text summarization allows users to quickly understand documents. In this paper, we propose an automated technique for single document summarization which combines content-based and graph-based approaches and introduce the Hopfield network algorithm as a technique for ranking text segments. A series of experiments are performed using the DUC collection and a Thai-document collection. The results show the superiority of the proposed technique over reference systems, in addition the Hopfield network algorithm on undirected graph is shown to be the best text segment ranking algorithm in the study",
"title": ""
},
{
"docid": "acc6e1effff63fa8fdd9b794454e6817",
"text": "Load balancing in the cloud computing environment has an important impact on the performance. Good load balancing makes cloud computing more efficient and improves user satisfaction. This article introduces a better load balance model for the public cloud based on the cloud partitioning concept with a switch mechanism to choose different strategies for different situations. The algorithm applies the game theory to the load balancing strategy to improve the efficiency in the public cloud environment.",
"title": ""
},
{
"docid": "3fbbe02ff11faa5cf6d537d5bcb0e658",
"text": "This paper reports on a mixed-method research project that examined the attitudes of computer users toward accidental/naive information security (InfoSec) behaviour. The aim of this research was to investigate the extent to which attitude data elicited from repertory grid technique (RGT) interviewees support their responses collected via an online survey questionnaire. Twenty five university students participated in this two-stage project. Individual attitude scores were calculated for each of the research methods and were compared across seven behavioural focus areas using Spearman product-moment correlation coefficient. The two sets of data exhibited a small-to-medium correlation when individual attitudes were analysed for each of the focus areas. In summary, this exploratory research indicated that the two research approaches were reasonably complementary and the RGT interview results tended to triangulate the attitude scores derived from the online survey questionnaire, particularly in regard to attitudes toward Incident Reporting behaviour, Email Use behaviour and Social Networking Site Use behaviour. The results also highlighted some attitude items in the online questionnaire that need to be reviewed for clarity, relevance and non-ambiguity.",
"title": ""
},
{
"docid": "71d5fba169222eaab6a7fcb5a7417c90",
"text": "Melanoma is amongst most aggressive types of cancer. However, it is highly curable if detected in its early stages. Prescreening of suspicious moles and lesions for malignancy is of great importance. Detection can be done by images captured by standard cameras, which are more preferable due to low cost and availability. One important step in computerized evaluation of skin lesions is accurate detection of lesion’s region, i.e. segmentation of an image into two regions as lesion and normal skin. Accurate segmentation can be challenging due to burdens such as illumination variation and low contrast between lesion and healthy skin. In this paper, a method based on deep neural networks is proposed for accurate extraction of a lesion region. The input image is preprocessed and then its patches are fed to a convolutional neural network (CNN). Local texture and global structure of the patches are processed in order to assign pixels to lesion or normal classes. A method for effective selection of training patches is used for more accurate detection of a lesion’s border. The output segmentation mask is refined by some post processing operations. The experimental results of qualitative and quantitative evaluations demonstrate that our method can outperform other state-of-the-art algorithms exist in the literature.",
"title": ""
},
{
"docid": "6d380dc3fe08d117c090120b3398157b",
"text": "Conversational interfaces are likely to become more efficient, intuitive and engaging way for human-computer interaction than today’s text or touch-based interfaces. Current research efforts concerning conversational interfaces focus primarily on question answering functionality, thereby neglecting support for search activities beyond targeted information lookup. Users engage in exploratory search when they are unfamiliar with the domain of their goal, unsure about the ways to achieve their goals, or unsure about their goals in the first place. Exploratory search is often supported by approaches from information visualization. However, such approaches cannot be directly translated to the setting of conversational search. In this paper we investigate the affordances of interactive storytelling as a tool to enable exploratory search within the framework of a conversational interface. Interactive storytelling provides a way to navigate a document collection in the pace and order a user prefers. In our vision, interactive storytelling is to be coupled with a dialogue-based system that provides verbal explanations and responsive design. We discuss challenges and sketch the research agenda required to put this vision into life.",
"title": ""
},
{
"docid": "914c985dc02edd09f0ee27b75ecee6a4",
"text": "Whether the development of face recognition abilities truly reflects changes in how faces, specifically, are perceived, or rather can be attributed to more general perceptual or cognitive development, is debated. Event-related potential (ERP) recordings on the scalp offer promise for this issue because they allow brain responses to complex visual stimuli to be relatively well isolated from other sensory, cognitive and motor processes. ERP studies in 5- to 16-year-old children report large age-related changes in amplitude, latency (decreases) and topographical distribution of the early visual components, the P1 and the occipito-temporal N170. To test the face specificity of these effects, we recorded high-density ERPs to pictures of faces, cars, and their phase-scrambled versions from 72 children between the ages of 4 and 17, and a group of adults. We found that none of the previously reported age-dependent changes in amplitude, latency or topography of the P1 or N170 were specific to faces. Most importantly, when we controlled for age-related variations of the P1, the N170 appeared remarkably similar in amplitude and topography across development, with much smaller age-related decreases in latencies than previously reported. At all ages the N170 showed equivalent face-sensitivity: it had the same topography and right hemisphere dominance, it was absent for meaningless (scrambled) stimuli, and larger and earlier for faces than cars. The data also illustrate the large amount of inter-individual and inter-trial variance in young children's data, which causes the N170 to merge with a later component, the N250, in grand-averaged data. Based on our observations, we suggest that the previously reported \"bi-fid\" N170 of young children is in fact the N250. Overall, our data indicate that the electrophysiological markers of face-sensitive perceptual processes are present from 4 years of age and do not appear to change throughout development.",
"title": ""
},
{
"docid": "c4490ecc0b0fb0641dc41313d93ccf44",
"text": "Machine learning predictive modeling algorithms are governed by “hyperparameters” that have no clear defaults agreeable to a wide range of applications. The depth of a decision tree, number of trees in a forest, number of hidden layers and neurons in each layer in a neural network, and degree of regularization to prevent overfitting are a few examples of quantities that must be prescribed for these algorithms. Not only do ideal settings for the hyperparameters dictate the performance of the training process, but more importantly they govern the quality of the resulting predictive models. Recent efforts to move from a manual or random adjustment of these parameters include rough grid search and intelligent numerical optimization strategies. This paper presents an automatic tuning implementation that uses local search optimization for tuning hyperparameters of modeling algorithms in SAS® Visual Data Mining and Machine Learning. The AUTOTUNE statement in the TREESPLIT, FOREST, GRADBOOST, NNET, SVMACHINE, and FACTMAC procedures defines tunable parameters, default ranges, user overrides, and validation schemes to avoid overfitting. Given the inherent expense of training numerous candidate models, the paper addresses efficient distributed and parallel paradigms for training and tuning models on the SAS® ViyaTM platform. It also presents sample tuning results that demonstrate improved model accuracy and offers recommendations for efficient and effective model tuning.",
"title": ""
},
{
"docid": "2296eaed4239fe7a3bbac83ec92b2f5d",
"text": "This paper presents a novel capacitive type joint torque sensor for robotic applications. The proposed torque sensor enables to measure the rotational torque value while it decouples the external Force/Torque loads without any complicated computing procedure. To measure the joint torque, just two capacitive transducer cells are used. These two cells are located in the opposite sides of each other, which compensates the cross couplings of the torques when external Force/Torque loads are applied. To simply the manufacturing process, the proposed sensor is designed to be composed of three plate-shaped parts and a single printed circuit board (PCB). Lastly, the developed torque sensor is manufactured and its performances are experimentally demonstrated.",
"title": ""
},
{
"docid": "19548ee85a25f7536783e480e6d80b3b",
"text": "A family of two-phase interleaved LLC (iLLC) resonant converter with hybrid rectifier is proposed for wide output voltage range applications. The primary sides of the two LLC converters are in parallel, and the connection of the secondary windings in the two LLC converters can be regulated by the hybrid rectifier according to the output voltage. Variable frequency control is employed to regulate the output voltage and the secondary windings are in series when the output voltage is high. Fixed-frequency phase-shift control is adopted to regulate the configuration of the secondary windings as well as the output voltage when the output voltage is low. The output voltage range is extended by adaptively changing the configuration of the hybrid rectifier, which results in reduced switching frequency range, circulating current, and conduction losses of the LLC resonant tank. Zero voltage switching and zero current switching are achieved for all the active switches and diodes, respectively, within the entire operation range. The operation principles are analyzed and a 3.5 kW prototype with 400 V input voltage and 150–500 V output voltage is built and tested to evaluate the feasibility of the proposed method.",
"title": ""
},
{
"docid": "e5175084f08ad8efc3244f52cbb8ef7b",
"text": "We consider a multi-agent framework for distributed optimization where each agent in the network has access to a local convex function and the collective goal is to achieve consensus on the parameters that minimize the sum of the agents’ local functions. We propose an algorithm wherein each agent operates asynchronously and independently of the other agents in the network. When the local functions are strongly-convex with Lipschitz-continuous gradients, we show that a subsequence of the iterates at each agent converges to a neighbourhood of the global minimum, where the size of the neighbourhood depends on the degree of asynchrony in the multi-agent network. When the agents work at the same rate, convergence to the global minimizer is achieved. Numerical experiments demonstrate that Asynchronous Subgradient-Push can minimize the global objective faster than state-of-the-art synchronous first-order methods, is more robust to failing or stalling agents, and scales better with the network size.",
"title": ""
}
] |
scidocsrr
|
b4e8f32a4ebc44ece89ced8913dbb03c
|
Instance Weighting for Neural Machine Translation Domain Adaptation
|
[
{
"docid": "3355c37593ee9ef1b2ab29823ca8c1d4",
"text": "The paper overviews the 11th evaluation campaign organized by the IWSLT workshop. The 2014 evaluation offered multiple tracks on lecture transcription and translation based on the TED Talks corpus. In particular, this year IWSLT included three automatic speech recognition tracks, on English, German and Italian, five speech translation tracks, from English to French, English to German, German to English, English to Italian, and Italian to English, and five text translation track, also from English to French, English to German, German to English, English to Italian, and Italian to English. In addition to the official tracks, speech and text translation optional tracks were offered, globally involving 12 other languages: Arabic, Spanish, Portuguese (B), Hebrew, Chinese, Polish, Persian, Slovenian, Turkish, Dutch, Romanian, Russian. Overall, 21 teams participated in the evaluation, for a total of 76 primary runs submitted. Participants were also asked to submit runs on the 2013 test set (progress test set), in order to measure the progress of systems with respect to the previous year. All runs were evaluated with objective metrics, and submissions for two of the official text translation tracks were also evaluated with human post-editing.",
"title": ""
},
{
"docid": "4292a60a5f76fd3e794ce67d2ed6bde3",
"text": "If two translation systems differ differ in performance on a test set, can we trust that this indicates a difference in true system quality? To answer this question, we describe bootstrap resampling methods to compute statistical significance of test results, and validate them on the concrete example of the BLEU score. Even for small test sizes of only 300 sentences, our methods may give us assurances that test result differences are real.",
"title": ""
},
{
"docid": "daec5e8f1be6bc9e6217ded92897697b",
"text": "Although new corpora are becoming increasingly available for machine translation, only those that belong to the same or similar domains are typically able to improve translation performance. Recently Neural Machine Translation (NMT) has become prominent in the field. However, most of the existing domain adaptation methods only focus on phrase-based machine translation. In this paper, we exploit the NMT’s internal embedding of the source sentence and use the sentence embedding similarity to select the sentences which are close to in-domain data. The empirical adaptation results on the IWSLT English-French and NIST Chinese-English tasks show that the proposed methods can substantially improve NMT performance by 2.4-9.0 BLEU points, outperforming the existing state-of-the-art baseline by 2.3-4.5 BLEU points.",
"title": ""
}
] |
[
{
"docid": "9e4e60c7e2dfb5afd077f479a74c17c0",
"text": "This paper presents a completely data-driven and machine-learning-based approach, in two stages, to first characterize and then forecast hourly water demand in the short term with applications of two different data sources: urban water demand (SCADA data) and individual customer water consumption (AMR data). In the first case, reliable forecasting can be used to optimize operations, particularly the pumping schedule, in order to reduce energy-related costs, while in the second case, the comparison between forecast and actual values may support the online detection of anomalies, such as smart meter faults, fraud or possible cyber-physical attacks. Results are presented for a real case: the water distribution network in Milan.",
"title": ""
},
{
"docid": "01d8f6e022099977bdcf92ee5735e11d",
"text": "We present a novel deep learning based image inpainting system to complete images with free-form masks and inputs. e system is based on gated convolutions learned from millions of images without additional labelling efforts. e proposed gated convolution solves the issue of vanilla convolution that treats all input pixels as valid ones, generalizes partial convolution by providing a learnable dynamic feature selection mechanism for each channel at each spatial location across all layers. Moreover, as free-form masks may appear anywhere in images with any shapes, global and local GANs designed for a single rectangular mask are not suitable. To this end, we also present a novel GAN loss, named SN-PatchGAN, by applying spectral-normalized discriminators on dense image patches. It is simple in formulation, fast and stable in training. Results on automatic image inpainting and user-guided extension demonstrate that our system generates higher-quality and more exible results than previous methods. We show that our system helps users quickly remove distracting objects, modify image layouts, clear watermarks, edit faces and interactively create novel objects in images. Furthermore, visualization of learned feature representations reveals the eectiveness of gated convolution and provides an interpretation of how the proposed neural network lls in missing regions. More high-resolution results and video materials are available at hp://jiahuiyu.com/deepll2.",
"title": ""
},
{
"docid": "77c72fe890aa1479fc6cd5d6737bcde3",
"text": "Since smartphones have stored diverse sensitive privacy information, including credit card and so on, a great deal of malware are desired to tamper them. As one of the most prevalent platforms, Android contains sensitive resources that can only be accessed via corresponding APIs, and the APIs can be invoked only when user has authorized permissions in the Android permission model. However, a novel threat called privilege escalation attack may bypass this watchdog. It's presented as that an application with less permissions can access sensitive resources through public interfaces of a more privileged application, which is especially useful for malware to hide sensitive functions by dispersing them into multiple programs. We explore privilege-escalation malware evolution techniques on samples from Android Malware Genome Project. And they have showed great effectiveness against a set of powerful antivirus tools provided by VirusTotal. The detection ratios present different and distinguished reduction, compared to an average 61% detection ratio before transformation. In order to conquer this threat model, we have developed a tool called DroidAlarm to conduct a full-spectrum analysis for identifying potential capability leaks and present concrete capability leak paths by static analysis on Android applications. And we can still alarm all these cases by exposing capability leak paths in them.",
"title": ""
},
{
"docid": "2c69729c72935eae8889843f9aee5f6b",
"text": "Some students, for a variety of factors, struggle to complete high school on time. To address this problem, school districts across the U.S. use intervention programs to help struggling students get back on track academically. Yet in order to best apply those programs, schools need to identify off-track students as early as possible and enroll them in the most appropriate intervention. Unfortunately, identifying and prioritizing students in need of intervention remains a challenging task. This paper describes work that builds on current systems by using advanced data science methods to produce an extensible and scalable predictive framework for providing partner U.S. public school districts with individual early warning indicator systems. Our framework employs machine learning techniques to identify struggling students and describe features that are useful for this task, evaluating these techniques using metrics important to school administrators. By doing so, our framework, developed with the common need of several school districts in mind, provides a common set of tools for identifying struggling students and the factors associated with their struggles. Further, by integrating data from disparate districts into a common system, our framework enables cross-district analyses to investigate common early warning indicators not just within a single school or district, but across the U.S. and beyond.",
"title": ""
},
{
"docid": "fa03a0640ada358378f1b4915aa68be2",
"text": "Recent evidence suggests that there are two possible systems for empathy: a basic emotional contagion system and a more advanced cognitive perspective-taking system. However, it is not clear whether these two systems are part of a single interacting empathy system or whether they are independent. Additionally, the neuroanatomical bases of these systems are largely unknown. In this study, we tested the hypothesis that emotional empathic abilities (involving the mirror neuron system) are distinct from those related to cognitive empathy and that the two depend on separate anatomical substrates. Subjects with lesions in the ventromedial prefrontal (VM) or inferior frontal gyrus (IFG) cortices and two control groups were assessed with measures of empathy that incorporate both cognitive and affective dimensions. The findings reveal a remarkable behavioural and anatomic double dissociation between deficits in cognitive empathy (VM) and emotional empathy (IFG). Furthermore, precise anatomical mapping of lesions revealed Brodmann area 44 to be critical for emotional empathy while areas 11 and 10 were found necessary for cognitive empathy. These findings are consistent with these cortices being different in terms of synaptic hierarchy and phylogenetic age. The pattern of empathy deficits among patients with VM and IFG lesions represents a first direct evidence of a double dissociation between emotional and cognitive empathy using the lesion method.",
"title": ""
},
{
"docid": "0ae071bc719fdaac34a59991e66ab2b8",
"text": "It has recently been shown in a brain-computer interface experiment that motor cortical neurons change their tuning properties selectively to compensate for errors induced by displaced decoding parameters. In particular, it was shown that the three-dimensional tuning curves of neurons whose decoding parameters were reassigned changed more than those of neurons whose decoding parameters had not been reassigned. In this article, we propose a simple learning rule that can reproduce this effect. Our learning rule uses Hebbian weight updates driven by a global reward signal and neuronal noise. In contrast to most previously proposed learning rules, this approach does not require extrinsic information to separate noise from signal. The learning rule is able to optimize the performance of a model system within biologically realistic periods of time under high noise levels. Furthermore, when the model parameters are matched to data recorded during the brain-computer interface learning experiments described above, the model produces learning effects strikingly similar to those found in the experiments.",
"title": ""
},
{
"docid": "950a0c8a41823fcc93de771309c1a055",
"text": "This article describes a low vision rehabilitation program operating within a hospital-based outpatient rehabilitation clinic. The program uses a team approach combining ophthalmology and occupational therapy services. Patients are referred to the program by their primary care physician for a low vision evaluation completed jointly by the ophthalmologist and occupational therapist. The ophthalmology portion of the evaluation includes assessment of visual acuity, contrast sensitivity function, and macular perimetry with a scanning laser ophthalmoscope. The occupational therapy evaluation focuses on assessing the functional limitations experienced by the patient due to the vision loss and determining how the patient is best able to use remaining vision to complete daily activities. Occupational therapy treatment emphasizes training the patient to use remaining vision as efficiently and effectively as possible to complete daily activities and includes training in use of optical devices. Because of the specialized nature of the service provided, additional postgraduate preparation is needed to enable occupational therapists to provide effective low vision rehabilitation.",
"title": ""
},
{
"docid": "27faf9d6e9f62de9e86794427420cb5d",
"text": "This paper explores time variation in bond risk, as measured by the covariation of bond returns with stock returns and with consumption growth, and in the volatility of bond returns. A robust stylized fact in empirical finance is that the spread between the yield on long-term bonds and short-term bonds forecasts positively future excess returns on bonds at varying horizons, and that the short-term nominal interest rate forecasts positively stock return volatility and exchange rate volatility. This paper presents evidence that movements in both the short-term nominal interest rate and the yield spread are positively related to changes in subsequent realized bond risk and bond return volatility. The yield spread appears to proxy for business conditions, while the short rate appears to proxy for inflation and economic uncertainty. A decomposition of bond betas into a real cash flow risk component, and a discount rate risk component shows that yield spreads have offsetting effects in each component. A widening yield spread is correlated with reduced cash-flow (or inflationary) risk for bonds, but it is also correlated with larger discount rate risk for bonds. The short rate forecasts only the discount rate component of bond beta. JEL classification: G12. 1Graduate School of Business Administration, Baker Libray 367, Harvard University, Boston MA 02163, USA, CEPR, and NBER. Email lviceira@hbs.edu. Website http://www.people.hbs.edu/lviceira/. I am grateful to John Campbell, Jakub Jurek, André Perold, participants in the Lisbon International Workshop on the Predictability of Financial Markets, and especially to two anonymous referees and Andréas Heinen for helpful comments and suggestions. I am also very grateful to Johnny Kang for exceptionally able research assistance. I acknowledge the Division of Research at the Harvard Business School for their generous financial support.",
"title": ""
},
{
"docid": "e2f878f2ecc62bdbaa5e578f8a2b6be5",
"text": "A standard technique from the hashing literature is to use two hash functions h1(x) and h2(x) to simulate additional hash functions of the form gi(x) = h1(x) + ih2(x). We demonstrate that this technique can be usefully applied to Bloom filters and related data structures. Specifically, only two hash functions are necessary to effectively implement a Bloom filter without any loss in the asymptotic false positive probability. This leads to less computation and potentially less need for randomness in practice.",
"title": ""
},
{
"docid": "8c072981fd0b949f54a39c043dfb75ce",
"text": "Several studies in the literature have shown that the words people use are indicative of their psychological states. In particular, depression was found to be associated with distinctive linguistic patterns. In this talk, I will describe our first steps to try to identify as early as possible if the writer is developing a depressive state. I will detail the methodology we have adopted to build and make publicly available a test collection on depression and language use. The resulting corpus includes a series of textual interactions written by different subjects. The new collection not only encourages research on differences in language between depressed and non-depressed individuals, but also on the evolution of the language use of depressed individuals. I will also present the new CLEF lab that we will run next year on this topic (eRisk 2017), that includes a novel depression detection task and the proposal of effectiveness measure to systematically compare early detection algorithms and baseline results. Bio : Fabio Crestani est titulaire d'un diplôme en statistiques de l'Université de Padoue (Italie) et d'une maîtrise et d'un doctorat en sciences informatiques de l'Université de Glasgow (Royaume-Uni). Ses principaux domaines de recherche sont la recherche d'information, la fouille de textes et les bibliothèques numériques. Il a co-édité 10 livres et publié plus de 160 publications dans ces domaines de recherche. Il a été rédacteur en chef de Information Processing and Management (Elsevier) jusqu'en 2015 et membre du comité de rédaction de plusieurs revues. Ses travaux sur les réseaux sociaux sont particulièrement en phase avec les recherches menées par plusieurs équipes de l'IRIT.",
"title": ""
},
{
"docid": "29db0699c332efd2d2dd1612defab65c",
"text": "Denial of Service (DoS) attacks are important topics for security courses that teach ethical hacking techniques and intrusion detection. This paper presents a case study of the implementation of comprehensive offensive hands-on lab exercises about three common DoS attacks. The exercises teach students how to perform practically the DoS attacks in an isolated network laboratory environment. The paper discuses also some ethical and legal issues related to teaching ethical hacking, and then lists steps that schools and educators should take to improve the chances of having a successful and problem free information security programs.",
"title": ""
},
{
"docid": "025076c60f680a6e7311f07b3027b13c",
"text": "The changing nature of warfare has seen a paradigm shift from the conventional to asymmetric, contactless warfare such as information and cyber warfare. Excessive dependence on information and communication technologies, cloud infrastructures, big data analytics, data-mining and automation in decision making poses grave threats to business and economy in adversarial environments. Adversarial machine learning is a fast growing area of research which studies the design of Machine Learning algorithms that are robust in adversarial environments. This paper presents a comprehensive survey of this emerging area and the various techniques of adversary modelling. We explore the threat models for Machine Learning systems and describe the various techniques to attack and defend them. We present privacy issues in these models and describe a cyber-warfare test-bed to test the effectiveness of the various attack-defence strategies and conclude with some open problems in this area of research.",
"title": ""
},
{
"docid": "b7e7d9813304c8616fc7055c8a5d832f",
"text": "C. F. Dormann (carsten.dormann@biom.uni-freiburg.de), B. Gruber and S. Lautenbach, Helmholtz Centre for Environmental Research-UFZ, Dept of Computational Landscape Ecology, Permoserstr. 15, DE-04318 Leipzig, Germany. CFD also at: Biometry and Environmental System Analysis, Tennenbacher Stra ß e 4, Univ. Freiburg, DE-79085 Freiburg, Germany. BG also at: Inst. for Applied Ecology, Faculty of Applied Sciences, Univ. of Canberra, ACT 2601, Australia. SL also at: Univ. of Bonn, Inst. of Geodesy and Geoinformation, Dept Urban Planning and Real Estate Management, Nussallee 1, DE-53115 Bonn, Germany. – J. Elith, School of Botany, Th e Univ. of Melbourne, Parkville, VIC 3010, Australia. – S. Bacher, Univ. of Fribourg, Dept of Biology, Unit of Ecology and Evolution, Chemin du Mus é e 10, CH-1700 Fribourg, Switzerland. – C. Buchmann, Univ. of Potsdam, Plant Ecology and Nature Conservation, Maulbeerallee 2, DE-14469 Potsdam, Germany. – G. Carl, Helmholtz Centre for Environmental Research-UFZ, Dept of Community Ecology, Th eodor-Lieser-Str. 4, DE-06120 Halle, Germany. – G. Carr é , Inra Paca, Domaine Saint-Paul, Site Agroparc, FR-84914 Avignon, France. – B. Lafourcade and T. M ü nkem ü ller, Laboratoire d’Ecologie Alpine, UMR-CNRS 5553, Univ. J. Fourier, BP 53, FR-38041 Grenoble Cedex 9, France. – J. R. Garc í a Marqu é z, Senckenberg Research Inst. and Natural History Museum, Biodiversity and Climate Research Centre (LOEWE Bik-F), Senckenberganlage 25, DE-60325 Frankfurt/Main, Germany. – P. J. Leit ã o, Geomatics Lab, Geography Dept, Humboldt-Univ. Berlin, Rudower Chaussee 16, DE-12489 BerlinAdlershof, Germany. PJL also at: Centre for Applied Ecology, Inst. of Agronomy, Technical Univ. of Lisbon, Tapada da Ajuda, PT-1349 -017 Lisboa, Portugal. – C. McClean, Environment Dept, Univ. of York, Heslington, York YO10 5DD, UK. – P. E. Osborne, Centre for Environmental Sciences, Faculty of Engineering and the Environment, Univ. of Southampton, Highfi eld, Southampton SO17 1BJ, UK. – B. Reineking, Biogeographical Modelling, BayCEER, Univ. of Bayreuth, Universit ä tsstr. 30, DE-95447 Bayreuth, Germany. – B. Schr ö der, Inst. of Earth and Environmental Sciences, Univ. of Potsdam, Karl-Liebknecht-Str. 24/25, DE-14476 Potsdam, Germany. BS also at: Landscape Ecology, Technische Univ. München, Emil-Ramann-Str. 6, DE-85354 Freising, Germany. – A. K. Skidmore, ITC, Univ. of Twente, PO Box 217, NL-7000 AE Enschede, the Netherlands. – D. Zurell, Univ. of Potsdam, Plant Ecology and Nature Conservation, Maulbeerallee 2, DE-14469 Potsdam, Germany. DZ also at: Inst. of Earth and Environmental Sciences, Univ. of Potsdam, Karl-Liebknecht-Str. 24/25, DE-14476 Potsdam, Germany.",
"title": ""
},
{
"docid": "72c06d3c033c6a7c90a24256612ef3ae",
"text": "Constrained State-space Model Predictive Control is presented in the paper. Predictive controller based on incremental linear state-space process model and quadratic criterion is derived. Typical types of constraints are considered – limits on manipulated, state and controlled variables. Control experiments with nonlinear model of multivariable laboratory process are simulated first and real experiment is realized afterwards.",
"title": ""
},
{
"docid": "aa84af0f609f2593e4e8c33d3f2bd91c",
"text": "Massively Multiplayer Online Role Playing Games (MMORPGs) create large virtual communities. Online gaming shows potential not just for entertaining, but also for education. The aim of this research project is to investigate the use of commercial MMORPGs to support second language teaching. MMORPGs offer a digital safe space in which students can communicate by using their target language with global players. This qualitative research based on ethnography and action research investigates the students’ experiences of language learning and performing while they play in the MMORPGs. Research was conducted in both the ‘real’ and ‘virtual’ worlds. In the real world the researcher observes the interaction with the MMORPGs by the students through actual discussion, and screen video captures while they are playing. In the virtual world, the researcher takes on the role of a character in the MMORPG enabling the researcher to get an inside point of view of the students and their own MMORPG characters. This latter approach also uses action research to allow the researcher to provide anonymous/private support to the students including in-game instruction, confidence building, and some support of language issues in a safe and friendly way. Using action research with MMORPGs in the real world facilitates a number of opportunities for learning and teaching including opportunities to practice language and individual and group experiences of communicating with other native/ second language speakers for the students. The researcher can also develop tutorial exercises and discussion for teaching plans based on the students’ experiences with the MMORPGs. The results from this research study demonstrate that MMORPGs offer a safe, fun, informal and effective learning space for supporting language teaching. Furthermore the use of MMORPGs help the students’ confidence in using their second language and provide additional benefits such as a better understanding of the culture and use of language in different contexts.",
"title": ""
},
{
"docid": "1256f0799ed585092e60b50fb41055be",
"text": "So far, plant identification has challenges for sev eral researchers. Various methods and features have been proposed. However, there are still many approaches could be investigated to develop robust plant identification systems. This paper reports several xperiments in using Zernike moments to build folia ge plant identification systems. In this case, Zernike moments were combined with other features: geometr ic features, color moments and gray-level co-occurrenc e matrix (GLCM). To implement the identifications systems, two approaches has been investigated. Firs t approach used a distance measure and the second u sed Probabilistic Neural Networks (PNN). The results sh ow that Zernike Moments have a prospect as features in leaf identification systems when they are combin ed with other features.",
"title": ""
},
{
"docid": "cc5b1a8100e8d4d7be5dfb80c4866aab",
"text": "A fundamental characteristic of multicellular organisms is the specialization of functional cell types through the process of differentiation. These specialized cell types not only characterize the normal functioning of different organs and tissues, they can also be used as cellular biomarkers of a variety of different disease states and therapeutic/vaccine responses. In order to serve as a reference for cell type representation, the Cell Ontology has been developed to provide a standard nomenclature of defined cell types for comparative analysis and biomarker discovery. Historically, these cell types have been defined based on unique cellular shapes and structures, anatomic locations, and marker protein expression. However, we are now experiencing a revolution in cellular characterization resulting from the application of new high-throughput, high-content cytometry and sequencing technologies. The resulting explosion in the number of distinct cell types being identified is challenging the current paradigm for cell type definition in the Cell Ontology. In this paper, we provide examples of state-of-the-art cellular biomarker characterization using high-content cytometry and single cell RNA sequencing, and present strategies for standardized cell type representations based on the data outputs from these cutting-edge technologies, including “context annotations” in the form of standardized experiment metadata about the specimen source analyzed and marker genes that serve as the most useful features in machine learning-based cell type classification models. We also propose a statistical strategy for comparing new experiment data to these standardized cell type representations. The advent of high-throughput/high-content single cell technologies is leading to an explosion in the number of distinct cell types being identified. It will be critical for the bioinformatics community to develop and adopt data standard conventions that will be compatible with these new technologies and support the data representation needs of the research community. The proposals enumerated here will serve as a useful starting point to address these challenges.",
"title": ""
},
{
"docid": "9bb00052fbd3f7306f1b1f32370c454e",
"text": "Modeling a collection of similar regression or classification tasks can be improved by making the tasks ‘learn from each other’. In machine learning, this subject is approached through ‘multitask learning’, where parallel tasks are modeled as multiple outputs of the same network. In multilevel analysis this is generally implemented through the mixed-effects linear model where a distinction is made between ‘fixed effects’, which are the same for all tasks, and ‘random effects’, which may vary between tasks. In the present article we will adopt a Bayesian approach in which some of the model parameters are shared (the same for all tasks) and others more loosely connected through a joint prior distribution that can be learned from the data. We seek in this way to combine the best parts of both the statistical multilevel approach and the neural network machinery. The standard assumption expressed in both approaches is that each task can learn equally well from any other task. In this article we extend the model by allowing more differentiation in the similarities between tasks. One such extension is to make the prior mean depend on higher-level task characteristics. More unsupervised clustering of tasks is obtained if we go from a single Gaussian prior to a mixture of Gaussians. This can be further generalized to a mixture of experts architecture with the gates depending on task characteristics. All three extensions are demonstrated through application both on an artificial data set and on two realworld problems, one a school problem and the other involving single-copy newspaper sales.",
"title": ""
},
{
"docid": "ea2a276dd7b3a1f99b77ad3b8666d59a",
"text": "Nowadays, it is very common for one person to be in different social networks. Linking identical users across different social networks, also known as the User Identity Linkage (UIL) problem, is fundamental for many applications. There are two major challenges in the UIL problem. First, it’s extremely expensive to collect manually linked user pairs as training data. Second, the user attributes in different networks are usually defined and formatted very differently which makes attribute alignment very hard. In this paper we propose CoLink, a general unsupervised framework for the UIL problem. CoLink employs a co-training algorithm, which manipulates two independent models, the attribute-based model and the relationship-based model, and makes them reinforce each other iteratively in an unsupervised way. We also propose the sequence-to-sequence learning as a very effective implementation of the attribute-based model, which can well handle the challenge of the attribute alignment by treating it as a machine translation problem. We apply CoLink to a UIL task of mapping the employees in an enterprise network to their LinkedIn profiles. The experiment results show that CoLink generally outperforms the state-of-the-art unsupervised approaches by an F1 increase over 20%.",
"title": ""
}
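The CoLink passage above describes a co-training loop between an attribute-based model and a relationship-based model. The sketch below shows a generic co-training scheme in that spirit; it is not CoLink itself — the paper's attribute model is a sequence-to-sequence network, whereas here both views are plain logistic regressions over assumed precomputed features, and all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(Xa, Xr, seed_idx, seed_y, n_rounds=5, k=20):
    """Generic co-training over two views of candidate cross-network user pairs.

    Xa, Xr   : (n_pairs, d_a) and (n_pairs, d_r) feature matrices for the
               attribute view and the relationship view (assumed precomputed).
    seed_idx : indices of an initial labeled set (must contain both classes).
    seed_y   : labels for the seed pairs; 1 = same user, 0 = different users.
    """
    n = Xa.shape[0]
    y = np.full(n, -1)
    y[seed_idx] = seed_y
    labeled = set(np.asarray(seed_idx).tolist())
    model_a = LogisticRegression(max_iter=1000)
    model_r = LogisticRegression(max_iter=1000)

    for _ in range(n_rounds):
        idx = np.array(sorted(labeled))
        model_a.fit(Xa[idx], y[idx])
        model_r.fit(Xr[idx], y[idx])
        # each view labels the pairs it is most confident about, growing the shared pool
        for model, X in ((model_a, Xa), (model_r, Xr)):
            unlabeled = np.array([i for i in range(n) if i not in labeled])
            if len(unlabeled) == 0:
                return model_a, model_r, y
            proba = model.predict_proba(X[unlabeled])[:, 1]
            order = np.argsort(-np.abs(proba - 0.5))[:k]   # most confident predictions
            top = unlabeled[order]
            y[top] = (proba[order] > 0.5).astype(int)
            labeled.update(top.tolist())
    return model_a, model_r, y
```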
] |
scidocsrr
|
957bd9c647fc04f4bec7e4ecf3b6f048
|
Distributed Federated Learning for Ultra-Reliable Low-Latency Vehicular Communications
|
[
{
"docid": "f69e0ee2fa795e020c36dd3389ce93da",
"text": "Ensuring ultrareliable and low-latency communication (URLLC) for 5G wireless networks and beyond is of capital importance and is currently receiving tremendous attention in academia and industry. At its core, URLLC mandates a departure from expected utility-based network design approaches, in which relying on average quantities (e.g., average throughput, average delay, and average response time) is no longer an option but a necessity. Instead, a principled and scalable framework which takes into account delay, reliability, packet size, network architecture and topology (across access, edge, and core), and decision-making under uncertainty is sorely lacking. The overarching goal of this paper is a first step to filling this void. Towards this vision, after providing definitions of latency and reliability, we closely examine various enablers of URLLC and their inherent tradeoffs. Subsequently, we focus our attention on a wide variety of techniques and methodologies pertaining to the requirements of URLLC, as well as their applications through selected use cases. These results provide crisp insights for the design of low-latency and high-reliability wireless networks.",
"title": ""
},
{
"docid": "244b583ff4ac48127edfce77bc39e768",
"text": "We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes. The goal is to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of the utmost importance and minimizing the number of rounds of communication is the principal goal. A motivating example arises when we keep the training data locally on users’ mobile devices instead of logging it to a data center for training. In federated optimization, the devices are used as compute nodes performing computation on their local data in order to update a global model. We suppose that we have extremely large number of devices in the network — as many as the number of users of a given service, each of which has only a tiny fraction of the total data available. In particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, it is reasonable to assume that no device has a representative sample of the overall distribution. We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results for sparse convex problems. This work also sets a path for future research needed in the context of federated optimization.",
"title": ""
},
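The federated optimization passage above describes many devices computing updates on small, non-representative local datasets that a server aggregates into a global model with few communication rounds. Below is a minimal federated-averaging-style sketch (a generic baseline, not the sparse-convex algorithm the paper proposes) on a toy least-squares problem; the learning rate, local epochs and synthetic data are illustrative.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few epochs of local gradient descent on one device's data (least squares)."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_round(w_global, clients):
    """One communication round: each device refines the global model locally,
    then the server averages the results weighted by local data size."""
    n_total = sum(len(y) for _, y in clients)
    w_new = np.zeros_like(w_global)
    for X, y in clients:
        w_new += (len(y) / n_total) * local_update(w_global.copy(), X, y)
    return w_new

# toy usage: 100 devices, each holding only a handful of samples
rng = np.random.default_rng(1)
w_true = rng.normal(size=5)
clients = []
for _ in range(100):
    X = rng.normal(size=(8, 5))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=8)))
w = np.zeros(5)
for _ in range(20):
    w = federated_round(w, clients)
print(np.round(w - w_true, 3))   # the residual should be small after a few rounds
```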
{
"docid": "ea87bfc0d6086e367e8950b445529409",
"text": " Queue stability (Chapter 2.1) Scheduling for stability, capacity regions (Chapter 2.3) Linear programs (Chapter 2.3, Chapter 3) Energy optimality (Chapter 3.2) Opportunistic scheduling (Chapter 2.3, Chapter 3, Chapter 4.6) Lyapunov drift and optimization (Chapter 4.1.0-4.1.2, 4.2, 4.3) Inequality constraints and virtual queues (Chapter 4.4) Drift-plus-penalty algorithm (Chapter 4.5) Performance and delay tradeoffs (Chapter 3.2, 4.5) Backpressure routing (Ex. 4.16, Chapter 5.2, 5.3)",
"title": ""
}
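The chapter list above centers on Lyapunov drift and the drift-plus-penalty algorithm. For reference, the per-slot rule is commonly stated as below; this is my paraphrase of the standard formulation, with notation assumed rather than quoted from the text.

```latex
% Per-slot drift-plus-penalty rule (standard form; notation assumed):
% observe the queue backlogs Q_l(t) and the random event \omega(t), then choose
\alpha(t) \in \arg\min_{\alpha \in \mathcal{A}_{\omega(t)}}
  \Bigl[ V\, p(\alpha, \omega(t))
       + \sum_{l} Q_l(t)\,\bigl( a_l(\alpha, \omega(t)) - b_l(\alpha, \omega(t)) \bigr) \Bigr]
% where p is the penalty, a_l / b_l are arrivals / service offered to queue l, and the
% parameter V >= 0 trades an O(1/V) penalty gap against O(V) average queue backlog.
```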
] |
[
{
"docid": "9fd2ec184fa051070466f61845e6df60",
"text": "Buildings across the world contribute significantly to the overall energy consumption and are thus stakeholders in grid operations. Towards the development of a smart grid, utilities and governments across the world are encouraging smart meter deployments. High resolution (often at every 15 minutes) data from these smart meters can be used to understand and optimize energy consumptions in buildings. In addition to smart meters, buildings are also increasingly managed with Building Management Systems (BMS) which control different sub-systems such as lighting and heating, ventilation, and air conditioning (HVAC). With the advent of these smart meters, increased usage of BMS and easy availability and widespread installation of ambient sensors, there is a deluge of building energy data. This data has been leveraged for a variety of applications such as demand response, appliance fault detection and optimizing HVAC schedules. Beyond the traditional use of such data sets, they can be put to effective use towards making buildings smarter and hence driving every possible bit of energy efficiency. Effective use of this data entails several critical areas from sensing to decision making and participatory involvement of occupants. Picking from wide literature in building energy efficiency, we identify five crust areas (also referred to as 5 Is) for realizing data driven energy efficiency in buildings : i) instrument optimally; ii) interconnect sub-systems; iii) inferred decision making; iv) involve occupants and v) intelligent operations. We classify prior work as per these 5 Is and discuss challenges, opportunities and applications across them. Building upon these 5 Is we discuss a well studied problem in building energy efficiency non-intrusive load monitoring (NILM) and how research in this area spans across the 5 Is.",
"title": ""
},
{
"docid": "c8e5257c2ed0023dc10786a3071c6e6a",
"text": "Online 3D reconstruction is gaining newfound interest due to the availability of real-time consumer depth cameras. The basic problem takes live overlapping depth maps as input and incrementally fuses these into a single 3D model. This is challenging particularly when real-time performance is desired without trading quality or scale. We contribute an online system for large and fine scale volumetric reconstruction based on a memory and speed efficient data structure. Our system uses a simple spatial hashing scheme that compresses space, and allows for real-time access and updates of implicit surface data, without the need for a regular or hierarchical grid data structure. Surface data is only stored densely where measurements are observed. Additionally, data can be streamed efficiently in or out of the hash table, allowing for further scalability during sensor motion. We show interactive reconstructions of a variety of scenes, reconstructing both fine-grained details and large scale environments. We illustrate how all parts of our pipeline from depth map pre-processing, camera pose estimation, depth map fusion, and surface rendering are performed at real-time rates on commodity graphics hardware. We conclude with a comparison to current state-of-the-art online systems, illustrating improved performance and reconstruction quality.",
"title": ""
},
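The passage above stores surface data densely only where measurements are observed, via a simple spatial hashing scheme. The sketch below shows the core idea on the CPU: sparse TSDF voxel blocks keyed by integer block coordinates in a hash map, with weighted-average fusion. The block size, voxel size and the xor-of-primes hash are common choices in the voxel-hashing literature, not the paper's exact parameters.

```python
import numpy as np

BLOCK_SIZE = 8          # 8x8x8 voxels per block
VOXEL_SIZE = 0.01       # metres

class SparseTSDFVolume:
    """Voxel blocks allocated only where depth measurements fall."""
    def __init__(self):
        self.blocks = {}    # a Python dict plays the role of the GPU hash table

    @staticmethod
    def block_coord(p):
        return tuple(np.floor(p / (BLOCK_SIZE * VOXEL_SIZE)).astype(int))

    @staticmethod
    def spatial_hash(c, table_size=2**20):
        # classic xor-of-primes spatial hash; only needed when emulating a fixed-size table
        return ((c[0] * 73856093) ^ (c[1] * 19349663) ^ (c[2] * 83492791)) % table_size

    def integrate_point(self, p, tsdf, weight=1.0):
        """Fuse one (point, signed-distance) measurement into its voxel."""
        c = self.block_coord(p)
        block = self.blocks.setdefault(
            c, {"tsdf": np.zeros((BLOCK_SIZE,) * 3), "w": np.zeros((BLOCK_SIZE,) * 3)})
        i, j, k = (np.floor(p / VOXEL_SIZE).astype(int)) % BLOCK_SIZE
        w_old = block["w"][i, j, k]
        block["tsdf"][i, j, k] = (block["tsdf"][i, j, k] * w_old + tsdf * weight) / (w_old + weight)
        block["w"][i, j, k] = w_old + weight

vol = SparseTSDFVolume()
vol.integrate_point(np.array([0.123, 0.456, 0.789]), tsdf=0.004)
print(len(vol.blocks), "allocated block(s)")
```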
{
"docid": "ccb6da03ae9520de4843082ac0583978",
"text": "Zero-shot learning (ZSL) aims to recognize unseen image categories by learning an embedding space between image and semantic representations. For years, among existing works, it has been the center task to learn the proper mapping matrices aligning the visual and semantic space, whilst the importance to learn discriminative representations for ZSL is ignored. In this work, we retrospect existing methods and demonstrate the necessity to learn discriminative representations for both visual and semantic instances of ZSL. We propose an end-to-end network that is capable of 1) automatically discovering discriminative regions by a zoom network; and 2) learning discriminative semantic representations in an augmented space introduced for both user-defined and latent attributes. Our proposed method is tested extensively on two challenging ZSL datasets, and the experiment results show that the proposed method significantly outperforms state-of-the-art methods.",
"title": ""
},
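The zero-shot learning passage above relies on an embedding space between image and semantic representations. As a generic baseline (not the paper's zoom-network/latent-attribute method), the sketch below fits a least-squares map from image features to class attribute vectors on seen classes and scores candidate unseen classes by cosine similarity; all inputs are assumed precomputed.

```python
import numpy as np

def fit_visual_to_semantic(X_seen, y_seen, class_attrs):
    """Least-squares map from image features to class attribute vectors.

    X_seen      : (n, d) image features of seen-class images
    y_seen      : (n,) seen-class indices
    class_attrs : (C, a) attribute/semantic vectors for ALL classes (seen + unseen)
    """
    A_targets = class_attrs[y_seen]                          # (n, a) regression targets
    W, *_ = np.linalg.lstsq(X_seen, A_targets, rcond=None)   # (d, a) embedding matrix
    return W

def predict_unseen(X_test, W, class_attrs, candidate_classes):
    """Score each candidate (possibly unseen) class by cosine similarity in attribute space."""
    proj = X_test @ W                                        # project images into semantic space
    proj = proj / (np.linalg.norm(proj, axis=1, keepdims=True) + 1e-9)
    A = class_attrs[candidate_classes]
    A = A / (np.linalg.norm(A, axis=1, keepdims=True) + 1e-9)
    scores = proj @ A.T                                      # (n_test, n_candidates)
    return np.array(candidate_classes)[np.argmax(scores, axis=1)]
```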
{
"docid": "27b2f82780c4113bb8a234cac0cf38f9",
"text": "Conventional robot manipulators have singularities in their workspaces and constrained spatial movements. Flexible and soft robots provide a unique solution to overcome this limitation. Flexible robot arms have biologically inspired characteristics as flexible limbs and redundant degrees of freedom. From these special characteristics, flexible manipulators are able to develop abilities such as bend, stretch and adjusting stiffness to traverse a complex maze. Many researchers are working to improve capabilities of flexible arms by improving the number of degrees of freedoms and their methodologies. The proposed flexible robot arm is composed of multiple sections and each section contains three similar segments and a base segment. These segments act as the backbone of the basic structure and each section can be controlled by changing the length of three control wires. These control wires pass through each segment and are held in place by springs. This design provides each segment with 2 DOF. The proposed system single section can be bent 90° with respective to its centre axis. Kinematics of the flexible robot is derived with respect to the base segment.",
"title": ""
},
{
"docid": "ba391ddf37a4757bc9b8d9f4465a66dc",
"text": "Adverse childhood experiences (ACEs) have been linked with risky health behaviors and the development of chronic diseases in adulthood. This study examined associations between ACEs, chronic diseases, and risky behaviors in adults living in Riyadh, Saudi Arabia in 2012 using the ACE International Questionnaire (ACE-IQ). A cross-sectional design was used, and adults who were at least 18 years of age were eligible to participate. ACEs event scores were measured for neglect, household dysfunction, abuse (physical, sexual, and emotional), and peer and community violence. The ACE-IQ was supplemented with questions on risky health behaviors, chronic diseases, and mood. A total of 931 subjects completed the questionnaire (a completion rate of 88%); 57% of the sample was female, 90% was younger than 45 years, 86% had at least a college education, 80% were Saudi nationals, and 58% were married. One-third of the participants (32%) had been exposed to 4 or more ACEs, and 10%, 17%, and 23% had been exposed to 3, 2, or 1 ACEs respectively. Only 18% did not have an ACE. The prevalence of risky health behaviors ranged between 4% and 22%. The prevalence of self-reported chronic diseases ranged between 6% and 17%. Being exposed to 4 or more ACEs increased the risk of having chronic diseases by 2-11 fold, and increased risky health behaviors by 8-21 fold. The findings of this study will contribute to the planning and development of programs to prevent child maltreatment and to alleviate the burden of chronic diseases in adults.",
"title": ""
},
{
"docid": "38a74fff83d3784c892230255943ee23",
"text": "Several researchers, present authors included, envision personal mobile robot agents that can assist humans in their daily tasks. Despite many advances in robotics, such mobile robot agents still face many limitations in their perception, cognition, and action capabilities. In this work, we propose a symbiotic interaction between robot agents and humans to overcome the robot limitations while allowing robots to also help humans. We introduce a visitor’s companion robot agent, as a natural task for such symbiotic interaction. The visitor lacks knowledge of the environment but can easily open a door or read a door label, while the mobile robot with no arms cannot open a door and may be confused about its exact location, but can plan paths well through the building and can provide useful relevant information to the visitor. We present this visitor companion task in detail with an enumeration and formalization of the actions of the robot agent in its interaction with the human. We briefly describe the wifi-based robot localization algorithm and show results of the different levels of human help to the robot during its navigation. We then test the value of robot help to the visitor during the task to understand the relationship tradeoffs. Our work has been fully implemented in a mobile robot agent, CoBot, which has successfully navigated for several hours and continues to navigate in our indoor environment.",
"title": ""
},
{
"docid": "9341757e2403b6fd63738f8ec0d33a15",
"text": "The objective of this study was to review the literature with respect to the root and canal systems in the maxillary first molar. Root anatomy studies were divided into laboratory studies (in vitro), clinical root canal system anatomy studies (in vivo) and clinical case reports of anomalies. Over 95% (95.9%) of maxillary first molars had three roots and 3.9% had two roots. The incidence of fusion of any two or three roots was approximately 5.2%. Conical and C-shaped roots and canals were rarely found (0.12%). This review contained the most data on the canal morphology of the mesiobuccal root with a total of 8399 teeth from 34 studies. The incidence of two canals in the mesiobuccal root was 56.8% and of one canal was 43.1% in a weighted average of all reported studies. The incidence of two canals in the mesiobuccal root was higher in laboratory studies (60.5%) compared to clinical studies (54.7%). Less variation was found in the distobuccal and palatal roots and the results were reported from fourteen studies consisting of 2576 teeth. One canal was found in the distobuccal root in 98.3% of teeth whereas the palatal root had one canal in over 99% of the teeth studied.",
"title": ""
},
{
"docid": "0caac54baab8117c8b25b04bd7460f48",
"text": "ÐThis paper presents a new variational framework for detecting and tracking multiple moving objects in image sequences. Motion detection is performed using a statistical framework for which the observed interframe difference density function is approximated using a mixture model. This model is composed of two components, namely, the static (background) and the mobile (moving objects) one. Both components are zero-mean and obey Laplacian or Gaussian law. This statistical framework is used to provide the motion detection boundaries. Additionally, the original frame is used to provide the moving object boundaries. Then, the detection and the tracking problem are addressed in a common framework that employs a geodesic active contour objective function. This function is minimized using a gradient descent method, where a flow deforms the initial curve towards the minimum of the objective function, under the influence of internal and external image dependent forces. Using the level set formulation scheme, complex curves can be detected and tracked while topological changes for the evolving curves are naturally managed. To reduce the computational cost required by a direct implementation of the level set formulation scheme, a new approach named Hermes is proposed. Hermes exploits aspects from the well-known front propagation algorithms (Narrow Band, Fast Marching) and compares favorably to them. Very promising experimental results are provided using real video sequences. Index TermsÐFront propagation, geodesic active contours, level set theory, motion detection, tracking.",
"title": ""
},
{
"docid": "5637bed8be75d7e79a2c2adb95d4c28e",
"text": "BACKGROUND\nLimited evidence exists to show that adding a third agent to platinum-doublet chemotherapy improves efficacy in the first-line advanced non-small-cell lung cancer (NSCLC) setting. The anti-PD-1 antibody pembrolizumab has shown efficacy as monotherapy in patients with advanced NSCLC and has a non-overlapping toxicity profile with chemotherapy. We assessed whether the addition of pembrolizumab to platinum-doublet chemotherapy improves efficacy in patients with advanced non-squamous NSCLC.\n\n\nMETHODS\nIn this randomised, open-label, phase 2 cohort of a multicohort study (KEYNOTE-021), patients were enrolled at 26 medical centres in the USA and Taiwan. Patients with chemotherapy-naive, stage IIIB or IV, non-squamous NSCLC without targetable EGFR or ALK genetic aberrations were randomly assigned (1:1) in blocks of four stratified by PD-L1 tumour proportion score (<1% vs ≥1%) using an interactive voice-response system to 4 cycles of pembrolizumab 200 mg plus carboplatin area under curve 5 mg/mL per min and pemetrexed 500 mg/m2 every 3 weeks followed by pembrolizumab for 24 months and indefinite pemetrexed maintenance therapy or to 4 cycles of carboplatin and pemetrexed alone followed by indefinite pemetrexed maintenance therapy. The primary endpoint was the proportion of patients who achieved an objective response, defined as the percentage of patients with radiologically confirmed complete or partial response according to Response Evaluation Criteria in Solid Tumors version 1.1 assessed by masked, independent central review, in the intention-to-treat population, defined as all patients who were allocated to study treatment. Significance threshold was p<0·025 (one sided). Safety was assessed in the as-treated population, defined as all patients who received at least one dose of the assigned study treatment. This trial, which is closed for enrolment but continuing for follow-up, is registered with ClinicalTrials.gov, number NCT02039674.\n\n\nFINDINGS\nBetween Nov 25, 2014, and Jan 25, 2016, 123 patients were enrolled; 60 were randomly assigned to the pembrolizumab plus chemotherapy group and 63 to the chemotherapy alone group. 33 (55%; 95% CI 42-68) of 60 patients in the pembrolizumab plus chemotherapy group achieved an objective response compared with 18 (29%; 18-41) of 63 patients in the chemotherapy alone group (estimated treatment difference 26% [95% CI 9-42%]; p=0·0016). The incidence of grade 3 or worse treatment-related adverse events was similar between groups (23 [39%] of 59 patients in the pembrolizumab plus chemotherapy group and 16 [26%] of 62 in the chemotherapy alone group). The most common grade 3 or worse treatment-related adverse events in the pembrolizumab plus chemotherapy group were anaemia (seven [12%] of 59) and decreased neutrophil count (three [5%]); an additional six events each occurred in two (3%) for acute kidney injury, decreased lymphocyte count, fatigue, neutropenia, and sepsis, and thrombocytopenia. In the chemotherapy alone group, the most common grade 3 or worse events were anaemia (nine [15%] of 62) and decreased neutrophil count, pancytopenia, and thrombocytopenia (two [3%] each). 
One (2%) of 59 patients in the pembrolizumab plus chemotherapy group experienced treatment-related death because of sepsis compared with two (3%) of 62 patients in the chemotherapy group: one because of sepsis and one because of pancytopenia.\n\n\nINTERPRETATION\nCombination of pembrolizumab, carboplatin, and pemetrexed could be an effective and tolerable first-line treatment option for patients with advanced non-squamous NSCLC. This finding is being further explored in an ongoing international, randomised, double-blind, phase 3 study.\n\n\nFUNDING\nMerck & Co.",
"title": ""
},
{
"docid": "1667c7e872bac649051bb45fc85e9921",
"text": "Mobile devices are becoming increasingly sophisticated and now incorporate many diverse and powerful sensors. The latest generation of smart phones is especially laden with sensors, including GPS sensors, vision sensors (cameras), audio sensors (microphones), light sensors, temperature sensors, direction sensors (compasses), and acceleration sensors. In this paper we describe and evaluate a system that uses phone-based acceleration sensors, called accelerometers, to identify and authenticate cell phone users. This form of behavioral biométrie identification is possible because a person's movements form a unique signature and this is reflected in the accelerometer data that they generate. To implement our system we collected accelerometer data from thirty-six users as they performed normal daily activities such as walking, jogging, and climbing stairs, aggregated this time series data into examples, and then applied standard classification algorithms to the resulting data to generate predictive models. These models either predict the identity of the individual from the set of thirty-six users, a task we call user identification, or predict whether (or not) the user is a specific user, a task we call user authentication. This work is notable because it enables identification and authentication to occur unobtrusively, without the users taking any extra actions-all they need to do is carry their cell phones. There are many uses for this work. For example, in environments where sharing may take place, our work can be used to automatically customize a mobile device to a user. It can also be used to provide device security by enabling usage for only specific users and can provide an extra level of identity verification.",
"title": ""
},
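The passage above aggregates raw accelerometer time series into examples and applies standard classifiers for user identification. A minimal sketch of that pipeline follows; the window length, feature set (per-axis mean/std plus magnitude statistics) and the random-forest classifier are illustrative stand-ins rather than the study's exact choices.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc, window=200, step=200):
    """acc: (n_samples, 3) raw x/y/z accelerometer readings from one trace.
    Returns one feature row per window: per-axis mean/std plus magnitude stats."""
    rows = []
    for start in range(0, len(acc) - window + 1, step):
        w = acc[start:start + window]
        mag = np.linalg.norm(w, axis=1)
        rows.append(np.concatenate([w.mean(axis=0), w.std(axis=0),
                                    [mag.mean(), mag.std()]]))
    return np.array(rows)

def build_identification_model(sessions):
    """sessions: list of (user_id, (n, 3) accelerometer array) training traces."""
    X, y = [], []
    for user_id, acc in sessions:
        feats = window_features(acc)
        X.append(feats)
        y.extend([user_id] * len(feats))
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(np.vstack(X), np.array(y))
    return clf   # clf.predict(window_features(new_trace)) then identifies the user
```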
{
"docid": "12819e1ad6ca9b546e39ed286fe54d23",
"text": "This paper describes an efficient method to make individual faces for animation from several possible inputs. We present a method to reconstruct 3D facial model for animation from two orthogonal pictures taken from front and side views or from range data obtained from any available resources. It is based on extracting features on a face in a semiautomatic way and modifying a generic model with detected feature points. Then the fine modifications follow if range data is available. Automatic texture mapping is employed using a composed image from the two images. The reconstructed 3Dface can be animated immediately with given expression parameters. Several faces by one methodology applied to different input data to get a final animatable face are illustrated.",
"title": ""
},
{
"docid": "3f2aa3cde019d56240efba61d52592a4",
"text": "Drivers like global competition, advances in technology, and new attractive market opportunities foster a process of servitization and thus the search for innovative service business models. To facilitate this process, different methods and tools for the development of new business models have emerged. Nevertheless, business model approaches are missing that enable the representation of cocreation as one of the most important service-characteristics. Rooted in a cumulative research design that seeks to advance extant business model representations, this goal is to be closed by the Service Business Model Canvas (SBMC). This contribution comprises the application of thinking-aloud protocols for the formative evaluation of the SBMC. With help of industry experts and academics with experience in the service sector and business models, the usability is tested and implications for its further development derived. Furthermore, this study provides empirically based insights for the design of service business model representation that can facilitate the development of future business models.",
"title": ""
},
{
"docid": "612416cb82559f94d8d4b888bad17ba1",
"text": "Future plastic materials will be very different from those that are used today. The increasing importance of sustainability promotes the development of bio-based and biodegradable polymers, sometimes misleadingly referred to as 'bioplastics'. Because both terms imply \"green\" sources and \"clean\" removal, this paper aims at critically discussing the sometimes-conflicting terminology as well as renewable sources with a special focus on the degradation of these polymers in natural environments. With regard to the former we review innovations in feedstock development (e.g. microalgae and food wastes). In terms of the latter, we highlight the effects that polymer structure, additives, and environmental variables have on plastic biodegradability. We argue that the 'biodegradable' end-product does not necessarily degrade once emitted to the environment because chemical additives used to make them fit for purpose will increase the longevity. In the future, this trend may continue as the plastics industry also is expected to be a major user of nanocomposites. Overall, there is a need to assess the performance of polymer innovations in terms of their biodegradability especially under realistic waste management and environmental conditions, to avoid the unwanted release of plastic degradation products in receiving environments.",
"title": ""
},
{
"docid": "61ffc67f0e242afd8979d944cbe2bff4",
"text": "Diprosopus is a rare congenital malformation associated with high mortality. Here, we describe a patient with diprosopus, multiple life-threatening anomalies, and genetic mutations. Prenatal diagnosis and counseling made a beneficial impact on the family and medical providers in the care of this case.",
"title": ""
},
{
"docid": "a0cba009ac41ab57bdea75c1676715a6",
"text": "These notes provide a brief introduction to the theory of noncooperative differential games. After the Introduction, Section 2 reviews the theory of static games. Different concepts of solution are discussed, including Pareto optima, Nash and Stackelberg equilibria, and the co-co (cooperative-competitive) solutions. Section 3 introduces the basic framework of differential games for two players. Open-loop solutions, where the controls implemented by the players depend only on time, are considered in Section 4. It is shown that Nash and Stackelberg solutions can be computed by solving a two-point boundary value problem for a system of ODEs, derived from the Pontryagin maximum principle. Section 5 deals with solutions in feedback form, where the controls are allowed to depend on time and also on the current state of the system. In this case, the search for Nash equilibrium solutions usually leads to a highly nonlinear system of HamiltonJacobi PDEs. In dimension higher than one, this system is generically not hyperbolic and the Cauchy problem is thus ill posed. Due to this instability, closed-loop solutions to differential games are mainly considered in the special case with linear dynamics and quadratic costs. In Section 6, a game in continuous time is approximated by a finite sequence of static games, by a time discretization. Depending of the type of solution adopted in each static game, one obtains different concept of solutions for the original differential game. Section 7 deals with differential games in infinite time horizon, with exponentially discounted payoffs. In this case, the search for Nash solutions in feedback form leads to a system of time-independent H-J equations. Section 8 contains a simple example of a game with infinitely many players. This is intended to convey a flavor of the newly emerging theory of mean field games. Modeling issues, and directions of current research, are briefly discussed in Section 9. Finally, the Appendix collects background material on multivalued functions, selections and fixed point theorems, optimal control theory, and hyperbolic PDEs.",
"title": ""
},
{
"docid": "c576c08aa746ea30a528e104932047a6",
"text": "Despite tremendous progress achieved in temporal action localization, state-of-the-art methods still struggle to train accurate models when annotated data is scarce. In this paper, we introduce a novel active learning framework for temporal localization that aims to mitigate this data dependency issue. We equip our framework with active selection functions that can reuse knowledge from previously annotated datasets. We study the performance of two state-of-the-art active selection functions as well as two widely used active learning baselines. To validate the effectiveness of each one of these selection functions, we conduct simulated experiments on ActivityNet. We find that using previously acquired knowledge as a bootstrapping source is crucial for active learners aiming to localize actions. When equipped with the right selection function, our proposed framework exhibits significantly better performance than standard active learning strategies, such as uncertainty sampling. Finally, we employ our framework to augment the newly compiled Kinetics action dataset with ground-truth temporal annotations. As a result, we collect Kinetics-Localization, a novel large-scale dataset for temporal action localization, which contains more than 15K YouTube videos.",
"title": ""
},
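The passage above compares active selection functions, including the uncertainty-sampling baseline. A minimal uncertainty-sampling sketch is shown below; the probability matrix stands in for whatever scores the current localization model produces, and the toy numbers are purely illustrative.

```python
import numpy as np

def uncertainty_sampling(proba, budget):
    """Pick the `budget` unlabeled items the current model is least sure about.

    proba : (n_unlabeled, n_classes) predicted class probabilities from the
            current model (placeholder input).
    Uses entropy as the uncertainty measure; margin or least-confidence also work.
    """
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
    return np.argsort(-entropy)[:budget]   # indices to send for annotation next

# toy usage: 5 unlabeled videos, 3 action classes
proba = np.array([[0.98, 0.01, 0.01],
                  [0.40, 0.35, 0.25],
                  [0.70, 0.20, 0.10],
                  [0.34, 0.33, 0.33],
                  [0.90, 0.05, 0.05]])
print(uncertainty_sampling(proba, budget=2))   # the two most ambiguous videos (indices 3 and 1)
```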
{
"docid": "946c6b2dc7bd102597bd96a0d4a4f46e",
"text": "Due to the non-stationarity nature and poor signal-to-noise ratio (SNR) of brain signals, repeated time-consuming calibration is one of the biggest problems for today's brain-computer interfaces (BCIs). In order to reduce calibration time, many transfer learning methods have been proposed to extract discriminative or stationary information from other subjects or prior sessions for target classification task. In this paper, we review the existing transfer learning methods used for BCI classification problems and organize them into three cases based on different transfer strategies. Besides, we list the datasets used in these BCI studies.",
"title": ""
},
{
"docid": "7b5f0c88eaf8c23b8e2489e140d0022f",
"text": "Deep learning has been integrated into several existing left ventricle (LV) endocardium segmentation methods to yield impressive accuracy improvements. However, challenges remain for segmentation of LV epicardium due to its fuzzier appearance and complications from the right ventricular insertion points. Segmenting the myocardium collectively (i.e., endocardium and epicardium together) confers the potential for better segmentation results. In this work, we develop a computational platform based on deep learning to segment the whole LV myocardium simultaneously from a cardiac magnetic resonance (CMR) image. The deep convolutional network is constructed using Caffe platform, which consists of 6 convolutional layers, 2 pooling layers, and 1 de-convolutional layer. A preliminary result with Dice metric of 0.75±0.04 is reported on York MR dataset. While in its current form, our proposed one-step deep learning method cannot compete with state-of-art myocardium segmentation methods, it delivers promising first pass segmentation results.",
"title": ""
},
{
"docid": "cbbb2c0a9d2895c47c488bed46d8f468",
"text": "We propose a new generative language model for sentences that first samples a prototype sentence from the training corpus and then edits it into a new sentence. Compared to traditional language models that generate from scratch either left-to-right or by first sampling a latent sentence vector, our prototype-then-edit model improves perplexity on language modeling and generates higher quality outputs according to human evaluation. Furthermore, the model gives rise to a latent edit vector that captures interpretable semantics such as sentence similarity and sentence-level analogies.",
"title": ""
},
{
"docid": "ba0726778e194159d916c70f5f4cedc9",
"text": "We present a system for multimedia event detection. The developed system characterizes complex multimedia events based on a large array of multimodal features, and classifies unseen videos by effectively fusing diverse responses. We present three major technical innovations. First, we explore novel visual and audio features across multiple semantic granularities, including building, often in an unsupervised manner, mid-level and high-level features upon low-level features to enable semantic understanding. Second, we show a novel Latent SVM model which learns and localizes discriminative high-level concepts in cluttered video sequences. In addition to improving detection accuracy beyond existing approaches, it enables a unique summary for every retrieval by its use of high-level concepts and temporal evidence localization. The resulting summary provides some transparency into why the system classified the video as it did. Finally, we present novel fusion learning algorithms and our methodology to improve fusion learning under limited training data condition. Thorough evaluation on a large TRECVID MED 2011 dataset showcases the benefits of the presented system.",
"title": ""
}
] |
scidocsrr
|
d5a7fc54969981109e428edd33917bae
|
Vehicular cloud computing: A survey
|
[
{
"docid": "2171c57b911161d805ffc08fbe02f92a",
"text": "The past decade has witnessed a growing interest in vehicular networking and its vast array of potential applications. Increased wireless accessibility of the Internet from vehicles has triggered the emergence of vehicular safety applications, locationspecific applications, and multimedia applications. Recently, Professor Olariu and his coworkers have promoted the vision of Vehicular Clouds (VCs), a non-trivial extension, along several dimensions, of conventional Cloud Computing. In a VC, the under-utilized vehicular resources including computing power, storage and Internet connectivity can be shared between drivers or rented out over the Internet to various customers, very much as conventional cloud resources are. The goal of this chapter is to introduce and review the challenges and opportunities offered by what promises to be the Next Paradigm Shift:From Vehicular Networks to Vehicular Clouds. Specifically, the chapter introduces VCs and discusses some of their distinguishing characteristics and a number of fundamental research challenges. To illustrate the huge array of possible applications of the powerful VC concept, a number of possible application scenarios are presented and discussed. As the adoption and success of the vehicular cloud concept is inextricably related to security and privacy issues, a number of security and privacy issues specific to vehicular clouds are discussed as well. Additionally, data aggregation and empirical results are presented. Mobile Ad Hoc Networking: Cutting Edge Directions, Second Edition. Edited by Stefano Basagni, Marco Conti, Silvia Giordano, and Ivan Stojmenovic. © 2013 by The Institute of Electrical and Electronics Engineers, Inc. Published 2013 by John Wiley & Sons, Inc.",
"title": ""
}
] |
[
{
"docid": "d8079ff945eb0bd85da940f168409d00",
"text": "Cuckoo search is a modern bio-inspired metaheuristic that has successfully been used to solve different real world optimization problems. In particular, it has exhibited rapid convergence reaching considerable good results. In this paper, we employ this technique to solve the set covering problem, which is a well-known optimization benchmark. We illustrate interesting experimental results where the proposed algorithm is able to obtain several global optimums for different set covering instances from the OR-Library.",
"title": ""
},
{
"docid": "2065faf3e72a8853dd6cbba1daf9c64a",
"text": "One of a good overview all the output neurons. The fixed point attractors have resulted in order to the attractor furthermore. As well as memory classification and all the basic ideas. Introducing the form of strange attractors or licence agreement may be fixed point! The above with input produces and the techniques brought from one of cognitive processes. The study of cpgs is the, global dynamics as nearest neighbor classifiers. Attractor networks encode knowledge of the, network will be ergodic so. These synapses will be applicable exploring one interesting and neural networks other technology professionals.",
"title": ""
},
{
"docid": "05bc787d000ecf26c8185b084f8d2498",
"text": "Recommendation system is a type of information filtering systems that recommend various objects from a vast variety and quantity of items which are of the user interest. This results in guiding an individual in personalized way to interesting or useful objects in a large space of possible options. Such systems also help many businesses to achieve more profits to sustain in their filed against their rivals. But looking at the amount of information which a business holds it becomes difficult to identify the items of user interest. Therefore personalization or user profiling is one of the challenging tasks that give access to user relevant information which can be used in solving the difficult task of classification and ranking items according to an individual’s interest. Profiling can be done in various ways such as supervised or unsupervised, individual or group profiling, distributive or and non-distributive profiling. Our focus in this paper will be on the dataset which we will use, we identify some interesting facts by using Weka Tool that can be used for recommending the items from dataset .Our aim is to present a novel technique to achieve user profiling in recommendation system. KeywordsMachine Learning; Information Retrieval; User Profiling",
"title": ""
},
{
"docid": "a5d16384d928da7bcce7eeac45f59e2e",
"text": "Innovative rechargeable batteries that can effectively store renewable energy, such as solar and wind power, urgently need to be developed to reduce greenhouse gas emissions. All-solid-state batteries with inorganic solid electrolytes and electrodes are promising power sources for a wide range of applications because of their safety, long-cycle lives and versatile geometries. Rechargeable sodium batteries are more suitable than lithium-ion batteries, because they use abundant and ubiquitous sodium sources. Solid electrolytes are critical for realizing all-solid-state sodium batteries. Here we show that stabilization of a high-temperature phase by crystallization from the glassy state dramatically enhances the Na(+) ion conductivity. An ambient temperature conductivity of over 10(-4) S cm(-1) was obtained in a glass-ceramic electrolyte, in which a cubic Na(3)PS(4) crystal with superionic conductivity was first realized. All-solid-state sodium batteries, with a powder-compressed Na(3)PS(4) electrolyte, functioned as a rechargeable battery at room temperature.",
"title": ""
},
{
"docid": "1d72e3bbc8106a8f360c05bd0a638f0d",
"text": "Advancements in computer vision, natural language processing and deep learning techniques have resulted in the creation of intelligent systems that have achieved impressive results in the visually grounded tasks such as image captioning and visual question answering (VQA). VQA is a task that can be used to evaluate a system's capacity to understand an image. It requires an intelligent agent to answer a natural language question about an image. The agent must ground the question into the image and return a natural language answer. One of the latest techniques proposed to tackle this task is the attention mechanism. It allows the agent to focus on specific parts of the input in order to answer the question. In this paper we propose a novel long short-term memory (LSTM) architecture that uses dual attention to focus on specific question words and parts of the input image in order to generate the answer. We evaluate our solution on the recently proposed Visual 7W dataset and show that it performs better than state of the art. Additionally, we propose two new question types for this dataset in order to improve model evaluation. We also make a qualitative analysis of the results and show the strength and weakness of our agent.",
"title": ""
},
{
"docid": "7f0a721287ed05c67c5ecf1206bab4e6",
"text": "This study underlines the value of the brand personality and its influence on consumer’s decision making, through relational variables. An empirical study, in which 380 participants have received an SMS ad, confirms that brand personality does actually influence brand trust, brand attachment and brand commitment. The levels of brand sensitivity and involvement have also an impact on the brand personality and on its related variables.",
"title": ""
},
{
"docid": "192e1bd5baa067b563edb739c05decfa",
"text": "This paper presents a simple and accurate design methodology for LLC resonant converters, based on a semi- empirical approach to model steady-state operation in the \"be- low-resonance\" region. This model is framed in a design strategy that aims to design a converter capable of operating with soft-switching in the specified input voltage range with a load ranging from zero up to the maximum specified level.",
"title": ""
},
{
"docid": "3b5ef354f7ad216ca0bfcf893352bfce",
"text": "We offer the concept of multicommunicating to describe overlapping conversations, an increasingly common occurrence in the technology-enriched workplace. We define multicommunicating, distinguish it from other behaviors, and develop propositions for future research. Our work extends the literature on technology-stimulated restructuring and reveals one of the opportunities provided by lean media—specifically, an opportunity to multicommunicate. We conclude that the concept of multicommunicating has value both to the scholar and to the practicing manager.",
"title": ""
},
{
"docid": "3d335bfc7236ea3596083d8cae4f29e3",
"text": "OBJECTIVE\nTo summarise the applications and appropriate use of Dietary Reference Intakes (DRIs) as guidance for nutrition and health research professionals in the dietary assessment of groups and individuals.\n\n\nDESIGN\nKey points from the Institute of Medicine report, Dietary Reference Intakes: Applications in Dietary Assessment, are summarised in this paper. The different approaches for using DRIs to evaluate the intakes of groups vs. the intakes of individuals are highlighted.\n\n\nRESULTS\nEach of the new DRIs is defined and its role in the dietary assessment of groups and individuals is described. Two methods of group assessment and a new method for quantitative assessment of individuals are described. Illustrations are provided on appropriate use of the Estimated Average Requirement (EAR), the Adequate Intake (AI) and the Tolerable Upper Intake Level (UL) in dietary assessment.\n\n\nCONCLUSIONS\nDietary assessment of groups or individuals must be based on estimates of usual (long-term) intake. The EAR is the appropriate DRI to use in assessing groups and individuals. The AI is of limited value in assessing nutrient adequacy, and cannot be used to assess the prevalence of inadequacy. The UL is the appropriate DRI to use in assessing the proportion of a group at risk of adverse health effects. It is inappropriate to use the Recommended Dietary Allowance (RDA) or a group mean intake to assess the nutrient adequacy of groups.",
"title": ""
},
{
"docid": "433e7a8c4d4a16f562f9ae112102526e",
"text": "Although both extrinsic and intrinsic factors have been identified that orchestrate the differentiation and maturation of oligodendrocytes, less is known about the intracellular signaling pathways that control the overall commitment to differentiate. Here, we provide evidence that activation of the mammalian target of rapamycin (mTOR) is essential for oligodendrocyte differentiation. Specifically, mTOR regulates oligodendrocyte differentiation at the late progenitor to immature oligodendrocyte transition as assessed by the expression of stage specific antigens and myelin proteins including MBP and PLP. Furthermore, phosphorylation of mTOR on Ser 2448 correlates with myelination in the subcortical white matter of the developing brain. We demonstrate that mTOR exerts its effects on oligodendrocyte differentiation through two distinct signaling complexes, mTORC1 and mTORC2, defined by the presence of the adaptor proteins raptor and rictor, respectively. Disrupting mTOR complex formation via siRNA mediated knockdown of raptor or rictor significantly reduced myelin protein expression in vitro. However, mTORC2 alone controlled myelin gene expression at the mRNA level, whereas mTORC1 influenced MBP expression via an alternative mechanism. In addition, investigation of mTORC1 and mTORC2 targets revealed differential phosphorylation during oligodendrocyte differentiation. In OPC-DRG cocultures, inhibiting mTOR potently abrogated oligodendrocyte differentiation and reduced numbers of myelin segments. These data support the hypothesis that mTOR regulates commitment to oligodendrocyte differentiation before myelination.",
"title": ""
},
{
"docid": "c51e1b845d631e6d1b9328510ef41ea0",
"text": "Accurate interference models are important for use in transmission scheduling algorithms in wireless networks. In this work, we perform extensive modeling and experimentation on two 20-node TelosB motes testbeds -- one indoor and the other outdoor -- to compare a suite of interference models for their modeling accuracies. We first empirically build and validate the physical interference model via a packet reception rate vs. SINR relationship using a measurement driven method. We then similarly instantiate other simpler models, such as hop-based, range-based, protocol model, etc. The modeling accuracies are then evaluated on the two testbeds using transmission scheduling experiments. We observe that while the physical interference model is the most accurate, it is still far from perfect, providing a 90-percentile error about 20-25% (and 80 percentile error 7-12%), depending on the scenario. The accuracy of the other models is worse and scenario-specific. The second best model trails the physical model by roughly 12-18 percentile points for similar accuracy targets. Somewhat similar throughput performance differential between models is also observed when used with greedy scheduling algorithms. Carrying on further, we look closely into the the two incarnations of the physical model -- 'thresholded' (conservative, but typically considered in literature) and 'graded' (more realistic). We show via solving the one shot scheduling problem, that the graded version can improve `expected throughput' over the thresholded version by scheduling imperfect links.",
"title": ""
},
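The passage above contrasts a 'thresholded' physical interference model with a 'graded' PRR-vs-SINR model. The sketch below computes SINR for a link under concurrent interferers and maps it to a packet reception rate both ways; the noise floor, threshold and logistic parameters are illustrative, whereas the paper fits its curve from testbed measurements.

```python
import numpy as np

def sinr_db(p_signal_dbm, p_interferers_dbm, noise_dbm=-95.0):
    """SINR (dB) at a receiver: desired power over noise plus the sum of interferer powers."""
    mw = lambda dbm: 10 ** (np.asarray(dbm, dtype=float) / 10.0)   # dBm -> milliwatts
    denom_mw = mw(noise_dbm) + np.sum(mw(p_interferers_dbm))
    return 10.0 * np.log10(mw(p_signal_dbm) / denom_mw)

def prr_thresholded(sinr, threshold_db=4.0):
    """'Thresholded' physical model: the packet is received iff SINR clears a threshold."""
    return 1.0 if sinr >= threshold_db else 0.0

def prr_graded(sinr, midpoint_db=4.0, slope=1.5):
    """'Graded' model: smooth PRR-vs-SINR curve (logistic stand-in for the measured fit)."""
    return 1.0 / (1.0 + np.exp(-slope * (sinr - midpoint_db)))

s = sinr_db(-70.0, [-85.0, -88.0])
print(round(float(s), 1), prr_thresholded(s), round(float(prr_graded(s)), 2))
```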
{
"docid": "6831c633bf7359b8d22296b52a9a60b8",
"text": "The paper presents a system, Heart Track, which aims for automated ECG (Electrocardiogram) analysis. Different modules and algorithms which are proposed and used for implementing the system are discussed. The ECG is the recording of the electrical activity of the heart and represents the depolarization and repolarization of the heart muscle cells and the heart chambers. The electrical signals from the heart are measured non-invasively using skin electrodes and appropriate electronic measuring equipment. ECG is measured using 12 leads which are placed at specific positions on the body [2]. The required data is converted into ECG curve which possesses a characteristic pattern. Deflections from this normal ECG pattern can be used as a diagnostic tool in medicine in the detection of cardiac diseases. Diagnosis of large number of cardiac disorders can be predicted from the ECG waves wherein each component of the ECG wave is associated with one or the other disorder. This paper concentrates entirely on detection of Myocardial Infarction, hence only the related components (ST segment) of the ECG wave are analyzed.",
"title": ""
},
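The Heart Track passage above analyzes the ST segment for myocardial infarction detection. The sketch below is one rough way to measure ST deviation given an ECG trace and R-peak locations from an upstream QRS detector; the window offsets and the 0.1 mV figure in the trailing comment are common textbook values, not the system's actual parameters.

```python
import numpy as np

def st_deviation(ecg, r_peaks, fs):
    """Average ST-segment deviation, in the same units as the ECG signal.

    ecg     : 1-D ECG samples for a single lead
    r_peaks : indices of detected R peaks (from an upstream QRS detector)
    fs      : sampling rate in Hz
    Baseline is taken from the PR segment (~80-20 ms before R); the ST level is
    sampled ~80-120 ms after R (roughly 60-80 ms past the J point).
    """
    devs = []
    for r in r_peaks:
        pr = slice(r - int(0.08 * fs), r - int(0.02 * fs))
        st = slice(r + int(0.08 * fs), r + int(0.12 * fs))
        if pr.start < 0 or st.stop > len(ecg):
            continue   # skip beats too close to the record boundary
        devs.append(np.mean(ecg[st]) - np.mean(ecg[pr]))
    return float(np.mean(devs)) if devs else 0.0

# e.g. flag possible ST elevation when st_deviation(ecg, r_peaks, fs) exceeds the
# equivalent of ~0.1 mV in the relevant leads (clinical criteria vary by lead and age).
```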
{
"docid": "d8583f5409aa230236ba1748bd9ef7b3",
"text": "Policy gradient methods have enjoyed great success in deep reinforcement learning but suffer from high variance of gradient estimates. The high variance problem is particularly exasperated in problems with long horizons or high-dimensional action spaces. To mitigate this issue, we derive a bias-free action-dependent baseline for variance reduction which fully exploits the structural form of the stochastic policy itself and does not make any additional assumptions about the MDP. We demonstrate and quantify the benefit of the action-dependent baseline through both theoretical analysis as well as numerical results, including an analysis of the suboptimality of the optimal state-dependent baseline. The result is a computationally efficient policy gradient algorithm, which scales to high-dimensional control problems, as demonstrated by a synthetic 2000-dimensional target matching task. Our experimental results indicate that action-dependent baselines allow for faster learning on standard reinforcement learning benchmarks and highdimensional hand manipulation and synthetic tasks. Finally, we show that the general idea of including additional information in baselines for improved variance reduction can be extended to partially observed and multi-agent tasks.",
"title": ""
},
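The passage above is about reducing policy-gradient variance with baselines. For orientation, the sketch below implements plain REINFORCE with a state-dependent baseline, the simpler standard case; the paper's contribution is a bias-free action-dependent baseline, which is not shown here. The tabular softmax policy and step sizes are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def pg_update(theta, trajectories, baseline, lr=0.05, gamma=0.99):
    """One REINFORCE update with a state-dependent baseline b(s).

    theta        : (n_states, n_actions) logits of a tabular softmax policy
    trajectories : list of episodes, each a list of (state, action, reward) tuples
                   collected with the current policy
    baseline     : (n_states,) running value estimate, updated in place
    Subtracting b(s) from the return reduces variance without biasing the gradient.
    """
    grad = np.zeros_like(theta)
    for episode in trajectories:
        G, returns = 0.0, []
        for _, _, r in reversed(episode):
            G = r + gamma * G
            returns.append(G)
        returns.reverse()
        for (s, a, _), G in zip(episode, returns):
            adv = G - baseline[s]
            baseline[s] += 0.1 * adv            # running estimate of the state value
            pi = softmax(theta[s])
            dlog = -pi
            dlog[a] += 1.0                      # grad of log softmax w.r.t. the logits
            grad[s] += dlog * adv
    return theta + lr * grad / len(trajectories)
```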
{
"docid": "771834bc4bfe8231fe0158ec43948bae",
"text": "Semantic image segmentation has recently witnessed considerable progress by training deep convolutional neural networks (CNNs). The core issue of this technique is the limited capacity of CNNs to depict visual objects. Existing approaches tend to utilize approximate inference in a discrete domain or additional aides and do not have a global optimum guarantee. We propose the use of the multi-label manifold ranking (MR) method in solving the linear objective energy function in a continuous domain to delineate visual objects and solve these problems. We present a novel embedded single stream optimization method based on the MR model to avoid approximations without sacrificing expressive power. In addition, we propose a novel network, which we refer to as dual multi-scale manifold ranking (DMSMR) network, that combines the dilated, multi-scale strategies with the single stream MR optimization method in the deep learning architecture to further improve the performance. Experiments on high resolution images, including close-range and remote sensing datasets, demonstrate that the proposed approach can achieve competitive accuracy without additional aides in an end-to-end manner.",
"title": ""
},
{
"docid": "70fafdedd05a40db5af1eabdf07d431c",
"text": "Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground true data. Convolutional networks are employed to automatically detect the LV chamber in MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the art methods. Excellent agreement with the ground truth was achieved. Validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus those of 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78, obtained by other methods, respectively.",
"title": ""
},
{
"docid": "59308c5361d309568a94217c79cf0908",
"text": "Want to get experience? Want to get any ideas to create new things in your life? Read cryptography an introduction to computer security now! By reading this book as soon as possible, you can renew the situation to get the inspirations. Yeah, this way will lead you to always think more and more. In this case, this book will be always right for you. When you can observe more about the book, you will know why you need this.",
"title": ""
},
{
"docid": "8b3557219674c8441e63e9b0ab459c29",
"text": "his paper is focused on comparison of various decision tree classification algorithms using WEKA tool. Data mining tools such as classification, clustering, association and neural network solve large amount of problem. These are all open source tools, we directly communicate with each tool or by java code. In this paper we discuss on classification technique of data mining. In classification, various techniques are present such as bayes, functions, lazy, rules and tree etc. . Decision tree is one of the most frequently used classification algorithm. Decision tree classification with Waikato Environment for Knowledge Analysis (WEKA) is the simplest way to mining information from huge database. This work shows the process of WEKA analysis of file converts, step by step process of weka execution, selection of attributes to be mined and comparison with Knowledge Extraction of Evolutionary Learning . I took database [1] and execute in weka software. The conclusion of the paper shows the comparison among all type of decision tree algorithms by weka tool.",
"title": ""
},
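The passage above compares WEKA's decision-tree learners. WEKA itself is Java-based; as an analogous workflow (a stand-in, not the paper's setup), the sketch below cross-validates several decision-tree configurations with scikit-learn and reports mean accuracy on a built-in dataset.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Analogue of comparing WEKA's tree learners: evaluate several decision-tree
# configurations with 10-fold cross-validation and report mean accuracy.
X, y = load_iris(return_X_y=True)
candidates = {
    "gini, unpruned":    DecisionTreeClassifier(criterion="gini", random_state=0),
    "entropy, unpruned": DecisionTreeClassifier(criterion="entropy", random_state=0),
    "gini, depth<=3":    DecisionTreeClassifier(criterion="gini", max_depth=3, random_state=0),
    "entropy, depth<=3": DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0),
}
for name, clf in candidates.items():
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"{name:20s} mean accuracy = {scores.mean():.3f} (+/- {scores.std():.3f})")
```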
{
"docid": "686585ee0ab55dfeaa98efef5b496035",
"text": "This paper presents an embedded adaptive robust controller for trajectory tracking and stabilization of an omnidirectional mobile platform with parameter variations and uncertainties caused by friction and slip. Based on a dynamic model of the platform, the adaptive controller to achieve point stabilization, trajectory tracking, and path following is synthesized via the adaptive backstepping approach. This robust adaptive controller is then implemented into a high-performance field-programmable gate array chip using hardware/software codesign technique and system-on-a-programmable-chip design concept with a reusable user intellectual property core library. Furthermore, a soft-core processor and a real-time operating system are embedded into the same chip for realizing the control law to steer the mobile platform. Simulation results are conducted to show the effectiveness and merit of the proposed control method in comparison with a conventional proportional-integral feedback controller. The performance and applicability of the proposed embedded adaptive controller are exemplified by conducting several experiments on an autonomous omnidirectional mobile robot.",
"title": ""
},
{
"docid": "66a72238b7e9470eef9584c7018bb20e",
"text": "Enamel thickness of the maxillary permanent central incisors and canines in seven Finnish 47,XXX females, their first-degree male and female relatives, and control males and females from the general population were determined from radiographs. The results showed that enamel in the teeth of 47,XXX females was clearly thicker than that of normal controls. On the other hand, the thickness of “dentin” (distance between mesial and distal dentinoenamel junctions) in 47,XXX females' teeth was about the same as that in normal control females, but clearly reduced as compared with that in control males. It is therefore obvious that in the triple-X chromosome complement the extra X chromosome is active in amelogenesis, whereas it has practically no influence on the growth of dentin. The calculations based on present and previous results in 45,X females and 47,XYY males indicate that the X chromosome increases metric enamel growth somewhat more effectively than the Y chromosome. Possibly, halfway states exist between active and repressed enamel genes on the X chromosome. The Y chromosome seems to promote dental growth in a holistic fashion.",
"title": ""
}
] |
scidocsrr
|
04672b593dc0f356a1ef1e33aa86409f
|
Personalized search result diversification via structured learning
|
[
{
"docid": "27029a5e18e5d874606a87f0d238cd14",
"text": "User behavior provides many cues to improve the relevance of search results through personalization. One aspect of user behavior that provides especially strong signals for delivering better relevance is an individual's history of queries and clicked documents. Previous studies have explored how short-term behavior or long-term behavior can be predictive of relevance. Ours is the first study to assess how short-term (session) behavior and long-term (historic) behavior interact, and how each may be used in isolation or in combination to optimally contribute to gains in relevance through search personalization. Our key findings include: historic behavior provides substantial benefits at the start of a search session; short-term session behavior contributes the majority of gains in an extended search session; and the combination of session and historic behavior out-performs using either alone. We also characterize how the relative contribution of each model changes throughout the duration of a session. Our findings have implications for the design of search systems that leverage user behavior to personalize the search experience.",
"title": ""
}
] |
[
{
"docid": "894cfbb522a356bba407481bd051d834",
"text": "We propose a novel method to handle thin structures in Image-Based Rendering (IBR), and specifically structures supported by simple geometric shapes such as planes, cylinders, etc. These structures, e.g. railings, fences, oven grills etc, are present in many man-made environments and are extremely challenging for multi-view 3D reconstruction, representing a major limitation of existing IBR methods. Our key insight is to exploit multi-view information. After a handful of user clicks to specify the supporting geometry, we compute multi-view and multi-layer alpha mattes to extract the thin structures. We use two multi-view terms in a graph-cut segmentation, the first based on multi-view foreground color prediction and the second ensuring multiview consistency of labels. Occlusion of the background can challenge reprojection error calculation and we use multiview median images and variance, with multiple layers of thin structures. Our end-to-end solution uses the multi-layer segmentation to create per-view mattes and the median colors and variance to create a clean background. We introduce a new multi-pass IBR algorithm based on depth-peeling to allow free-viewpoint navigation of multi-layer semi-transparent thin structures. Our results show significant improvement in rendering quality for thin structures compared to previous image-based rendering solutions.",
"title": ""
},
{
"docid": "4d56abf003caaa11e5bef74a14bd44e0",
"text": "The increasing importance of search engines to commercial web sites has given rise to a phenomenon we call \"web spam\", that is, web pages that exist only to mislead search engines into (mis)leading users to certain web sites. Web spam is a nuisance to users as well as search engines: users have a harder time finding the information they need, and search engines have to cope with an inflated corpus, which in turn causes their cost per query to increase. Therefore, search engines have a strong incentive to weed out spam web pages from their index.We propose that some spam web pages can be identified through statistical analysis: Certain classes of spam pages, in particular those that are machine-generated, diverge in some of their properties from the properties of web pages at large. We have examined a variety of such properties, including linkage structure, page content, and page evolution, and have found that outliers in the statistical distribution of these properties are highly likely to be caused by web spam.This paper describes the properties we have examined, gives the statistical distributions we have observed, and shows which kinds of outliers are highly correlated with web spam.",
"title": ""
},
{
"docid": "0cc16f8fe35cbf169de8263236d08166",
"text": "In this paper, we revisit a generally accepted opinion: implementing Elliptic Curve Cryptosystem (ECC) over GF (2) on sensor motes using small word size is not appropriate because XOR multiplication over GF (2) is not efficiently supported by current low-powered microprocessors. Although there are some implementations over GF (2) on sensor motes, their performances are not satisfactory enough to be used for wireless sensor networks (WSNs). We have found that a field multiplication over GF (2) are involved in a number of redundant memory accesses and its inefficiency is originated from this problem. Moreover, the field reduction process also requires many redundant memory accesses. Therefore, we propose some techniques for reducing unnecessary memory accesses. With the proposed strategies, the running time of field multiplication and reduction over GF (2) can be decreased by 21.1% and 24.7%, respectively. These savings noticeably decrease execution times spent in Elliptic Curve Digital Signature Algorithm (ECDSA) operations (signing and verification) by around 15% ∼ 19%. We present TinyECCK (Tiny Elliptic Curve Cryptosystem with Koblitz curve – a kind of TinyOS package supporting elliptic curve operations) which is the fastest ECC implementation over GF (2) on 8-bit sensor motes using ATmega128L as far as we know. Through comparisons with existing software implementations of ECC built in C or hybrid of C and inline assembly on sensor motes, we show that TinyECCK outperforms them in terms of running time, code size and supporting services. Furthermore, we show that a field multiplication over GF (2) can be faster than that over GF (p) on 8-bit ATmega128L processor by comparing TinyECCK with TinyECC, a well-known ECC implementation over GF (p). TinyECCK with sect163k1 can compute a scalar multiplication within 1.14 secs on a MICAz mote at the expense of 5,592-byte of ROM and 618-byte of RAM. Furthermore, it can also generate a signature and verify it in 1.37 and 2.32 secs with 13,748-byte of ROM and 1,004-byte of RAM. 2 Seog Chung Seo et al.",
"title": ""
},
{
"docid": "f12c53ede3ef1cbab2641970aacbe16f",
"text": "Considerable advances have been achieved in estimating the depth map from a single image via convolutional neural networks (CNNs) during the past few years. Combining depth prediction from CNNs with conventional monocular simultaneous localization and mapping (SLAM) is promising for accurate and dense monocular reconstruction, in particular addressing the two long-standing challenges in conventional monocular SLAM: low map completeness and scale ambiguity. However, depth estimated by pretrained CNNs usually fails to achieve sufficient accuracy for environments of different types from the training data, which are common for certain applications such as obstacle avoidance of drones in unknown scenes. Additionally, inaccurate depth prediction of CNN could yield large tracking errors in monocular SLAM. In this paper, we present a real-time dense monocular SLAM system, which effectively fuses direct monocular SLAM with an online-adapted depth prediction network for achieving accurate depth prediction of scenes of different types from the training data and providing absolute scale information for tracking and mapping. Specifically, on one hand, tracking pose (i.e., translation and rotation) from direct SLAM is used for selecting a small set of highly effective and reliable training images, which acts as ground truth for tuning the depth prediction network on-the-fly toward better generalization ability for scenes of different types. A stage-wise Stochastic Gradient Descent algorithm with a selective update strategy is introduced for efficient convergence of the tuning process. On the other hand, the dense map produced by the adapted network is applied to address scale ambiguity of direct monocular SLAM which in turn improves the accuracy of both tracking and overall reconstruction. The system with assistance of both CPUs and GPUs, can achieve real-time performance with progressively improved reconstruction accuracy. Experimental results on public datasets and live application to obstacle avoidance of drones demonstrate that our method outperforms the state-of-the-art methods with greater map completeness and accuracy, and a smaller tracking error.",
"title": ""
},
{
"docid": "4f511a669a510153aa233d90da4e406a",
"text": "In many visual surveillance applications the task of person detection and localization can be solved easier by using thermal long-wave infrared (LWIR) cameras which are less affected by changing illumination or background texture than visual-optical cameras. Especially in outdoor scenes where usually only few hot spots appear in thermal infrared imagery, humans can be detected more reliably due to their prominent infrared signature. We propose a two-stage person recognition approach for LWIR images: (1) the application of Maximally Stable Extremal Regions (MSER) to detect hot spots instead of background subtraction or sliding window and (2) the verification of the detected hot spots using a Discrete Cosine Transform (DCT) based descriptor and a modified Random Naïve Bayes (RNB) classifier. The main contributions are the novel modified RNB classifier and the generality of our method. We achieve high detection rates for several different LWIR datasets with low resolution videos in real-time. While many papers in this topic are dealing with strong constraints such as considering only one dataset, assuming a stationary camera, or detecting only moving persons, we aim at avoiding such constraints to make our approach applicable with moving platforms such as Unmanned Ground Vehicles (UGV).",
"title": ""
},
{
"docid": "bfca88df9d719b1927e94b0beadb32bc",
"text": "This paper proposes a new intelligent fashion recommender system to select the most relevant garment design scheme for a specific consumer in order to deliver new personalized garment products. This system integrates emotional fashion themes and human perception on personalized body shapes and professional designers' knowledge. The corresponding perceptual data are systematically collected from professional using sensory evaluation techniques. The perceptual data of consumers and designers are formalized mathematically using fuzzy sets and fuzzy relations. The complex relation between human body measurements and basic sensory descriptors, provided by designers, is modeled using fuzzy decision trees. The fuzzy decision trees constitute an empirical model based on learning data measured and evaluated on a set of representative samples. The complex relation between basic sensory descriptors and fashion themes, given by consumers, is modeled using fuzzy cognitive maps. The combination of the two models can provide more complete information to the fashion recommender system, making it possible to evaluate if a specific body shape is relevant to a desired emotional fashion theme and which garment design scheme can improve the image of the body shape. The proposed system has been validated in a customized design and mass market selection through the evaluations of target consumers and fashion experts using a method frequently used in marketing study.",
"title": ""
},
{
"docid": "af22932b48a2ea64ecf3e5ba1482564d",
"text": "Collaborative embedded systems (CES) heavily rely on information models to understand the contextual situations they are exposed to. These information models serve different purposes. First, during development time it is necessary to model the context for eliciting and documenting the requirements that a CES is supposed to achieve. Second, information models provide information to simulate different contextual situations and CES ́s behavior in these situations. Finally, CESs need information models about their context during runtime in order to react to different contextual situations and exchange context information with other CESs. Heavyweight ontologies, based on Ontology Web Language (OWL), have already proven suitable for representing knowledge about contextual situations during runtime. Furthermore, lightweight ontologies (e.g. class diagrams) have proven their practicality for creating domain specific languages for requirements documentation. However, building an ontology (lightor heavyweight) is a non-trivial task that needs to be integrated into development methods for CESs such that it serves the above stated purposes in a seamless way. This paper introduces the requirements for the building of ontologies and proposes a method that is integrated into the engineering of CESs.",
"title": ""
},
{
"docid": "0dfcbae479f0af59236a5213cb37983a",
"text": "The objective of this work is to detect the use of automated programs, known as game bots, based on social interactions in MMORPGs. Online games, especially MMORPGs, have become extremely popular among internet users in the recent years. Not only the popularity but also security threats such as the use of game bots and identity theft have grown manifold. As bot players can obtain unjustified assets without corresponding efforts, the gaming community does not allow players to use game bots. However, the task of identifying game bots is not an easy one because of the velocity and variety of their evolution in mimicking human behavior. Existing methods for detecting game bots have a few drawbacks like reducing immersion of players, low detection accuracy rate, and collision with other security programs. We propose a novel method for detecting game bots based on the fact that humans and game bots tend to form their social network in contrasting ways. In this work we focus particularly on the in game mentoring network from amongst several social networks. We construct a couple of new features based on eigenvector centrality to capture this intuition and establish their importance for detecting game bots. The results show a significant increase in the classification accuracy of various classifiers with the introduction of these features.",
"title": ""
},
{
"docid": "a8614b86b55411d43d5cc863fcf8ca9c",
"text": "This paper introduces a survey of different maximum peak power tracking (MPPT) techniques used in the implementation of photovoltaic power systems. It will discuss different 30 techniques used in tracking maximum power in photovoltaic arrays. This paper can be considered as a completion, updating, and declaration of the good efforts made in [3], that discussed 19 MPPT techniques in PV systems, while summarizes additional 11 MPPT methods.",
"title": ""
},
{
"docid": "d4345ee2baaa016fc38ba160e741b8ee",
"text": "Unstructured data, such as news and blogs, can provide valuable insights into the financial world. We present the NewsStream portal, an intuitive and easy-to-use tool for news analytics, which supports interactive querying and visualizations of the documents at different levels of detail. It relies on a scalable architecture for real-time processing of a continuous stream of textual data, which incorporates data acquisition, cleaning, natural-language preprocessing and semantic annotation components. It has been running for over two years and collected over 18 million news articles and blog posts. The NewsStream portal can be used to answer the questions when, how often, in what context, and with what sentiment was a financial entity or term mentioned in a continuous stream of news and blogs, and therefore providing a complement to news aggregators. We illustrate some features of our system in four use cases: relations between the rating agencies and the PIIGS countries, reflection of financial news on credit default swap (CDS) prices, the emergence of the Bitcoin digital currency, and visualizing how the world is connected through news.",
"title": ""
},
{
"docid": "33f53ba19c1198fc2342960c57dd22f8",
"text": "This paper reports on a facile and low cost method to fabricate highly stretchable potentiometric pH sensor arrays for biomedical and wearable applications. The technique uses laser carbonization of a thermoset polymer followed by transfer and embedment of carbonized nanomaterial onto an elastomeric matrix. The process combines selective laser pyrolization/carbonization with meander interconnect methodology to fabricate stretchable conductive composites with which pH sensors can be realized. The stretchable pH sensors display a sensitivity of -51 mV/pH over the clinically-relevant range of pH 4-10. The sensors remain stable for strains of up to 50 %.",
"title": ""
},
{
"docid": "7eb4e5b88843d81390c14aae2a90c30b",
"text": "A low-power, high-speed, but with a large input dynamic range and output swing class-AB output buffer circuit, which is suitable for the flat-panel display application, is proposed. The circuit employs an elegant comparator to sense the transients of the input to turn on charging/discharging transistors, thus draws little current during static, but has an improved driving capability during transients. It is demonstrated in a 0.6 m CMOS technology.",
"title": ""
},
{
"docid": "a2799e0cee6ca6d7f6b0cc230957b56b",
"text": "We present a photo-realistic training and evaluation simulator (UE4Sim) with extensive applications across various fields of computer vision. Built on top of the Unreal Engine, the simulator integrates full featured physics based cars, unmanned aerial vehicles (UAVs), and animated human actors in diverse urban and suburban 3D environments. We demonstrate the versatility of the simulator with two case studies: autonomous UAV-based tracking of moving objects and autonomous driving using supervised learning. The simulator fully integrates both several state-of-the-art tracking algorithms with a benchmark evaluation tool and a deep neural network (DNN) architecture for training vehicles to drive autonomously. It generates synthetic photo-realistic datasets with automatic ground truth annotations to easily extend existing real-world datasets and provides extensive synthetic data variety through its ability to reconfigure synthetic worlds on the fly using an automatic world generation tool.",
"title": ""
},
{
"docid": "b2246b58bb9fb6c6ff58115e25da49dc",
"text": "Human action in video sequences can be seen as silhouettes of a moving torso and protruding limbs undergoing articulated motion. We regard human actions as three-dimensional shapes induced by the silhouettes in the space-time volume. We adopt a recent approach by Gorelick et al. (2004) for analyzing 2D shapes and generalize it to deal with volumetric space-time action shapes. Our method utilizes properties of the solution to the Poisson equation to extract space-time features such as local space-time saliency, action dynamics, shape structure and orientation. We show that these features are useful for action recognition, detection and clustering. The method is fast, does not require video alignment and is applicable in (but not limited to) many scenarios where the background is known. Moreover, we demonstrate the robustness of our method to partial occlusions, non-rigid deformations, significant changes in scale and viewpoint, high irregularities in the performance of an action and low quality video",
"title": ""
},
{
"docid": "c9acadfba9aa66ef6e7f4bc1d86943f6",
"text": "We propose a new saliency detection model by combining global information from frequency domain analysis and local information from spatial domain analysis. In the frequency domain analysis, instead of modeling salient regions, we model the nonsalient regions using global information; these so-called repeating patterns that are not distinctive in the scene are suppressed by using spectrum smoothing. In spatial domain analysis, we enhance those regions that are more informative by using a center-surround mechanism similar to that found in the visual cortex. Finally, the outputs from these two channels are combined to produce the saliency map. We demonstrate that the proposed model has the ability to highlight both small and large salient regions in cluttered scenes and to inhibit repeating objects. Experimental results also show that the proposed model outperforms existing algorithms in predicting objects regions where human pay more attention.",
"title": ""
},
{
"docid": "4f57590f8bbf00d35b86aaa1ff476fc0",
"text": "Pedestrian detection has been used in applications such as car safety, video surveillance, and intelligent vehicles. In this paper, we present a pedestrian detection scheme using HOG, LUV and optical flow features with AdaBoost Decision Stump classifier. Our experiments on Caltech-USA pedestrian dataset show that the proposed scheme achieves promising results of about 16.7% log-average miss rate.",
"title": ""
},
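The passage above names HOG, LUV, and optical-flow features with an AdaBoost decision-stump classifier. The following is a minimal, hypothetical Python sketch of that kind of pipeline, restricted to HOG features on placeholder data; it is an illustration of the general technique, not the authors' implementation, and the image size, HOG parameters, and number of estimators are assumptions.

```python
# Minimal sketch (assumed parameters): AdaBoost with decision-stump weak learners
# trained on HOG features only. The LUV and optical-flow channels mentioned in the
# passage are omitted, and random images stand in for real pedestrian/background crops.
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

def hog_features(img):
    # 9-bin HOG over 8x8-pixel cells with 2x2-cell blocks, a common pedestrian setup
    return hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Placeholder data: 64x128 grayscale crops, label 1 = pedestrian, 0 = background.
images = rng.random((200, 128, 64))
labels = rng.integers(0, 2, size=200)

X = np.array([hog_features(im) for im in images])

# scikit-learn's default weak learner for AdaBoost is a depth-1 decision tree,
# i.e. a decision stump, matching the classifier named in the passage.
clf = AdaBoostClassifier(n_estimators=100).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```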
{
"docid": "1c3d933680ed75a1e228f5170dae8847",
"text": "Visualization is a critical component of neuroimaging, and how to best view data that is naturally three dimensional is a long standing question in neuroscience. Many approaches, programs, and techniques have been developed specifically for neuroimaging. However, exploration of 3D information through a 2D screen is inherently limited. Many neuroscientific researchers hope that with the recent commercialization and popularization of VR, it can offer the next-step in data visualization and exploration. Neuro Imaging in Virtual Reality (NIVR), is a visualization suite that employs various immersive visualizations to represent neuroimaging information in VR. Some established techniques, such as raymarching volume visualization, are paired with newer techniques, such as near-field rendering, to provide a broad basis of how we can leverage VR to improve visualization and navigation of neuroimaging data. Several of the neuroscientific visualization approaches presented are, to our knowledge, the first of their kind. NIVR offers not only an exploration of neuroscientific data visualization, but also a tool to expose and educate the public regarding recent advancements in the field of neuroimaging. By providing an engaging experience to explore new techniques and discoveries in neuroimaging, we hope to spark scientific interest through a broad audience. Furthermore, neuroimaging offers deep and expansive datasets; a single scan can involve several gigabytes of information. Visualization and exploration of this type of information can be challenging, and real-time exploration of this information in VR even more so. NIVR explores pathways which make this possible, and offers preliminary stereo visualizations of these types of massive data.",
"title": ""
},
{
"docid": "abba5d320a4b6bf2a90ba2b836019660",
"text": "We aim at segmenting small organs (e.g., the pancreas) from abdominal CT scans. As the target often occupies a relatively small region in the input image, deep neural networks can be easily confused by the complex and variable background. To alleviate this, researchers proposed a coarse-to-fine approach [46], which used prediction from the first (coarse) stage to indicate a smaller input region for the second (fine) stage. Despite its effectiveness, this algorithm dealt with two stages individually, which lacked optimizing a global energy function, and limited its ability to incorporate multi-stage visual cues. Missing contextual information led to unsatisfying convergence in iterations, and that the fine stage sometimes produced even lower segmentation accuracy than the coarse stage. This paper presents a Recurrent Saliency Transformation Network. The key innovation is a saliency transformation module, which repeatedly converts the segmentation probability map from the previous iteration as spatial weights and applies these weights to the current iteration. This brings us two-fold benefits. In training, it allows joint optimization over the deep networks dealing with different input scales. In testing, it propagates multi-stage visual information throughout iterations to improve segmentation accuracy. Experiments in the NIH pancreas segmentation dataset demonstrate the state-of-the-art accuracy, which outperforms the previous best by an average of over 2%. Much higher accuracies are also reported on several small organs in a larger dataset collected by ourselves. In addition, our approach enjoys better convergence properties, making it more efficient and reliable in practice.",
"title": ""
},
{
"docid": "ef2cee9972d6d0b84736ff7a0da8995c",
"text": "The materials discovery process can be significantly expedited and simplified if we can learn effectively from available knowledge and data. In the present contribution, we show that efficient and accurate prediction of a diverse set of properties of material systems is possible by employing machine (or statistical) learning methods trained on quantum mechanical computations in combination with the notions of chemical similarity. Using a family of one-dimensional chain systems, we present a general formalism that allows us to discover decision rules that establish a mapping between easily accessible attributes of a system and its properties. It is shown that fingerprints based on either chemo-structural (compositional and configurational information) or the electronic charge density distribution can be used to make ultra-fast, yet accurate, property predictions. Harnessing such learning paradigms extends recent efforts to systematically explore and mine vast chemical spaces, and can significantly accelerate the discovery of new application-specific materials.",
"title": ""
},
{
"docid": "cdf0d800c122ff8a64d8fca7386cbfd8",
"text": "Digital wireless communication applications such as UWB and WPAN necessitate low-power high-speed ADCs to convert RF/IF signals into digital form for subsequent baseband processing. Considering latency and conversion speed, flash ADCs are often the most preferred option. Generally, flash ADCs suffer from high power consumption and large area overhead. On the contrary, SAR ADCs have low power dissipation and occupy a small area. However, a SAR ADC needs several comparison cycles to complete one conversion, which limits its conversion speed. The highest single-channel operation speed of previously reported SAR ADCs is 625MS/s [1]. The ADC in [1] utilizes a 2b/step structure. For non-multi-bit/step SAR ADCs, the highest reported conversion rate is 300MS/s [2]. The structure of a comparator-based binary-search ADC is between that of flash and SAR ADCs [3]. Compared to a flash ADC (high speed, high power) and a SAR ADC (low speed, low power), a binary-search ADC achieves balance between operation speed and power consumption. This paper reports a 5b asynchronous binary-search ADC with reference-range prediction. The maximum conversion speed of this ADC is 800MS/s at a cost of 2mW power consumption.",
"title": ""
}
] |
scidocsrr
|
062c6213d3088cc649d46eaaf7098182
|
Prevalence of rheumatic heart disease in children and young adults in Nicaragua.
|
[
{
"docid": "9482ff4d9bfda5dceabac593106c6442",
"text": "BACKGROUND\nEpidemiologic studies of the prevalence of rheumatic heart disease have used clinical screening with echocardiographic confirmation of suspected cases. We hypothesized that echocardiographic screening of all surveyed children would show a significantly higher prevalence of rheumatic heart disease.\n\n\nMETHODS\nRandomly selected schoolchildren from 6 through 17 years of age in Cambodia and Mozambique were screened for rheumatic heart disease according to standard clinical and echocardiographic criteria.\n\n\nRESULTS\nClinical examination detected rheumatic heart disease that was confirmed by echocardiography in 8 of 3677 children in Cambodia and 5 of 2170 children in Mozambique; the corresponding prevalence rates and 95% confidence intervals (CIs) were 2.2 cases per 1000 (95% CI, 0.7 to 3.7) for Cambodia and 2.3 cases per 1000 (95% CI, 0.3 to 4.3) for Mozambique. In contrast, echocardiographic screening detected 79 cases of rheumatic heart disease in Cambodia and 66 cases in Mozambique, corresponding to prevalence rates of 21.5 cases per 1000 (95% CI, 16.8 to 26.2) and 30.4 cases per 1000 (95% CI, 23.2 to 37.6), respectively. The mitral valve was involved in the great majority of cases (87.3% in Cambodia and 98.4% in Mozambique).\n\n\nCONCLUSIONS\nSystematic screening with echocardiography, as compared with clinical screening, reveals a much higher prevalence of rheumatic heart disease (approximately 10 times as great). Since rheumatic heart disease frequently has devastating clinical consequences and secondary prevention may be effective after accurate identification of early cases, these results have important public health implications.",
"title": ""
}
] |
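As a quick arithmetic check of the figures quoted in the passage above, the sketch below recomputes the echocardiographic prevalence per 1000 children and the 95% confidence intervals using the standard normal approximation for a binomial proportion; it reproduces the reported 21.5 (16.8 to 26.2) and 30.4 (23.2 to 37.6) per 1000 and is an illustration, not code from the study.

```python
# Worked check of the prevalence figures: p = cases/n, with a normal-approximation
# 95% confidence interval p +/- 1.96 * sqrt(p*(1-p)/n), all scaled to "per 1000".
import math

def prevalence_per_1000(cases, n, z=1.96):
    p = cases / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return 1000 * p, 1000 * (p - half_width), 1000 * (p + half_width)

# Cambodia: 79 echo-confirmed cases among 3677 children -> ~21.5 (16.8-26.2) per 1000
print(prevalence_per_1000(79, 3677))
# Mozambique: 66 cases among 2170 children -> ~30.4 (23.2-37.6) per 1000
print(prevalence_per_1000(66, 2170))
```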
[
{
"docid": "081da5941b0431d00b4058c26987d43f",
"text": "Artificial bee colony algorithm simulating the intelligent foraging behavior of honey bee swarms is one of the most popular swarm based optimization algorithms. It has been introduced in 2005 and applied in several fields to solve different problems up to date. In this paper, an artificial bee colony algorithm, called as Artificial Bee Colony Programming (ABCP), is described for the first time as a new method on symbolic regression which is a very important practical problem. Symbolic regression is a process of obtaining a mathematical model using given finite sampling of values of independent variables and associated values of dependent variables. In this work, a set of symbolic regression benchmark problems are solved using artificial bee colony programming and then its performance is compared with the very well-known method evolving computer programs, genetic programming. The simulation results indicate that the proposed method is very feasible and robust on the considered test problems of symbolic regression. 2012 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "c388c22f5d97fc172187ba1fd352cef0",
"text": "Analysis of a driver's head behavior is an integral part of a driver monitoring system. In particular, the head pose and dynamics are strong indicators of a driver's focus of attention. Many existing state-of-the-art head dynamic analyzers are, however, limited to single-camera perspectives, which are susceptible to occlusion of facial features from spatially large head movements away from the frontal pose. Nonfrontal glances away from the road ahead, however, are of special interest since interesting events, which are critical to driver safety, occur during those times. In this paper, we present a distributed camera framework for head movement analysis, with emphasis on the ability to robustly and continuously operate even during large head movements. The proposed system tracks facial features and analyzes their geometric configuration to estimate the head pose using a 3-D model. We present two such solutions that additionally exploit the constraints that are present in a driving context and video data to improve tracking accuracy and computation time. Furthermore, we conduct a thorough comparative study with different camera configurations. For experimental evaluations, we collected a novel head pose data set from naturalistic on-road driving in urban streets and freeways, with particular emphasis on events inducing spatially large head movements (e.g., merge and lane change). Our analyses show promising results.",
"title": ""
},
{
"docid": "e945b0e23ad090cd76b920e073d26116",
"text": "Despite the success of proxy caching in the Web, proxy servers have not been used effectively for caching of Internet multimedia streams such as audio and video. Explosive growth in demand for web-based streaming applications justifies the need for caching popular streams at a proxy server close to the interested clients. Because of the need for congestion control in the Internet, multimedia streams should be quality adaptive. This implies that on a cache-hit, a proxy must replay a variable-quality cached stream whose quality is determined by the bandwidth of the first session. This paper addresses the implications of congestion control and quality adaptation on proxy caching mechanisms. We present a fine-grain replacement algorithm for layered-encoded multimedia streams at Internet proxy servers, and describe a pre-fetching scheme to smooth out the variations in quality of a cached stream during subsequent playbacks. This enables the proxy to perform quality adaptation more effectively and maximizes the delivered quality. We also extend the semantics of popularity and introduce the idea of weighted hit to capture both the level of interest and the usefulness of a layer for a cached stream. Finally, we present our replacement algorithm and show that its interaction with prefetching results in the state of the cache converging to the optimal state such that the quality of a cached stream is proportional to its popularity, and the variations in quality of a cached stream are inversely proportional to its popularity. This implies that after serving several requests for a stream, the proxy can effectively hide low bandwidth paths to the original server from interested clients.",
"title": ""
},
{
"docid": "d20444fe087877871e11be6cac335b94",
"text": "Seamless handover over multiple access points is highly desirable to mobile nodes, but ensuring security and efficiency of this process is challenging. This paper shows that prior handover authentication schemes incur high communication and computation costs, and are subject to a few security attacks. Further, a novel handover authentication protocol named PairHand is proposed. PairHand uses pairing-based cryptography to secure handover process and to achieve high efficiency. Also, an efficient batch signature verification scheme is incorporated into PairHand. Experiments using our implementation on laptop PCs show that PairHand is feasible in real applications.",
"title": ""
},
{
"docid": "23ceda789c34807a577ad683fdaaac38",
"text": "This paper describes a generalisation of the unscented transformation (UT) which allows sigma points to be scaled to an arbitrary dimension. The UT is a method for predicting means and covariances in nonlinear systems. A set of samples are deterministically chosen which match the mean and covariance of a (not necessarily Gaussian-distributed) probability distribution. These samples can be scaled by an arbitrary constant. The method guarantees that the mean and covariance second order accuracy in mean and covariance, giving the same performance as a second order truncated filter but without the need to calculate any Jacobians or Hessians. The impacts of scaling issues are illustrated by considering conversions from polar to Cartesian coordinates with large angular uncertainties.",
"title": ""
},
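The passage above concerns a scaled generalisation of the unscented transformation. The sketch below illustrates only the basic, unscaled transform on the polar-to-Cartesian example mentioned at the end of the passage; it is not the paper's algorithm, and the choice of kappa and the input uncertainties are assumptions made for illustration.

```python
# Minimal sketch of the basic unscented transformation: deterministic sigma points
# matching a given mean and covariance are propagated through a nonlinearity, and
# the predicted mean and covariance are recovered from weighted sample statistics.
import numpy as np

def unscented_transform(mean, cov, f, kappa=0.0):
    n = len(mean)
    # Sigma points: the mean plus/minus the columns of a matrix square root of (n+kappa)*P
    L = np.linalg.cholesky((n + kappa) * cov)
    sigma = np.vstack([mean, mean + L.T, mean - L.T])      # shape (2n+1, n)
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    # Propagate each sigma point through the nonlinearity f
    y = np.array([f(x) for x in sigma])
    y_mean = w @ y
    y_cov = (w[:, None] * (y - y_mean)).T @ (y - y_mean)
    return y_mean, y_cov

polar_to_cartesian = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
m = np.array([1.0, 0.0])                      # range = 1, bearing = 0 rad
P = np.diag([0.01, (15 * np.pi / 180) ** 2])  # assumed large angular uncertainty
print(unscented_transform(m, P, polar_to_cartesian, kappa=1.0))
```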
{
"docid": "964f4f8c14432153d6001d961a1b5294",
"text": "Although there are numerous search engines in the Web environment, no one could claim producing reliable results in all conditions. This problem is becoming more serious considering the exponential growth of the number of Web resources. In the response to these challenges, the meta-search engines are introduced to enhance the search process by devoting some outstanding search engines as their information resources. In recent years, some approaches are proposed to handle the result combination problem which is the fundamental problem in the meta-search environment. In this paper, a new merging/re-ranking method is introduced which uses the characteristics of the Web co-citation graph that is constructed from search engines and returned lists. The information extracted from the co-citation graph, is combined and enriched by the userspsila click-through data as their implicit feedback in an adaptive framework. Experimental results show a noticeable improvement against the basic method as well as some well-known meta-search engines.",
"title": ""
},
{
"docid": "bad5040a740421b3079c3fa7bf598d71",
"text": "Deep Convolutional Neural Networks (CNNs) are a special type of Neural Networks, which have shown state-of-the-art results on various competitive benchmarks. The powerful learning ability of deep CNN is largely achieved with the use of multiple non-linear feature extraction stages that can automatically learn hierarchical representation from the data. Availability of a large amount of data and improvements in the hardware processing units have accelerated the research in CNNs and recently very interesting deep CNN architectures are reported. The recent race in deep CNN architectures for achieving high performance on the challenging benchmarks has shown that the innovative architectural ideas, as well as parameter optimization, can improve the CNN performance on various vision-related tasks. In this regard, different ideas in the CNN design have been explored such as use of different activation and loss functions, parameter optimization, regularization, and restructuring of processing units. However, the major improvement in representational capacity is achieved by the restructuring of the processing units. Especially, the idea of using a block as a structural unit instead of a layer is gaining substantial appreciation. This survey thus focuses on the intrinsic taxonomy present in the recently reported CNN architectures and consequently, classifies the recent innovations in CNN architectures into seven different categories. These seven categories are based on spatial exploitation, depth, multipath, width, feature map exploitation, channel boosting and attention. Additionally, it covers the elementary understanding of the CNN components and sheds light on the current challenges and applications of CNNs.",
"title": ""
},
{
"docid": "68aa2b0b429bb2a3acf170fbbecfc6d8",
"text": "Performance prediction models at the source code level are crucial components in advanced optimizing compilers, programming environments, and tools for performance debugging. Compilers and programming environments use performance models to guide the selection of effective code improvement strategies. Tools for performance debugging may use performance prediction models to explain the performance behavior of a program to the user. Finding the best match between a performance prediction model and a specific source–level optimization task or performance explanation task is a challenging problem. The best performance prediction model for a given task is a model that satisfies the precision requirements while including as few performance factors as possible in order to minimize the cost of the performance predictions. In optimizing compilers, the lack of such a cost–effective performance model may make the application of an optimization prohibitively expensive. In the context of a programming environment, marginal performance factors should be avoided since they will obscure reasoning about the observed performance behavior. This paper discusses a new qualitative performance prediction framework at the program source level that automatically selects a minimal set of performance factors for a target system and performance precision requirement. In the context of this paper, a target system consists of a compiler, an operating system, and a machine architecture. The performance prediction framework identifies significant target system and application program parameters that have to be considered in order to achieve the requested precision. Such parameters may include application factors such as number and type of floating point operations, and machine characteristics such as L1 and L2 caches, TLB, and main memory. The reported performance factors can be used by a compiler writer to build or validate a quantitative performance model, and by a user to better understand the observed program performance. In addition, the failure of the framework to produce a model of the desired quality may be an indication that there exists a significant performance factor not considered within the performance framework. Such information is important to guiding a compiler writer or user in a more efficient search for crucial performance factors. Preliminary experimental results for a small computation kernel and a set of twelve target systems indicate the effectiveness of our framework. The target systems for the experiment consisted of four machine architectures (SuperSPARC I-II and UltraSPARC I-II running Solaris 2.5) and three compiler optimization levels (-none, -O3, -depend -fast). Our prototype framework determines different performance models (1) across different precision requirements for the same target ∗e-mail: chunghsu@cs.rutgers.edu, uli@cs.rutgers.edu; address: Department of Computer Science, Hill Center, Busch Campus, Rutgers University, Piscataway, NJ 08855",
"title": ""
},
{
"docid": "670e3f4fdb4a66de74ae740ae19aa260",
"text": "The adsorption and desorption of D2O on hydrophobic activated carbon fiber (ACF) occurs at a smaller pressure than the adsorption and desorption of H2O. The behavior of the critical desorption pressure difference between D2O and H2O in the pressure range of 1.25-1.80kPa is applied to separate low concentrated D2O from water using the hydrophobic ACF, because the desorption branches of D2O and H2O drop almost vertically. The deuterium concentration of all desorbed water in the above pressure range is lower than that of water without adsorption-treatment on ACF. The single adsorption-desorption procedure on ACF at 1.66kPa corresponding to the maximum difference of adsorption amount between D2O and H2O reduced the deuterium concentration of desorbed water to 130.6ppm from 143.0ppm. Thus, the adsorption-desorption procedure of water on ACF is a promising separation and concentration method of low concentrated D2O from water.",
"title": ""
},
{
"docid": "45c680911d97163839dda69d374399b7",
"text": "The process of identifying radio transmitters by examining their unique transient characteristics at the beginning of transmission is called RF fingerprinting. The security of wireless networks can be enhanced by challenging a user to prove its identity if the fingerprint of a network device is unidentified or deemed to be a threat. This paper addresses the problem of identifying an individual node in a wireless network by means of its RF fingerprint. A complete identification system is presented, including data acquisition, transient detection, RF fingerprint extraction, and classification subsystems. The classification performance of the proposed system has been evaluated from experimental data. It is demonstrated that the RF fingerprinting technique can be used as an additional tool to enhance the security of wireless networks.",
"title": ""
},
{
"docid": "03bb19dd7a027d3f84e95f87c3e5de8f",
"text": "The pick and roll is a powerful tool; as former coach Stan Van Gundy once said of his Magic team, \"[The pick and roll is] what we're going to be in when the game's on the line. [...] I don't care how good you are, you can't take away everything\" [1]. In today's perimeter oriented NBA, the pick and roll is more important than ever before. The player tracking data that is now being collected across all arenas in the NBA holds out the promise of deepening our understanding of offensive strategies. In this paper we approach part of that problem by introducing a pattern recognition framework for identifying onball screens. We use a machine learning classifier on top of a rule-based algorithm to recognize on-ball screens. Tested on 21 quarters from 14 NBA games from last season our algorithm achieved a sensitivity of 82% and positive predictive value of 80%",
"title": ""
},
{
"docid": "04b8ce1504efb5ecb4487184b4988f58",
"text": "Lifelong learning is the problem of learning multiple consecutive tasks in an online manner and is essential towards the development of intelligent machines that can adapt to their surroundings. In this work we focus on learning a lifelong approach to generative modeling whereby we continuously incorporate newly observed distributions into our model representation. We utilize two models, aptly named the student and the teacher, in order to aggregate information about all past distributions without the preservation of any of the past data or previous models. The teacher is utilized as a form of compressed memory in order to allow for the student model to learn over the past as well as present data. We demonstrate why a naive approach to lifelong generative modeling fails and introduce a regularizer with which we demonstrate learning across a long range of distributions.",
"title": ""
},
{
"docid": "333c8a22b502b771c9f5f0df67d6da1c",
"text": "Brain extraction from magnetic resonance imaging (MRI) is crucial for many neuroimaging workflows. Current methods demonstrate good results on non-enhanced T1-weighted images, but struggle when confronted with other modalities and pathologically altered tissue. In this paper we present a 3D convolutional deep learning architecture to address these shortcomings. In contrast to existing methods, we are not limited to non-enhanced T1w images. When trained appropriately, our approach handles an arbitrary number of modalities including contrast-enhanced scans. Its applicability to MRI data, comprising four channels: non-enhanced and contrast-enhanced T1w, T2w and FLAIR contrasts, is demonstrated on a challenging clinical data set containing brain tumors (N=53), where our approach significantly outperforms six commonly used tools with a mean Dice score of 95.19. Further, the proposed method at least matches state-of-the-art performance as demonstrated on three publicly available data sets: IBSR, LPBA40 and OASIS, totaling N=135 volumes. For the IBSR (96.32) and LPBA40 (96.96) data set the convolutional neuronal network (CNN) obtains the highest average Dice scores, albeit not being significantly different from the second best performing method. For the OASIS data the second best Dice (95.02) results are achieved, with no statistical difference in comparison to the best performing tool. For all data sets the highest average specificity measures are evaluated, whereas the sensitivity displays about average results. Adjusting the cut-off threshold for generating the binary masks from the CNN's probability output can be used to increase the sensitivity of the method. Of course, this comes at the cost of a decreased specificity and has to be decided application specific. Using an optimized GPU implementation predictions can be achieved in less than one minute. The proposed method may prove useful for large-scale studies and clinical trials.",
"title": ""
},
{
"docid": "efa4f154549c81a31421d32ad44267b9",
"text": "PURPOSE OF REVIEW\nDespite the American public following recommendations to decrease absolute dietary fat intake and specifically decrease saturated fat intake, we have seen a dramatic rise over the past 40 years in the rates of non-communicable diseases associated with obesity and overweight, namely cardiovascular disease. The development of the diet-heart hypothesis in the mid twentieth century led to faulty but long-held beliefs that dietary intake of saturated fat led to heart disease. Saturated fat can lead to increased LDL cholesterol levels, and elevated plasma cholesterol levels have been shown to be a risk factor for cardiovascular disease; however, the correlative nature of their association does not assign causation.\n\n\nRECENT FINDINGS\nAdvances in understanding the role of various lipoprotein particles and their atherogenic risk have been helpful for understanding how different dietary components may impact CVD risk. Numerous meta-analyses and systematic reviews of both the historical and current literature reveals that the diet-heart hypothesis was not, and still is not, supported by the evidence. There appears to be no consistent benefit to all-cause or CVD mortality from the reduction of dietary saturated fat. Further, saturated fat has been shown in some cases to have an inverse relationship with obesity-related type 2 diabetes. Rather than focus on a single nutrient, the overall diet quality and elimination of processed foods, including simple carbohydrates, would likely do more to improve CVD and overall health. It is in the best interest of the American public to clarify dietary guidelines to recognize that dietary saturated fat is not the villain we once thought it was.",
"title": ""
},
{
"docid": "6fca3aabf3812746a98bb7d5fb758a22",
"text": "The emergence and global spread of the 2009 pandemic H1N1 influenza virus reminds us that we are limited in the strategies available to control influenza infection. Vaccines are the best option for the prophylaxis and control of a pandemic; however, the lag time between virus identification and vaccine distribution exceeds 6 months and concerns regarding vaccine safety are a growing issue leading to vaccination refusal. In the short-term, antiviral therapy is vital to control the spread of influenza. However, we are currently limited to four licensed anti-influenza drugs: the neuraminidase inhibitors oseltamivir and zanamivir, and the M2 ion-channel inhibitors amantadine and rimantadine. The value of neuraminidase inhibitors was clearly established during the initial phases of the 2009 pandemic when vaccines were not available, i.e. stockpiles of antivirals are valuable. Unfortunately, as drug-resistant variants continue to emerge naturally and through selective pressure applied by use of antiviral drugs, the efficacy of these drugs declines. Because we cannot predict the strain of influenza virus that will cause the next epidemic or pandemic, it is important that we develop novel anti-influenza drugs with broad reactivity against all strains and subtypes, and consider moving to multiple drug therapy in the future. In this article we review the experimental data on investigational antiviral agents undergoing clinical trials (parenteral zanamivir and peramivir, long-acting neuraminidase inhibitors and the polymerase inhibitor favipiravir [T-705]) and experimental antiviral agents that target either the virus (the haemagglutinin inhibitor cyanovirin-N and thiazolides) or the host (fusion protein inhibitors [DAS181], cyclo-oxygenase-2 inhibitors and peroxisome proliferator-activated receptor agonists).",
"title": ""
},
{
"docid": "8669f1a511fab8d6a18b9905d6c6b630",
"text": "Consumer behavior study is a new, interdisciplinary nd emerging science, developed in the 1960s. Its main sources of information come from ec onomics, psychology, sociology, anthropology and artificial intelligence. If a cent ury ago, most people were living in small towns, with limited possibilities to leave their co mmunity, and few ways to satisfy their needs, now, due to the accelerated evolution of technology and the radical change of life style, consumers begin to have increasingly diverse needs. At the same time the instruments used to study their behavior have evolved, and today databa ses are included in consumer behavior research. Throughout time many models were develope d, first in order to analyze, and later in order to predict the consumer behavior. As a res ult, the concept of Big Data developed, and by applying it now, companies are trying to und erstand and predict the behavior of their consumers.",
"title": ""
},
{
"docid": "30abd57cf5c7f0a86f5d71eb2cc32af2",
"text": "We apply a recently proposed technique – Multi-task Multi-Kernel Learning (MTMKL) – to the problem of modeling students’ wellbeing. Because wellbeing is a complex internal state consisting of several related dimensions, Multi-task learning can be used to classify them simultaneously. Multiple Kernel Learning is used to efficiently combine data from multiple modalities. MTMKL combines these approaches using an optimization function similar to a support vector machine (SVM). We show that MTMKL successfully classifies five dimensions of wellbeing, and provides performance benefits above both SVM and MKL.",
"title": ""
},
{
"docid": "e500cd3df03ff2d01d27bc012e332b3a",
"text": "Received Nov 13,2012 Revised Jan 05, 2013 Accepted Jan 12,2013 In this paper, we have proposed a framework to count the moving person in the video automatically in a very dense crowd situation. Median filter is used to segment the foreground from the background and blob analysis is done to count the people in the current frame. Optimization of different parameters is done by using genetic algorithm. This framework is used to count the people in the video recorded in the mattaf area where different crowd densities can be observed. An overall people counting accuracy of more than 96% is obtained. Keyword:",
"title": ""
},
{
"docid": "88fa70ef8c6dfdef7d1c154438ff53c2",
"text": "There has been substantial progress in the field of text based sentiment analysis but little effort has been made to incorporate other modalities. Previous work in sentiment analysis has shown that using multimodal data yields to more accurate models of sentiment. Efforts have been made towards expressing sentiment as a spectrum of intensity rather than just positive or negative. Such models are useful not only for detection of positivity or negativity, but also giving out a score of how positive or negative a statement is. Based on the state of the art studies in sentiment analysis, prediction in terms of sentiment score is still far from accurate, even in large datasets [27]. Another challenge in sentiment analysis is dealing with small segments or micro opinions as they carry less context than large segments thus making analysis of the sentiment harder. This paper presents a Ph.D. thesis shaped towards comprehensive studies in multimodal micro-opinion sentiment intensity analysis.",
"title": ""
},
{
"docid": "9f3e9e7c493b3b62c7ec257a00f43c20",
"text": "The wind stroke is a common syndrome in clinical disease; the physicians of past generations accumulated much experience in long-term clinical practice and left abundant literature. Looking from this literature, the physicians of past generations had different cognitions of the wind stroke, especially the concept of wind stroke. The connotation of wind stroke differed at different stages, going through a gradually changing process from exogenous disease, true wind stroke, apoplectic wind stroke to cerebral apoplexy.",
"title": ""
}
] |
scidocsrr
|
a7fc18f0da384278cf915392224b193f
|
Automatic detection of fracture in femur bones using image processing
|
[
{
"docid": "69ccc6fa6c1d9dd0a7b5206b70d33359",
"text": "In medical applications, sensitivity in detecting medical problems and accuracy of detection are often in conflict. A single classifier usually cannot achieve both high sensitivity and accuracy at the same time. Methods of combining classifiers have been proposed in the literature. This paper presents a study of probabilistic combination methods applied to the detection of bone fractures in X-ray images. Test results show that the effectiveness of a method in improving both accuracy and sensitivity depends on the nature of the method as well as the proportion of positive samples.",
"title": ""
}
] |
[
{
"docid": "57f388a028f3bcb7a4a309d445ee695c",
"text": "Exploring the whole sequence of steps a student takes to produce work, and the patterns that emerge from thousands of such sequences is fertile ground for a richer understanding of learning. In this paper we autonomously generate hints for the Code.org `Hour of Code,' (which is to the best of our knowledge the largest online course to date) using historical student data. We first develop a family of algorithms that can predict the way an expert teacher would encourage a student to make forward progress. Such predictions can form the basis for effective hint generation systems. The algorithms are more accurate than current state-of-the-art methods at recreating expert suggestions, are easy to implement and scale well. We then show that the same framework which motivated the hint generating algorithms suggests a sequence-based statistic that can be measured for each learner. We discover that this statistic is highly predictive of a student's future success.",
"title": ""
},
{
"docid": "5c86ff18054344fe8c8b1911bbb56997",
"text": "Nearest neighbor search methods based on hashing have attracted considerable attention for effective and efficient large-scale similarity search in computer vision and information retrieval community. In this paper, we study the problems of learning hash functions in the context of multimodal data for cross-view similarity search. We put forward a novel hashing method, which is referred to Collective Matrix Factorization Hashing (CMFH). CMFH learns unified hash codes by collective matrix factorization with latent factor model from different modalities of one instance, which can not only supports cross-view search but also increases the search accuracy by merging multiple view information sources. We also prove that CMFH, a similarity-preserving hashing learning method, has upper and lower boundaries. Extensive experiments verify that CMFH significantly outperforms several state-of-the-art methods on three different datasets.",
"title": ""
},
{
"docid": "042fcc75e4541d27b97e8c2fe02a2ddf",
"text": "Folk medicine suggests that pomegranate (peels, seeds and leaves) has anti-inflammatory properties; however, the precise mechanisms by which this plant affects the inflammatory process remain unclear. Herein, we analyzed the anti-inflammatory properties of a hydroalcoholic extract prepared from pomegranate leaves using a rat model of lipopolysaccharide-induced acute peritonitis. Male Wistar rats were treated with either the hydroalcoholic extract, sodium diclofenac, or saline, and 1 h later received an intraperitoneal injection of lipopolysaccharides. Saline-injected animals (i. p.) were used as controls. Animals were culled 4 h after peritonitis induction, and peritoneal lavage and peripheral blood samples were collected. Serum and peritoneal lavage levels of TNF-α as well as TNF-α mRNA expression in peritoneal lavage leukocytes were quantified. Total and differential leukocyte populations were analyzed in peritoneal lavage samples. Lipopolysaccharide-induced increases of both TNF-α mRNA and protein levels were diminished by treatment with either pomegranate leaf hydroalcoholic extract (57 % and 48 % mean reduction, respectively) or sodium diclofenac (41 % and 33 % reduction, respectively). Additionally, the numbers of peritoneal leukocytes, especially neutrophils, were markedly reduced in hydroalcoholic extract-treated rats with acute peritonitis. These results demonstrate that pomegranate leaf extract may be used as an anti-inflammatory drug which suppresses the levels of TNF-α in acute inflammation.",
"title": ""
},
{
"docid": "7380419cc9c5eac99e8d46e73df78285",
"text": "This paper discusses the classification of books purely based on cover image and title, without prior knowledge or context of author and origin. Several methods were implemented to assess the ability to distinguish books based on only these two characteristics. First we used a color-based distribution approach. Then we implemented transfer learning with convolutional neural networks on the cover image along with natural language processing on the title text. We found that image and text modalities yielded similar accuracy which indicate that we have reached a certain threshold in distinguishing between the genres that we have defined. This was confirmed by the accuracy being quite close to the human oracle accuracy.",
"title": ""
},
{
"docid": "206dc1a4a27b603360888d414e0b5cf6",
"text": "Standard deep reinforcement learning methods such as Deep Q-Networks (DQN) for multiple tasks (domains) face scalability problems due to large search spaces. This paper proposes a three-stage method for multi-domain dialogue policy learning-termed NDQN, and applies it to an information-seeking spoken dialogue system in the domains of restaurants and hotels. In this method, the first stage does multi-policy learning via a network of DQN agents; the second makes use of compact state representations by compressing raw inputs; and the third stage applies a pre-training phase for bootstraping the behaviour of agents in the network. Experimental results comparing DQN (baseline) versus NDQN (proposed) using simulations report that the proposed method exhibits better scalability and is promising for optimising the behaviour of multi-domain dialogue systems. An additional evaluation reports that the NDQN agents outperformed a K-Nearest Neighbour baseline in task success and dialogue length, yielding more efficient and successful dialogues.",
"title": ""
},
{
"docid": "91ad02ab816f7897f86916e9c9106ef4",
"text": "Dropout is one of the key techniques to prevent the learning from overfitting. It is explained that dropout works as a kind of modified L2 regularization. Here, we shed light on the dropout from Bayesian standpoint. Bayesian interpretation enables us to optimize the dropout rate, which is beneficial for learning of weight parameters and prediction after learning. The experiment result also encourages the optimization of the dropout.",
"title": ""
},
{
"docid": "4a7ed4868ff279b4d83f969076fb91e9",
"text": "Information theoretic measures form a fundamental class of measures for comparing clusterings, and have recently received increasing interest. Neverthel ss, a number of questions concerning their properties and inter-relationships remain unresolv ed. In this paper, we perform an organized study of information theoretic measures for clustering com parison, including several existing popular measures in the literature, as well as some newly propos ed nes. We discuss and prove their important properties, such as the metric property and the no rmalization property. We then highlight to the clustering community the importance of correct ing information theoretic measures for chance, especially when the data size is small compared to th e number of clusters present therein. Of the available information theoretic based measures, we a dvocate the normalized information distance (NID) as a general measure of choice, for it possess e concurrently several important properties, such as being both a metric and a normalized meas ure, admitting an exact analytical adjusted-for-chance form, and using the nominal [0,1] range better than other normalized variants.",
"title": ""
},
{
"docid": "b8819099c285b531de22ddc03971f130",
"text": "About 14% of the global burden of disease has been attributed to neuropsychiatric disorders, mostly due to the chronically disabling nature of depression and other common mental disorders, alcohol-use and substance-use disorders, and psychoses. Such estimates have drawn attention to the importance of mental disorders for public health. However, because they stress the separate contributions of mental and physical disorders to disability and mortality, they might have entrenched the alienation of mental health from mainstream efforts to improve health and reduce poverty. The burden of mental disorders is likely to have been underestimated because of inadequate appreciation of the connectedness between mental illness and other health conditions. Because these interactions are protean, there can be no health without mental health. Mental disorders increase risk for communicable and non-communicable diseases, and contribute to unintentional and intentional injury. Conversely, many health conditions increase the risk for mental disorder, and comorbidity complicates help-seeking, diagnosis, and treatment, and influences prognosis. Health services are not provided equitably to people with mental disorders, and the quality of care for both mental and physical health conditions for these people could be improved. We need to develop and evaluate psychosocial interventions that can be integrated into management of communicable and non-communicable diseases. Health-care systems should be strengthened to improve delivery of mental health care, by focusing on existing programmes and activities, such as those which address the prevention and treatment of HIV, tuberculosis, and malaria; gender-based violence; antenatal care; integrated management of childhood illnesses and child nutrition; and innovative management of chronic disease. An explicit mental health budget might need to be allocated for such activities. Mental health affects progress towards the achievement of several Millennium Development Goals, such as promotion of gender equality and empowerment of women, reduction of child mortality, improvement of maternal health, and reversal of the spread of HIV/AIDS. Mental health awareness needs to be integrated into all aspects of health and social policy, health-system planning, and delivery of primary and secondary general health care.",
"title": ""
},
{
"docid": "a33229fd0a9cd2daa423b2d8b102862c",
"text": "OBJECTIVES\nTo identify prognostic factors in patients with metastatic pancreatic adenocarcinoma.\n\n\nMETHODS\nThe relationship between patient characteristics and outcome was examined by multivariate regression analyses of data from 409 consecutive patients with metastatic pancreatic adenocarcinoma who had been treated with a gemcitabine-containing regimen, and we stratified the patients into 3 risk groups according to the number of prognostic factors they had for a poor outcome. A validation data set obtained from 145 patients who had been treated with agents other than gemcitabine was analyzed. The prognostic index was applied the each of the patients.\n\n\nRESULTS\nThe multivariate regression analyses revealed that the presence of pain, peritoneal dissemination, liver metastasis, and an elevated serum C-reactive protein value significantly contributed to a shorter survival time. The patients were stratified into 3 groups according to their number of risk factors, and their outcomes of the 3 groups were significantly different. When the prognostic index was applied to the validation data set, the respective outcomes of the 3 groups were found to be significantly differed from each other.\n\n\nCONCLUSIONS\nPain, peritoneal dissemination, liver metastasis, and an elevated serum C-reactive protein value are important prognostic factors for patients with metastatic pancreatic adenocarcinoma.",
"title": ""
},
{
"docid": "85f5833628a4b50084fa50cbe45ebe4d",
"text": "We introduce a functional gradient descent trajectory optimization algorithm for robot motion planning in Reproducing Kernel Hilbert Spaces (RKHSs). Functional gradient algorithms are a popular choice for motion planning in complex many-degree-of-freedom robots, since they (in theory) work by directly optimizing within a space of continuous trajectories to avoid obstacles while maintaining geometric properties such as smoothness. However, in practice, implementations such as CHOMP and TrajOpt typically commit to a fixed, finite parametrization of trajectories, often as a sequence of waypoints. Such a parameterization can lose much of the benefit of reasoning in a continuous trajectory space: e.g., it can require taking an inconveniently small step size and large number of iterations to maintain smoothness. Our work generalizes functional gradient trajectory optimization by formulating it as minimization of a cost functional in an RKHS. This generalization lets us represent trajectories as linear combinations of kernel functions. As a result, we are able to take larger steps and achieve a locally optimal trajectory in just a few iterations. Depending on the selection of kernel, we can directly optimize in spaces of trajectories that are inherently smooth in velocity, jerk, curvature, etc., and that have a low-dimensional, adaptively chosen parameterization. Our experiments illustrate the effectiveness of the planner for different kernels, including Gaussian RBFs with independent and coupled interactions among robot joints, Laplacian RBFs, and B-splines, as compared to the standard discretized waypoint representation.",
"title": ""
},
{
"docid": "9cedc3f1a04fa51fb8ce1cf0cf01fbc3",
"text": "OBJECTIVES:The objective of this study was to provide updated explicit and relevant consensus statements for clinicians to refer to when managing hospitalized adult patients with acute severe ulcerative colitis (UC).METHODS:The Canadian Association of Gastroenterology consensus group of 23 voting participants developed a series of recommendation statements that addressed pertinent clinical questions. An iterative voting and feedback process was used to do this in conjunction with systematic literature reviews. These statements were brought to a formal consensus meeting held in Toronto, Ontario (March 2010), when each statement was discussed, reformulated, voted upon, and subsequently revised until group consensus (at least 80% agreement) was obtained. The modified GRADE (Grading of Recommendations Assessment, Development, and Evaluation) criteria were used to rate the strength of recommendations and the quality of evidence.RESULTS:As a result of the iterative process, consensus was reached on 21 statements addressing four themes (General considerations and nutritional issues, Steroid use and predictors of steroid failure, Cyclosporine and infliximab, and Surgical issues).CONCLUSIONS:Key recommendations for the treatment of hospitalized patients with severe UC include early escalation to second-line medical therapy with either infliximab or cyclosporine in individuals in whom parenteral steroids have failed after 72 h. These agents should be used in experienced centers where appropriate support is available. Sequential therapy with cyclosporine and infliximab is not recommended. Surgery is an option when first-line steroid therapy fails, and is indicated when second-line medical therapy fails and/or when complications arise during the hospitalization.",
"title": ""
},
{
"docid": "f64896f0eaf5becb7d9099c327bd6a59",
"text": "Device-free gesture tracking is an enabling HCI mechanism for small wearable devices because fingers are too big to control the GUI elements on such small screens, and it is also an important HCI mechanism for medium-to-large size mobile devices because it allows users to provide input without blocking screen view. In this paper, we propose LLAP, a device-free gesture tracking scheme that can be deployed on existing mobile devices as software, without any hardware modification. We use speakers and microphones that already exist on most mobile devices to perform device-free tracking of a hand/finger. The key idea is to use acoustic phase to get fine-grained movement direction and movement distance measurements. LLAP first extracts the sound signal reflected by the moving hand/finger after removing the background sound signals that are relatively consistent over time. LLAP then measures the phase changes of the sound signals caused by hand/finger movements and then converts the phase changes into the distance of the movement. We implemented and evaluated LLAP using commercial-off-the-shelf mobile phones. For 1-D hand movement and 2-D drawing in the air, LLAP has a tracking accuracy of 3.5 mm and 4.6 mm, respectively. Using gesture traces tracked by LLAP, we can recognize the characters and short words drawn in the air with an accuracy of 92.3% and 91.2%, respectively.",
"title": ""
},
{
"docid": "673e1ec63a0e84cf3fbf450928d89905",
"text": "This study proposed an IoT (Internet of Things) system for the monitoring and control of the aquaculture platform. The proposed system is network surveillance combined with mobile devices and a remote platform to collect real-time farm environmental information. The real-time data is captured and displayed via ZigBee wireless transmission signal transmitter to remote computer terminals. This study permits real-time observation and control of aquaculture platform with dissolved oxygen sensors, temperature sensing elements using A/D and microcontrollers signal conversion. The proposed system will use municipal electricity coupled with a battery power source to provide power with battery intervention if municipal power is interrupted. This study is to make the best fusion value of multi-odometer measurement data for optimization via the maximum likelihood estimation (MLE).Finally, this paper have good efficient and precise computing in the experimental results.",
"title": ""
},
{
"docid": "2e2a21ca1be2da2d30b1b2a92cd49628",
"text": "A new form of cloud computing, serverless computing, is drawing attention as a new way to design micro-services architectures. In a serverless computing environment, services are developed as service functional units. The function development environment of all serverless computing framework at present is CPU based. In this paper, we propose a GPU-supported serverless computing framework that can deploy services faster than existing serverless computing framework using CPU. Our core approach is to integrate the open source serverless computing framework with NVIDIA-Docker and deploy services based on the GPU support container. We have developed an API that connects the open source framework to the NVIDIA-Docker and commands that enable GPU programming. In our experiments, we measured the performance of the framework in various environments. As a result, developers who want to develop services through the framework can deploy high-performance micro services and developers who want to run deep learning programs without a GPU environment can run code on remote GPUs with little performance degradation.",
"title": ""
},
{
"docid": "356f625738d759337007b386940367a4",
"text": "Guiding a student through a sequence of lessons and helping them retain knowledge is one of the central challenges in education. Online learning platforms like Khan Academy and Duolingo tackle this problem in part by using interaction data to estimate student proficiency and recommend content. While the literature proposes a variety of algorithms for modeling student learning, there is relatively little work on principled methods for sequentially choosing items for the student to review in order to maximize learning. We study this decision problem as an instance of reinforcement learning, and draw on recent advances in training deep neural networks to learn flexible and scalable teaching policies that select the next item to review. Our primary contribution is an analysis of a model-free review scheduling algorithm for spaced repetition systems that does not explicitly model the student, and instead learns a policy that directly operates on raw observations of the study history. As a preliminary study, we train and evaluate this method using a student simulator based on cognitive models of human memory. Results show that modelfree scheduling is competitive against widely-used heuristics like SuperMemo and the Leitner system on various learning objectives and student models.",
"title": ""
},
{
"docid": "79eab4c017b0f1fb382617f72bde19e7",
"text": "To perceive the external environment our brain uses multiple sources of sensory information derived from several different modalities, including vision, touch and audition. All these different sources of information have to be efficiently merged to form a coherent and robust percept. Here we highlight some of the mechanisms that underlie this merging of the senses in the brain. We show that, depending on the type of information, different combination and integration strategies are used and that prior knowledge is often required for interpreting the sensory signals.",
"title": ""
},
{
"docid": "56fb6fe1f6999b5d7a9dab19e8b877ef",
"text": "Low-cost consumer depth cameras and deep learning have enabled reasonable 3D hand pose estimation from single depth images. In this paper, we present an approach that estimates 3D hand pose from regular RGB images. This task has far more ambiguities due to the missing depth information. To this end, we propose a deep network that learns a network-implicit 3D articulation prior. Together with detected keypoints in the images, this network yields good estimates of the 3D pose. We introduce a large scale 3D hand pose dataset based on synthetic hand models for training the involved networks. Experiments on a variety of test sets, including one on sign language recognition, demonstrate the feasibility of 3D hand pose estimation on single color images.",
"title": ""
},
{
"docid": "20830c435c95317fbd189341ff5cdebd",
"text": "Relational databases store a significant amount of the worlds data. However, accessing this data currently requires users to understand a query language such as SQL. We propose Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries. Our model uses rewards from inthe-loop query execution over the database to learn a policy to generate the query, which contains unordered parts that are less suitable for optimization via cross entropy loss. Moreover, Seq2SQL leverages the structure of SQL to prune the space of generated queries and significantly simplify the generation problem. In addition to the model, we release WikiSQL, a dataset of 80654 hand-annotated examples of questions and SQL queries distributed across 24241 tables from Wikipedia that is an order of magnitude larger than comparable datasets. By applying policybased reinforcement learning with a query execution environment to WikiSQL, Seq2SQL outperforms a state-of-the-art semantic parser, improving execution accuracy from 35.9% to 59.4% and logical form accuracy from 23.4% to 48.3%.",
"title": ""
},
{
"docid": "c2e53358f9d78071fc5204624cf9d6ad",
"text": "This paper explores how the adoption of mobile and social computing technologies has impacted upon the way in which we coordinate social group-activities. We present a diary study of 36 individuals that provides an overview of how group coordination is currently performed as well as the challenges people face. Our findings highlight that people primarily use open-channel communication tools (e.g., text messaging, phone calls, email) to coordinate because the alternatives are seen as either disrupting or curbing to the natural conversational processes. Yet the use of open-channel tools often results in conversational overload and a significant disparity of work between coordinating individuals. This in turn often leads to a sense of frustration and confusion about coordination details. We discuss how the findings argue for a significant shift in our thinking about the design of coordination support systems.",
"title": ""
},
{
"docid": "a208f2a2720313479773c00a74b1cbc6",
"text": "I present a web service for querying an embedding of entities in the Wikidata knowledge graph. The embedding is trained on the Wikidata dump using Gensim’s Word2Vec implementation and a simple graph walk. A REST API is implemented. Together with the Wikidata API the web service exposes a multilingual resource for over 600’000 Wikidata items and properties.",
"title": ""
}
] |
scidocsrr
|
a3000a1037f4c47a0ede79d17eb0bdb4
|
Lay Theories About White Racists : What Constitutes Racism ( and What Doesn ’ t )
|
[
{
"docid": "e464cde1434026c17b06716c6a416b7a",
"text": "Three experiments supported the hypothesis that people are more willing to express attitudes that could be viewed as prejudiced when their past behavior has established their credentials as nonprejudiced persons. In Study 1, participants given the opportunity to disagree with blatantly sexist statements were later more willing to favor a man for a stereotypically male job. In Study 2, participants who first had the opportunity to select a member of a stereotyped group (a woman or an African American) for a category-neutral job were more likely to reject a member of that group for a job stereotypically suited for majority members. In Study 3, participants who had established credentials as nonprejudiced persons revealed a greater willingness to express a politically incorrect opinion even when the audience was unaware of their credentials. The general conditions under which people feel licensed to act on illicit motives are discussed.",
"title": ""
}
] |
[
{
"docid": "ec90e30c0ae657f25600378721b82427",
"text": "We use deep max-pooling convolutional neural networks to detect mitosis in breast histology images. The networks are trained to classify each pixel in the images, using as context a patch centered on the pixel. Simple postprocessing is then applied to the network output. Our approach won the ICPR 2012 mitosis detection competition, outperforming other contestants by a significant margin.",
"title": ""
},
{
"docid": "310036a45a95679a612cc9a60e44e2e0",
"text": "A broadband single layer, dual circularly polarized (CP) reflectarrays with linearly polarized feed is introduced in this paper. To reduce the electrical interference between the two orthogonal polarizations of the CP element, a novel subwavelength multiresonance element with a Jerusalem cross and an open loop is proposed, which presents a broader bandwidth and phase range excessing 360° simultaneously. By tuning the x- and y-axis dimensions of the proposed element, an optimization technique is used to minimize the phase errors on both orthogonal components. Then, a single-layer offset-fed 20 × 20-element dual-CP reflectarray has been designed and fabricated. The measured results show that the 1-dB gain and 3-dB axial ratio (AR) bandwidths of the dual-CP reflectarray can reach 12.5% and 50%, respectively, which shows a significant improvement in gain and AR bandwidths as compared to reflectarrays with conventional λ/2 cross-dipole elements.",
"title": ""
},
{
"docid": "d281c9d3862c4e0988247f7fe1e8a702",
"text": "The vaginal microbial community is typically characterized by abundant lactobacilli. Lactobacillus iners, a fairly recently detected species, is frequently present in the vaginal niche. However, the role of this species in vaginal health is unclear, since it can be detected in normal conditions as well as during vaginal dysbiosis, such as bacterial vaginosis, a condition characterized by an abnormal increase in bacterial diversity and lack of typical lactobacilli. Compared to other Lactobacillus species, L. iners has more complex nutritional requirements and a Gram-variable morphology. L. iners has an unusually small genome (ca. 1 Mbp), indicative of a symbiotic or parasitic lifestyle, in contrast to other lactobacilli that show niche flexibility and genomes of up to 3-4 Mbp. The presence of specific L. iners genes, such as those encoding iron-sulfur proteins and unique σ-factors, reflects a high degree of niche specification. The genome of L. iners strains also encodes inerolysin, a pore-forming toxin related to vaginolysin of Gardnerella vaginalis. Possibly, this organism may have clonal variants that in some cases promote a healthy vagina, and in other cases are associated with dysbiosis and disease. Future research should examine this friend or foe relationship with the host.",
"title": ""
},
{
"docid": "6a6691d92503f98331ad7eed61a9c357",
"text": "This paper presents a new 3D point cloud classification benchmark data set with over four billion manually labelled points, meant as input for data-hungry (deep) learning methods. We also discuss first submissions to the benchmark that use deep convolutional neural networks (CNNs) as a work horse, which already show remarkable performance improvements over state-of-the-art. CNNs have become the de-facto standard for many tasks in computer vision and machine learning like semantic segmentation or object detection in images, but have no yet led to a true breakthrough for 3D point cloud labelling tasks due to lack of training data. With the massive data set presented in this paper, we aim at closing this data gap to help unleash the full potential of deep learning methods for 3D labelling tasks. Our semantic3D.net data set consists of dense point clouds acquired with static terrestrial laser scanners. It contains 8 semantic classes and covers a wide range of urban outdoor scenes: churches, streets, railroad tracks, squares, villages, soccer fields and castles. We describe our labelling interface and show that our data set provides more dense and complete point clouds with much higher overall number of labelled points compared to those already available to the research community. We further provide baseline method descriptions and comparison between methods submitted to our online system. We hope semantic3D.net will pave the way for deep learning methods in 3D point cloud labelling to learn richer, more general 3D representations, and first submissions after only a few months indicate that this might indeed be the case.",
"title": ""
},
{
"docid": "684b9d64f4476a6b9dd3df1bd18bcb1d",
"text": "We present the cases of three children with patent ductus arteriosus (PDA), pulmonary arterial hypertension (PAH), and desaturation. One of them had desaturation associated with atrial septal defect (ASD). His ASD, PAH, and desaturation improved after successful device closure of the PDA. The other two had desaturation associated with Down syndrome. One had desaturation only at room air oxygen (21% oxygen) but well saturated with 100% oxygen, subsequently underwent successful device closure of the PDA. The other had experienced desaturation at a younger age but spontaneously recovered when he was older, following attempted device closure of the PDA, with late embolization of the device.",
"title": ""
},
{
"docid": "8b57c1f4c865c0a414b2e919d19959ce",
"text": "A microstrip HPF with sharp attenuation by using cross-coupling is proposed in this paper. The HPF consists of parallel plate- and gap type- capacitors and inductor lines. The one block of the HPF has two sections of a constant K filter in the bridge T configuration. Thus the one block HPF is first coarsely designed and the performance is optimized by circuit simulator. With the gap capacitor adjusted the proposed HPF illustrates the sharp attenuation characteristics near the cut-off frequency made by cross-coupling between the inductor lines. In order to improve the stopband performance, the cascaded two block HPF is examined. Its measured results show the good agreement with the simulated ones giving the sharper attenuation slope.",
"title": ""
},
{
"docid": "98e3279056e9bc15ce4b32c6dc027af9",
"text": "Publication Information Bazrafkan, Shabab , Javidnia, Hossein , Lemley, Joseph , & Corcoran, Peter (2018). Semiparallel deep neural network hybrid architecture: first application on depth from monocular camera. Journal of Electronic Imaging, 27(4), 19. doi: 10.1117/1.JEI.27.4.043041 Publisher Society of Photo-optical Instrumentation Engineers (SPIE) Link to publisher's version https://dx.doi.org/10.1117/1.JEI.27.4.043041",
"title": ""
},
{
"docid": "64139426292bc1744904a0758b6caed1",
"text": "The quantity and complexity of available information is rapidly increasing. This potential information overload challenges the standard information retrieval models, as users find it increasingly difficult to find relevant information. We therefore propose a method that can utilize the potentially valuable knowledge contained in concept models such as ontologies, and thereby assist users in querying, using the terminology of the domain. The primary focus of this dissertation is similarity measures for use in ontology-based information retrieval. We aim at incorporating the information contained in ontologies by choosing a representation formalism where queries and objects in the information base are described using a lattice-algebraic concept language containing expressions that can be directly mapped into the ontology. Similarity between the description of the query and descriptions of the objects is calculated based on a nearness principle derived from the structure and relations of the ontology. This measure is then used to perform ontology-based query expansion. By doing so, we can replace semantic matching from direct reasoning over the ontology with numerical similarity calculation by means of a general aggregation principle The choice of the proposed similarity measure is guided by a set of properties aimed at ensuring the measures accordance with a set of distinctive structural qualities derived from the ontology. We furthermore empirically evaluate the proposed similarity measure by comparing the similarity ratings for pairs of concepts produced by the proposed measure, with the mean similarity ratings produced by humans for the same pairs.",
"title": ""
},
{
"docid": "f4d6ff0005ecb467fc8fd3a4a9914ea7",
"text": "In this paper, the working principle of reflective memory network is introduced, reflective memory network is designed and realized, and real-time, delay determinacy and reliability of reflective memory network are tested under QNX real-time operating system. The performance tests indicate that the reflective memory network meets the demands of the real-time and dependability and improves the stability of the power-supply control system greatly.",
"title": ""
},
{
"docid": "394d30f3bd98cc0a72d940f93f0e32de",
"text": "Due to the existence of multiple stakeholders with conflicting goals and policies, alterations to the existing Internet architecture are now limited to simple incremental updates; deployment of any new, radically different technology is next to impossible. To fend off this ossification, network virtualization has been propounded as a diversifying attribute of the future inter-networking paradigm. By introducing a plurality of heterogeneous network architectures cohabiting on a shared physical substrate, network virtualization promotes innovations and diversified applications. In this paper, we survey the existing technologies and a wide array of past and state-of-the-art projects on network virtualization followed by a discussion of major challenges in this area. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "746b9e9e1fdacc76d3acb4f78d824901",
"text": "This paper proposes a new method for the detection of glaucoma using fundus image which mainly affects the optic disc by increasing the cup size is proposed. The ratio of the optic cup to disc (CDR) in retinal fundus images is one of the primary physiological parameter for the diagnosis of glaucoma. The Kmeans clustering technique is recursively applied to extract the optic disc and optic cup region and an elliptical fitting technique is applied to find the CDR values. The blood vessels in the optic disc region are detected by using local entropy thresholding approach. The ratio of area of blood vessels in the inferiorsuperior side to area of blood vessels in the nasal-temporal side (ISNT) is combined with the CDR for the classification of fundus image as normal or glaucoma by using K-Nearest neighbor , Support Vector Machine and Bayes classifier. A batch of 36 retinal images obtained from the Aravind Eye Hospital, Madurai, Tamilnadu, India is used to assess the performance of the proposed system and a classification rate of 95% is achieved.",
"title": ""
},
{
"docid": "836815216224b278df229927d825e411",
"text": "Logistics demand forecasting is important for investment decision-making of infrastructure and strategy programming of the logistics industry. In this paper, a hybrid method which combines the Grey Model, artificial neural networks and other techniques in both learning and analyzing phases is proposed to improve the precision and reliability of forecasting. After establishing a learning model GNNM(1,8) for road logistics demand forecasting, we chose road freight volume as target value and other economic indicators, i.e. GDP, production value of primary industry, total industrial output value, outcomes of tertiary industry, retail sale of social consumer goods, disposable personal income, and total foreign trade value as the seven key influencing factors for logistics demand. Actual data sequences of the province of Zhejiang from years 1986 to 2008 were collected as training and test-proof samples. By comparing the forecasting results, it turns out that GNNM(1,8) is an appropriate forecasting method to yield higher accuracy and lower mean absolute percentage errors than other individual models for short-term logistics demand forecasting.",
"title": ""
},
{
"docid": "5a06eed96bd877138e1f484b2c771c38",
"text": "This chapter presents an initial “4+1” theory of value-based software engineering (VBSE). The engine in the center is the stakeholder win-win Theory W, which addresses the questions of “which values are important?” and “how is success assured?” for a given software engineering enterprise. The four additional theories that it draws upon are utility theory (how important are the values?), decision theory (how do stakeholders’ values determine decisions?), dependency theory (how do dependencies affect value realization?), and control theory (how to adapt to change and control value realization?). After discussing the motivation and context for developing a VBSE theory and the criteria for a good theory, the chapter discusses how the theories work together into a process for defining, developing, and evolving software-intensive systems. It also illustrates the application of the theory to a supply chain system example, discusses how well the theory meets the criteria for a good theory, and identifies an agenda for further research.",
"title": ""
},
{
"docid": "1cceffd9ef0281f89fb6b7efd5d03371",
"text": "We report compact and wideband 90° hybrid with a one-way tapered 4×4 MMI waveguide. The fabricated device with a device length of 198 µm exhibited a phase deviation of <±5.4° over a 70-nm-wide spectral range.",
"title": ""
},
{
"docid": "55eb8b24baa00c38534ef0020c682fff",
"text": "NoSQL databases are designed to manage large volumes of data. Although they do not require a default schema associated with the data, they are categorized by data models. Because of this, data organization in NoSQL databases needs significant design decisions because they affect quality requirements such as scalability, consistency and performance. In traditional database design, on the logical modeling phase, a conceptual schema is transformed into a schema with lower abstraction and suitable to the target database data model. In this context, the contribution of this paper is an approach for logical design of NoSQL document databases. Our approach consists in a process that converts a conceptual modeling into efficient logical representations for a NoSQL document database. Workload information is considered to determine an optimized logical schema, providing a better access performance for the application. We evaluate our approach through a case study in the e-commerce domain and demonstrate that the NoSQL logical structure generated by our approach reduces the amount of items accessed by the application queries.",
"title": ""
},
{
"docid": "7c99299463d7f2a703f7bd9fbec3df74",
"text": "Group emotional contagion, the transfer of moods among people in a group, and its influence on work group dynamics was examined in a laboratory study of managerial decision making using multiple, convergent measures of mood, individual attitudes, behavior, and group-level dynamics. Using a 2 times 2 experimental design, with a trained confederate enacting mood conditions, the predicted effect of emotional contagion was found among group members, using both outside coders' ratings of participants' mood and participants' selfreported mood. No hypothesized differences in contagion effects due to the degree of pleasantness of the mood expressed and the energy level with which it was conveyed were found. There was a significant influence of emotional contagion on individual-level attitudes and group processes. As predicted, the positive emotional contagion group members experienced improved cooperation, decreased conflict, and increased perceived task performance. Theoretical implications and practical ramifications of emotional contagion in groups and organizations are discussed. Disciplines Human Resources Management | Organizational Behavior and Theory This journal article is available at ScholarlyCommons: http://repository.upenn.edu/mgmt_papers/72 THE RIPPLE EFFECT: EMOTIONAL CONTAGION AND ITS INFLUENCE ON GROUP BEHAVIOR SIGAL G. BARSADE School of Management Yale University Box 208200 New Haven, CT 06520-8200 Telephone: (203) 432-6159 Fax: (203) 432-9994 E-mail: sigal.barsade@yale.edu August 2001 Revise and Resubmit, ASQ; Comments Welcome i I would like to thank my mentor Barry Staw, Charles O’Reilly, JB, Ken Craik, Batia Wiesenfeld, Jennifer Chatman, J. Turners, John Nezlek, Keith Murnigan, Linda Johanson, and three anonymous ASQ reviewers who have helped lead to positive emotional and cognitive contagion.",
"title": ""
},
{
"docid": "cf8fd0b294f7d8b75df9f54b8e89af29",
"text": "This paper reviews 138 empirical quantitative population-based studies of self-reported racism and health. These studies show an association between self-reported racism and ill health for oppressed racial groups after adjustment for a range of confounders. The strongest and most consistent findings are for negative mental health outcomes and health-related behaviours, with weaker associations existing for positive mental health outcomes, self-assessed health status, and physical health outcomes. Most studies in this emerging field have been published in the past 5 years and have been limited by a dearth of cohort studies, a lack of psychometrically validated exposure instruments, poor conceptualization and definition of racism, conflation of racism with stress, and debate about the aetiologically relevant period for self-reported racism. Future research should examine the psychometric validity of racism instruments and include these instruments, along with objectively measured health outcomes, in existing large-scale survey vehicles as well as longitudinal studies and studies involving children. There is also a need to gain a better understanding of the perception, attribution, and reporting of racism, to investigate the pathways via which self-reported racism affects health, the interplay between mental and physical health outcomes, and exposure to intra-racial, internalized, and systemic racism. Ensuring the quality of studies in this field will allow future research to reveal the complex role that racism plays as a determinant of population health.",
"title": ""
}
] |
scidocsrr
|
b3961bba6abb2aed2a4aa06f6f878de5
|
Using LLL-Reduction for Solving RSA and Factorization Problems
|
[
{
"docid": "85826e44f9b52f94a76f4baa3d18774e",
"text": "Constant round authenticated group key agreement via distributed computation p. 115 Efficient ID-based group key agreement with bilinear maps p. 130 New security results on encrypted key exchange p. 145 New results on the hardness of Diffie-Hellman bits p. 159 Short exponent Diffie-Hellman problems p. 173 Efficient signcryption with key privacy from gap Diffie-Hellman groups p. 187 Algebraic attacks over GF(2[superscript k]), application to HFE Challenge 2 and Sflash-v2 p. 201",
"title": ""
}
] |
[
{
"docid": "000bdac12cd4254500e22b92b1906174",
"text": "In this paper we address the topic of generating automatically accurate, meaning preserving and syntactically correct paraphrases of natural language sentences. The design of methods and tools for paraphrasing natural language text is a core task of natural language processing and is quite useful in many applications and procedures. We present a methodology and a tool developed that performs deep analysis of natural language sentences and generate paraphrases of them. The tool performs deep analysis of the natural language sentence and utilizes sets of paraphrasing techniques that can be used to transform structural parts of the dependency tree of a sentence to an equivalent form and also change sentence words with their synonyms and antonyms. In the evaluation study the performance of the method is examined and the accuracy of the techniques is assessed in terms of syntactic correctness and meaning preserving. The results collected are very promising and show the method to be accurate and able to generate quality paraphrases.",
"title": ""
},
{
"docid": "8b846fef0ec1b4a3afd8f7f37e75775a",
"text": "Virtualization started to gain traction in the domain of information technology in the early 2000’s when managing resource distribution was becoming an uphill task for developers. As a result, tools like VMWare, Hyper-V (hypervisor) started making inroads into the software repository on different operating systems. VMWare and Hyper-V could support multiple virtual machines running on them with each having their own isolated environment. Due to this isolation, the security aspects of virtual machines (VMs) did not differ much from that of physical machines (having a dedicated operating system on hardware). The advancement made in the domain of linux containers (LXC) has taken virtualization to an altogether different level where resource utilization by various applications has been further optimized. But the container security has assumed primary importance amongst the researchers today and this paper is inclined towards providing a brief overview about comparisons between security of container and VMs.",
"title": ""
},
{
"docid": "be8b89fc46c919ab53abf86642bb8f8a",
"text": "us to rethink our whole value frame concerning means and ends, and the place of technology within this frame. The ambit of HCI has expanded enormously since the field’s emergence in the early 1980s. Computing has changed significantly; mobile and ubiquitous communication networks span the globe, and technology has been integrated into all aspects of our daily lives. Computing is not simply for calculating, but rather is a medium through which we collaborate and interact with other people. The focus of HCI is not so much on human-computer interaction as it is on human activities mediated by computing [1]. Just as the original meaning of ACM (Association for Computing Machinery) has become dated, perhaps so too has the original meaning of HCI (humancomputer interaction). It is time for us to rethink how we approach issues of people and technology. In this article I explore how we might develop a more humancentered approach to computing. for the 21st century, centered on the exploration of new forms of living with and through technologies that give primacy to human actors, their values, and their activities. The area of concern is much broader than the simple “fit” between people and technology to improve productivity (as in the classic human factors mold); it encompasses a much more challenging territory that includes the goals and activities of people, their values, and the tools and environments that help shape their everyday lives. We have evermore sophisticated and complex technologies available to us in the home, at work, and on the go, yet in many cases, rather than augmenting our choices and capabilities, this plethora of new widgets and systems seems to confuse us—or even worse, disable us. (Surely there is something out of control when a term such as “IT disability” can be taken seriously in national research programs.) Solutions do not reside simply in ergonomic corrections to the interface, but instead require Some years ago, HCI researcher Panu Korhonen of Nokia outlined to me how HCI is changing, as follows: In the early days the Nokia HCI people were told “Please evaluate our user interface, and make it easy to use.” That gave way to “Please help us design this user interface so that it is easy to use.” That, in turn, led to a request: “Please help us find what the users really need so that we know how to design this user interface.” And now, the engineers are pleading with us: “Look at this area of",
"title": ""
},
{
"docid": "886e88c878bae3c56fc81e392cecd1c9",
"text": "This review summarizes data from the numerous investigations from the beginning of the last century to the present. The studies concerned the main issues of the morphology, the life cycle, hosts and localization of Hepatozoon canis (phylum Apicomplexa, suborder Adeleorina, family Hepatozoidae). The characteristic features of hepatozoonosis, caused by Hepatozoon canis in the dog, are evaluated. A survey of clinical signs, gross pathological changes, epidemiology, diagnosis and treatment of the disease was made. The measures for prevention of Hepatozoon canis infection in animals are listed. The importance of hepatozoonosis with regard to public health was evaluated. The studies on the subject, performed in Bulgaria, are discussed.",
"title": ""
},
{
"docid": "b26c2a76a1a64aa98ac5c380947dcf4d",
"text": "The GPML toolbox provides a wide range of functionality for G aussian process (GP) inference and prediction. GPs are specified by mean and covariance func tions; we offer a library of simple mean and covariance functions and mechanisms to compose mor e complex ones. Several likelihood functions are supported including Gaussian and heavytailed for regression as well as others suitable for classification. Finally, a range of inference m thods is provided, including exact and variational inference, Expectation Propagation, and Lapl ace’s method dealing with non-Gaussian likelihoods and FITC for dealing with large regression task s.",
"title": ""
},
{
"docid": "2b3851ac0d4202a90896d160523bedc3",
"text": "Crying is a communication method used by infants given the limitations of language. Parents or nannies who have never had the experience to take care of the baby will experience anxiety when the infant is crying. Therefore, we need a way to understand about infant's cry and apply the formula. This research develops a system to classify the infant's cry sound using MACF (Mel-Frequency Cepstrum Coefficients) feature extraction and BNN (Backpropagation Neural Network) based on voice type. It is classified into 3 classes: hungry, discomfort, and tired. A voice input must be ascertained as infant's cry sound which using 3 features extraction (pitch with 2 approaches: Modified Autocorrelation Function and Cepstrum Pitch Determination, Energy, and Harmonic Ratio). The features coefficients of MFCC are furthermore classified by Backpropagation Neural Network. The experiment shows that the system can classify the infant's cry sound quite well, with 30 coefficients and 10 neurons in the hidden layer.",
"title": ""
},
{
"docid": "ef345b834b801a36b88d3f462f7c2a0e",
"text": "At the global level of the Big Five, Extraversion and Neuroticism are the strongest predictors of life satisfaction. However, Extraversion and Neuroticism are multifaceted constructs that combine more specific traits. This article examined the contribution of facets of Extraversion and Neuroticism to life satisfaction in four studies. The depression facet of Neuroticism and the positive emotions/cheerfulness facet of Extraversion were the strongest and most consistent predictors of life satisfaction. These two facets often accounted for more variance in life satisfaction than Neuroticism and Extraversion. The findings suggest that measures of depression and positive emotions/cheerfulness are necessary and sufficient to predict life satisfaction from personality traits. The results also lead to a more refined understanding of the specific personality traits that influence life satisfaction: Depression is more important than anxiety or anger and a cheerful temperament is more important than being active or sociable.",
"title": ""
},
{
"docid": "19aa8d26eae39aa1360aba38aaefc29e",
"text": "We present a matrix factorization model inspired by challenges we encountered while working on the Xbox movies recommendation system. The item catalog in a recommender system is typically equipped with meta-data features in the form of labels. However, only part of these features are informative or useful with regard to collaborative filtering. By incorporating a novel sparsity prior on feature parameters, the model automatically discerns and utilizes informative features while simultaneously pruning non-informative features.\n The model is designed for binary feedback, which is common in many real-world systems where numeric rating data is scarce or non-existent. However, the overall framework is applicable to any likelihood function. Model parameters are estimated with a Variational Bayes inference algorithm, which is robust to over-fitting and does not require cross-validation and fine tuning of regularization coefficients. The efficacy of our method is illustrated on a sample from the Xbox movies dataset as well as on the publicly available MovieLens dataset. In both cases, the proposed solution provides superior predictive accuracy, especially for long-tail items. We then demonstrate the feature selection capabilities and compare against the common case of simple Gaussian priors. Finally, we show that even without features, our model performs better than a baseline model trained with the popular stochastic gradient descent approach.",
"title": ""
},
{
"docid": "2621777d5f39092295c3f7c548b255f8",
"text": "Caller ID (caller identification) is a service provided by telephone operators where the phone number and/or the name of the caller is transmitted to inform the callee who is calling. Today, most people trust the caller ID information and some banks even use Caller ID to authenticate customers. However, with the proliferation of smartphones and VoIP, it is easy to spoof caller ID information by installing a particular application on the smartphone or by using service providers that offer Caller ID spoofing. As the phone network is fragmented between countries and companies and upgrades of old hardware is costly, no mechanism is available today to let end-users easily detect Caller ID spoofing attacks. In this article, we propose a new approach of using end-to-end caller ID verification schemes that leverage features of the existing phone network infrastructure (CallerDec ). We design an SMS-based and a timing-based version of CallerDec that works with existing combinations of landlines, cellular and VoIP networks and can be deployed at the liberty of the users. We implemented both CallerDec schemes as an App for Android-based phones and validated their effectiveness in detecting spoofing attacks in various scenarios.",
"title": ""
},
{
"docid": "5a3f542176503ddc6fcbd0fe29f08869",
"text": "INTRODUCTION\nArtificial intelligence is a branch of computer science capable of analysing complex medical data. Their potential to exploit meaningful relationship with in a data set can be used in the diagnosis, treatment and predicting outcome in many clinical scenarios.\n\n\nMETHODS\nMedline and internet searches were carried out using the keywords 'artificial intelligence' and 'neural networks (computer)'. Further references were obtained by cross-referencing from key articles. An overview of different artificial intelligent techniques is presented in this paper along with the review of important clinical applications.\n\n\nRESULTS\nThe proficiency of artificial intelligent techniques has been explored in almost every field of medicine. Artificial neural network was the most commonly used analytical tool whilst other artificial intelligent techniques such as fuzzy expert systems, evolutionary computation and hybrid intelligent systems have all been used in different clinical settings.\n\n\nDISCUSSION\nArtificial intelligence techniques have the potential to be applied in almost every field of medicine. There is need for further clinical trials which are appropriately designed before these emergent techniques find application in the real clinical setting.",
"title": ""
},
{
"docid": "28ff277338f2c8441bad7820706acaae",
"text": "Topological insulators (TIs) are characterized by possessing metallic (gapless) surface states and a finite band-gap state in the bulk. As the thickness of a TI layer decreases down to a few nanometers, hybridization between the top and bottom surfaces takes place due to quantum tunneling, consequently at a critical thickness a crossover from a 3D-TI to a 2D insulator occurs. Although such a crossover is generally accessible by scanning tunneling microscopy, or by angle-resolved photoemission spectroscopy, such measurements require clean surfaces. Here, we demonstrate that a cascading nonlinear magneto-optical effect induced via strong spin-orbit coupling can examine such crossovers. The helicity dependence of the time-resolved Kerr rotation exhibits a robust change in periodicity at a critical thickness, from which it is possible to predict the formation of a Dirac cone in a film several quintuple layers thick. This method enables prediction of a Dirac cone using a fundamental nonlinear optical effect that can be applied to a wide range of TIs and related 2D materials.",
"title": ""
},
{
"docid": "e719e42bd2465c13e368cc16e80c106a",
"text": "The health, education, and other service applications for robots that assist through primarily social rather than physical interaction are rapidly growing, and so is the research into such technologies. Socially assistive robotics (SAR) aims to address critical areas and gaps in care by automating supervision, coaching, motivation, and companionship aspects of one-on-one interactions with individuals from various large and growing populations, including stroke survivors, the elderly and individuals with dementia, children with autism spectrum disorders, among many others. In this way, roboticists hope to improve the standard of care for large user groups. Naturally, SAR systems pose several ethical challenges regarding their design, implementation, and deployment. This paper examines the ethical challenges of socially assistive robotics from three points of view (user, caregiver, peer) using core principles from medical ethics (autonomy, beneficence, non-maleficence, justice) to determine how intended and unintended effects of a SAR can impact the delivery of care.",
"title": ""
},
{
"docid": "673c2c8ea409d42fdc96c7e706771d8a",
"text": "Saccadic eye movements are an integral part of many visually guided behaviors. Recent research in humans has shown that processes which control saccades are also involved in establishing perceptual space: A shift in object localization during fixation occurred after saccade amplitudes had been shortened or lengthened by saccadic adaptation. We tested whether similar effects can be established in nonhuman primates. Two trained macaque monkeys localized briefly presented stimuli on a touch screen by indicating the memorized target position with the hand on the screen. The monkeys performed this localization task before and after saccade amplitudes were modified through saccadic adaptation. During localization trials they had to maintain fixation. Successful saccadic adaptation led to a concurrent shift of the touched position on the screen. This mislocalization occurred for both adaptive shortening and lengthening of saccade amplitude. We conclude that saccadic adaptation has the potential to influence localization performance in monkeys, similar to the results found in humans.",
"title": ""
},
{
"docid": "d6b6cbfa8c872b9f9066ea7beda2d2e4",
"text": "Computer Science (CS) Unplugged activities have been deployed in many informal settings to present computing concepts in an engaging manner. To justify use in the classroom, however, it is critical for activities to have a strong educational component. For the past three years, we have been developing and refining a CS Unplugged curriculum for use in middle school classrooms. In this paper, we describe an assessment that maps questions from a comprehensive project to computational thinking (CT) skills and Bloom's Taxonomy. We present results from two different deployments and discuss limitations and implications of our approach.",
"title": ""
},
{
"docid": "e2c2cdb5245b73b7511c434c4901fff8",
"text": "Adversarial machine learning in the context of image processing and related applications has received a large amount of attention. However, adversarial machine learning, especially adversarial deep learning, in the context of malware detection has received much less attention despite its apparent importance. In this paper, we present a framework for enhancing the robustness of Deep Neural Networks (DNNs) against adversarial malware samples, dubbed Hashing Transformation Deep Neural Networks (HashTran-DNN). The core idea is to use hash functions with a certain locality-preserving property to transform samples to enhance the robustness of DNNs in malware classification. The framework further uses a Denoising Auto-Encoder (DAE) regularizer to reconstruct the hash representations of samples, making the resulting DNN classifiers capable of attaining the locality information in the latent space. We experiment with two concrete instantiations of the HashTranDNN framework to classify Android malware. Experimental results show that four known attacks can render standard DNNs useless in classifying Android malware, that known defenses can at most defend three of the four attacks, and that HashTran-DNN can effectively defend against all of the four attacks.",
"title": ""
},
{
"docid": "87aee7d33e78a427edb29126d1ca50c6",
"text": "We present the group fused Lasso for detection of multiple ch ange-points shared by a set of co-occurring one-dimensional signals. Change-points are det cted by approximating the original signals with a constraint on the multidimensional total var iation, leading to piecewise-constant approximations. Fast algorithms are proposed to solve the res ulting optimization problems, either exactly or approximately. Conditions are given for consist ency of both algorithms as the number of signals increases, and empirical evidence is provided to su pport the results on simulated and array comparative genomic hybridization data.",
"title": ""
},
{
"docid": "09a50c87a1aa9f4ef8935049d9578963",
"text": "Online taxicab platforms like DiDi and Uber have impacted hundreds of millions of users on their choices of traveling, but how do users feel about the ride-sharing services, and how to improve their experience? While current ride-sharing services have collected massive travel data, it remains challenging to develop data-driven techniques for modeling and predicting user ride experience. In this work, we aim to accurately predict passenger satisfaction over their rides and understand the key factors that lead to good/bad experiences. Based on in-depth analysis of large-scale travel data from a popular taxicab platform in China, we develop PHINE (Pattern-aware Heterogeneous Information Network Embedding) for data-driven user experience modeling. Our PHINE framework is novel in that it is composed of spatial-temporal node binding and grouping for addressing the inherent data variation, and pattern preservation based joint training for modeling the interactions among drivers, passengers, locations, and time. Extensive experiments on 12 real-world travel datasets demonstrate the effectiveness of PHINE over strong baseline methods. We have deployed PHINE in the DiDi Big Data Center, delivering high-quality predictions for passenger satisfaction on a daily basis.",
"title": ""
},
{
"docid": "3c44f2bf1c8a835fb7b86284c0b597cd",
"text": "This paper explores some of the key electromagnetic design aspects of a synchronous reluctance motor that is equipped with single-tooth windings (i.e., fractional slot concentrated windings). The analyzed machine, a 6-slot 4-pole motor, utilizes a segmented stator core structure for ease of coil winding, pre-assembly, and facilitation of high slot fill factors (~60%). The impact on the motors torque producing capability and its power factor of these inter-segment air gaps between the stator segments is investigated through 2-D finite element analysis (FEA) studies where it is shown that they have a low impact. From previous studies, torque ripple is a known issue with this particular slot–pole combination of synchronous reluctance motor, and the use of two different commercially available semi-magnetic slot wedges is investigated as a method to improve torque quality. An analytical analysis of continuous rotor skewing is also investigated as an attempt to reduce the torque ripple. Finally, it is shown that through a combination of 2-D and 3-D FEA studies in conjunction with experimentally derived results on a prototype machine that axial fringing effects cannot be ignored when predicting the q-axis reactance in such machines. A comparison of measured orthogonal axis flux linkages/reactances with 3-D FEA studies is presented for the first time.",
"title": ""
},
{
"docid": "161c79eeb01624c497446cb2c51f3893",
"text": "In this article, results of a German nationwide survey (KFN schools survey 2007/2008) are presented. The controlled sample of 44,610 male and female ninth-graders was carried out in 2007 and 2008 by the Criminological Research Institute of Lower Saxony (KFN). According to a newly developed screening instrument (KFN-CSAS-II), which was presented to every third juvenile participant (N = 15,168), 3% of the male and 0.3% of the female students are diagnosed as dependent on video games. The data indicate a clear dividing line between extensive gaming and video game dependency (VGD) as a clinically relevant phenomenon. VGD is accompanied by increased levels of psychological and social stress in the form of lower school achievement, increased truancy, reduced sleep time, limited leisure activities, and increased thoughts of committing suicide. In addition, it becomes evident that personal risk factors are crucial for VGD. The findings indicate the necessity of additional research as well as the respective measures in the field of health care policies.",
"title": ""
}
] |
scidocsrr
|
e6daa51a4ccdd300fbcba652271e3acb
|
Learning from Unscripted Deictic Gesture and Language for Human-Robot Interactions
|
[
{
"docid": "d7793313ab21020e79e41817b8372ee8",
"text": "We present a new approach to referring expression generation, casting it as a density estimation problem where the goal is to learn distributions over logical expressions identifying sets of objects in the world. Despite an extremely large space of possible expressions, we demonstrate effective learning of a globally normalized log-linear distribution. This learning is enabled by a new, multi-stage approximate inference technique that uses a pruning model to construct only the most likely logical forms. We train and evaluate the approach on a new corpus of references to sets of visual objects. Experiments show the approach is able to learn accurate models, which generate over 87% of the expressions people used. Additionally, on the previously studied special case of single object reference, we show a 35% relative error reduction over previous state of the art.",
"title": ""
},
{
"docid": "6664ed79a911247b401a4bd0b2cc619c",
"text": "Extracting good representations from images is essential for many computer vision tasks. In this paper, we propose hierarchical matching pursuit (HMP), which builds a feature hierarchy layer-by-layer using an efficient matching pursuit encoder. It includes three modules: batch (tree) orthogonal matching pursuit, spatial pyramid max pooling, and contrast normalization. We investigate the architecture of HMP, and show that all three components are critical for good performance. To speed up the orthogonal matching pursuit, we propose a batch tree orthogonal matching pursuit that is particularly suitable to encode a large number of observations that share the same large dictionary. HMP is scalable and can efficiently handle full-size images. In addition, HMP enables linear support vector machines (SVM) to match the performance of nonlinear SVM while being scalable to large datasets. We compare HMP with many state-of-the-art algorithms including convolutional deep belief networks, SIFT based single layer sparse coding, and kernel based feature learning. HMP consistently yields superior accuracy on three types of image classification problems: object recognition (Caltech-101), scene recognition (MIT-Scene), and static event recognition (UIUC-Sports).",
"title": ""
}
] |
[
{
"docid": "1a4cb9038d3bd71ecd24187ed860e0f7",
"text": "One of the most important fields in discrete mathematics is graph theory. Graph theory is discrete structures, consisting of vertices and edges that connect these vertices. Problems in almost every conceivable discipline can be solved using graph models. The field graph theory started its journey from the problem of Konigsberg Bridges in 1735. This paper is a guide for the applied mathematician who would like to know more about network security, cryptography and cyber security based of graph theory. The paper gives a brief overview of the subject and the applications of graph theory in computer security, and provides pointers to key research and recent survey papers in the area.",
"title": ""
},
{
"docid": "00eb132ce5063dd983c0c36724f82cec",
"text": "This paper analyzes customer product-choice behavior based on the recency and frequency of each customer’s page views on e-commerce sites. Recently, we devised an optimization model for estimating product-choice probabilities that satisfy monotonicity, convexity, and concavity constraints with respect to recency and frequency. This shape-restricted model delivered high predictive performance even when there were few training samples. However, typical e-commerce sites deal in many different varieties of products, so the predictive performance of the model can be further improved by integration of such product heterogeneity. For this purpose, we develop a novel latent-class shape-restricted model for estimating product-choice probabilities for each latent class of products. We also give a tailored expectation-maximization algorithm for parameter estimation. Computational results demonstrate that higher predictive performance is achieved with our latent-class model than with the previous shape-restricted model and common latent-class logistic regression.",
"title": ""
},
{
"docid": "66e7979aff5860f713dffd10e98eed3d",
"text": "The encoder-decoder dialog model is one of the most prominent methods used to build dialog systems in complex domains. Yet it is limited because it cannot output interpretable actions as in traditional systems, which hinders humans from understanding its generation process. We present an unsupervised discrete sentence representation learning method that can integrate with any existing encoderdecoder dialog models for interpretable response generation. Building upon variational autoencoders (VAEs), we present two novel models, DI-VAE and DI-VST that improve VAEs and can discover interpretable semantics via either auto encoding or context predicting. Our methods have been validated on real-world dialog datasets to discover semantic representations and enhance encoder-decoder models with interpretable generation.1",
"title": ""
},
{
"docid": "31da7b5b403ca92dde4d4c590a900aa1",
"text": "In this paper, a new approach for moving an inpipe robot inside underground urban gas pipelines is proposed. Since the urban gas supply system is composed of complicated configurations of pipelines, the inpipe inspection requires a robot with outstanding mobility and corresponding control algorithms to apply for. In advance, this paper introduces a new miniature miniature inpipe robot, called MRINSPECT (Multifunctional Robotic crawler for INpipe inSPECTion) IV, which has been developed for the inspection of urban gas pipelines with a nominal 4-inch inside diameter. Its mechanism for steering with differential–drive wheels arranged three-dimensionally makes itself easily adjust to most pipeline configurations and provides excellent mobility in navigation. Also, analysis for pipelines with fittings are given in detail and geometries of the fittings are mathematically described. It is prerequisite to estimate moving pattern of the robot while passing through the fittings and based on the analysis, a method modulating speed of each drive wheel is proposed. Though modulation of speed is very important during proceeding thought the fittings, it is not easy to control the speeds because each wheel of the robot has contact with the walls having different curvatures. A new and simple way of controlling the speed is developed based on the analysis of the geometrical features of the fittings. This algorithm has the advantage to be applicable without using complicated sensor information. To confirm the effectiveness of the proposed method experiments are performed and additional considerations for the design of an inpipe robot are discussed.",
"title": ""
},
{
"docid": "e525a752409edc5165cfafed08ec6e57",
"text": "In this paper, we propose a recurrent neural network architecture for early sequence classification, when the model is required to output a label as soon as possible with negligible decline in accuracy. Our model is capable of learning how many sequence tokens it needs to observe in order to make a prediction; moreover, the number of steps required differs for each sequence. Experiments on sequential MNIST show that the proposed architecture focuses on different sequence parts during inference, which correspond to contours of the handwritten digits. We also demonstrate the improvement in the prediction quality with a simultaneous reduction in the prefix size used, the extent of which depends on the distribution of distinct class features over time.",
"title": ""
},
{
"docid": "db95a67e1c532badd3ec97a31170bb0c",
"text": "The named entity recognition task aims at identifying and classifying named entities within an open-domain text. This task has been garnering significant attention recently as it has been shown to help improve the performance of many natural language processing applications. In this paper, we investigate the impact of using different sets of features in three discriminative machine learning frameworks, namely, support vector machines, maximum entropy and conditional random fields for the task of named entity recognition. Our language of interest is Arabic. We explore lexical, contextual and morphological features and nine data-sets of different genres and annotations. We measure the impact of the different features in isolation and incrementally combine them in order to evaluate the robustness to noise of each approach. We achieve the highest performance using a combination of 15 features in conditional random fields using broadcast news data (Fbeta = 1=83.34).",
"title": ""
},
{
"docid": "2f20f587bb46f7133900fd8c22cea3ab",
"text": "Recent years have witnessed the significant advance in fine-grained visual categorization, which targets to classify the objects belonging to the same species. To capture enough subtle visual differences and build discriminative visual description, most of the existing methods heavily rely on the artificial part annotations, which are expensive to collect in real applications. Motivated to conquer this issue, this paper proposes a multi-level coarse-to-fine object description. This novel description only requires the original image as input, but could automatically generate visual descriptions discriminative enough for fine-grained visual categorization. This description is extracted from five sources representing coarse-to-fine visual clues: 1) original image is used as the source of global visual clue; 2) object bounding boxes are generated using convolutional neural network (CNN); 3) with the generated bounding box, foreground is segmented using the proposed k nearest neighbour-based co-segmentation algorithm; and 4) two types of part segmentations are generated by dividing the foreground with an unsupervised part learning strategy. The final description is generated by feeding these sources into CNN models and concatenating their outputs. Experiments on two public benchmark data sets show the impressive performance of this coarse-to-fine description, i.e., classification accuracy achieves 82.5% on CUB-200-2011, and 86.9% on fine-grained visual categorization-Aircraft, respectively, which outperform many recent works.",
"title": ""
},
{
"docid": "14739a86487a26452bd73da11264b9e4",
"text": "This paper presents a systematic online prediction method (Social-Forecast) that is capable to accurately forecast the popularity of videos promoted by social media. Social-Forecast explicitly considers the dynamically changing and evolving propagation patterns of videos in social media when making popularity forecasts, thereby being situation and context aware. Social-Forecast aims to maximize the forecast reward, which is defined as a tradeoff between the popularity prediction accuracy and the timeliness with which a prediction is issued. The forecasting is performed online and requires no training phase or a priori knowledge. We analytically bound the prediction performance loss of Social-Forecast as compared to that obtained by an omniscient oracle and prove that the bound is sublinear in the number of video arrivals, thereby guaranteeing its short-term performance as well as its asymptotic convergence to the optimal performance. In addition, we conduct extensive experiments using real-world data traces collected from the videos shared in RenRen, one of the largest online social networks in China. These experiments show that our proposed method outperforms existing view-based approaches for popularity prediction (which are not context-aware) by more than 30% in terms of prediction rewards.",
"title": ""
},
{
"docid": "08b01274311a5c07d726171f52a8513e",
"text": "This paper presents a brief introduction to Vapnik-Chervonenkis (VC) dimension, a quantity which characterizes the difficulty of distribution-independent learning. The paper establishes various elementary results, and discusses how to estimate the VC dimension in several examples of interest in neural network theory.",
"title": ""
},
{
"docid": "7fe2fa777e4206d7a57e785369e98aba",
"text": "A new class of three-dimensional (3-D) bandpass frequency-selective structures (FSSs) with multiple transmission zeros is presented to realize wide out-of-band rejection. The proposed FSSs are based on a two-dimensional (2-D) array of shielded microstrip lines with shorting via to ground, where two different resonators in the substrate are constructed based on the excited substrate mode. Furthermore, metallic plates of rectangular shape and “T-type” are inserted in the air region of shielded microstrip lines, which can introduce additional resonators provided by the air mode. Using this arrangement, a passband with two transmission poles can be obtained. Moreover, multiple transmission zeros outside the passband are produced for improving the out-of-band rejection. The operating principles of these FSSs are explained with the aid of equivalent circuit models. Two examples are designed, fabricated, and measured to verify the proposed structures and circuit models. Measured results demonstrate that the FSSs exhibit high out-of-band rejection and stable filtering response under a large variation of the incidence angle.",
"title": ""
},
{
"docid": "471eca6664d0ae8f6cdfb848bc910592",
"text": "Taxonomic relation identification aims to recognize the ‘is-a’ relation between two terms. Previous works on identifying taxonomic relations are mostly based on statistical and linguistic approaches, but the accuracy of these approaches is far from satisfactory. In this paper, we propose a novel supervised learning approach for identifying taxonomic relations using term embeddings. For this purpose, we first design a dynamic weighting neural network to learn term embeddings based on not only the hypernym and hyponym terms, but also the contextual information between them. We then apply such embeddings as features to identify taxonomic relations using a supervised method. The experimental results show that our proposed approach significantly outperforms other state-of-the-art methods by 9% to 13% in terms of accuracy for both general and specific domain datasets.",
"title": ""
},
{
"docid": "031dbd65ecb8d897d828cd5d904059c1",
"text": "Especially in ill-defined problems like complex, real-world tasks more than one way leads to a solution. Until now, the evaluation of information visualizations was often restricted to measuring outcomes only (time and error) or insights into the data set. A more detailed look into the processes which lead to or hinder task completion is provided by analyzing users' problem solving strategies. A study illustrates how they can be assessed and how this knowledge can be used in participatory design to improve a visual analytics tool. In order to provide the users a tool which functions as a real scaffold, it should allow them to choose their own path to Rome. We discuss how evaluation of problem solving strategies can shed more light on the users' \"exploratory minds\".",
"title": ""
},
{
"docid": "aecaa8c028c4d1098d44d755344ad2fc",
"text": "It is known that training deep neural networks, in particular, deep convolutional networks, with aggressively reduced numerical precision is challenging. The stochastic gradient descent algorithm becomes unstable in the presence of noisy gradient updates resulting from arithmetic with limited numeric precision. One of the wellaccepted solutions facilitating the training of low precision fixed point networks is stochastic rounding. However, to the best of our knowledge, the source of the instability in training neural networks with noisy gradient updates has not been well investigated. This work is an attempt to draw a theoretical connection between low numerical precision and training algorithm stability. In doing so, we will also propose and verify through experiments methods that are able to improve the training performance of deep convolutional networks in fixed point.",
"title": ""
},
{
"docid": "375de005698ccaf54d7b82875f1f16c5",
"text": "This paper describes design, Simulation and manufacturing procedures of HIRAD - a teleoperated Tracked Surveillance UGV for military, Rescue and other civilian missions in various hazardous environments. A Double Stabilizer Flipper mechanism mounted on front pulleys enables the Robot to have good performance in travelling over uneven terrains and climbing stairs. Using this Stabilizer flipper mechanism reduces energy consumption while climbing the stairs or crossing over obstacles. The locomotion system mechanical design is also described in detail. The CAD geometry 3D-model has been produced by CATIA software. To analyze the system mobility, a virtual model was developed with ADAMS Software. This simulation included different mobility maneuvers such as stair climbing, gap crossing and travelling over steep slopes. The simulations enabled us to define motor torque requirements. We performed many experiments with manufactured prototype under various terrain conditions Such as stair climbing, gap crossing and slope elevation. In experiments, HIRAD shows good overcoming ability for the tested terrain conditions.",
"title": ""
},
{
"docid": "90a3dd2bc75817a49a408e7666660e29",
"text": "RATIONALE\nPulmonary arterial hypertension (PAH) is an orphan disease for which the trend is for management in designated centers with multidisciplinary teams working in a shared-care approach.\n\n\nOBJECTIVE\nTo describe clinical and hemodynamic parameters and to provide estimates for the prevalence of patients diagnosed for PAH according to a standardized definition.\n\n\nMETHODS\nThe registry was initiated in 17 university hospitals following at least five newly diagnosed patients per year. All consecutive adult (> or = 18 yr) patients seen between October 2002 and October 2003 were to be included.\n\n\nMAIN RESULTS\nA total of 674 patients (mean +/- SD age, 50 +/- 15 yr; range, 18-85 yr) were entered in the registry. Idiopathic, familial, anorexigen, connective tissue diseases, congenital heart diseases, portal hypertension, and HIV-associated PAH accounted for 39.2, 3.9, 9.5, 15.3, 11.3, 10.4, and 6.2% of the population, respectively. At diagnosis, 75% of patients were in New York Heart Association functional class III or IV. Six-minute walk test was 329 +/- 109 m. Mean pulmonary artery pressure, cardiac index, and pulmonary vascular resistance index were 55 +/- 15 mm Hg, 2.5 +/- 0.8 L/min/m(2), and 20.5 +/- 10.2 mm Hg/L/min/m(2), respectively. The low estimates of prevalence and incidence of PAH in France were 15.0 cases/million of adult inhabitants and 2.4 cases/million of adult inhabitants/yr. One-year survival was 88% in the incident cohort.\n\n\nCONCLUSIONS\nThis contemporary registry highlights current practice and shows that PAH is detected late in the course of the disease, with a majority of patients displaying severe functional and hemodynamic compromise.",
"title": ""
},
{
"docid": "0575f79872ffd036d48efa731bc451e1",
"text": "When learning a new concept, not all training examples may prove equally useful for training: some may have higher or lower training value than others. The goal of this paper is to bring to the attention of the vision community the following considerations: (1) some examples are better than others for training detectors or classifiers, and (2) in the presence of better examples, some examples may negatively impact performance and removing them may be beneficial. In this paper, we propose an approach for measuring the training value of an example, and use it for ranking and greedily sorting examples. We test our methods on different vision tasks, models, datasets and classifiers. Our experiments show that the performance of current state-of-the-art detectors and classifiers can be improved when training on a subset, rather than the whole training set.",
"title": ""
},
{
"docid": "da7d45d2cbac784d31e4d3957f4799e6",
"text": "Malicious Uniform Resource Locator (URL) detection is an important problem in web search and mining, which plays a critical role in internet security. In literature, many existing studies have attempted to formulate the problem as a regular supervised binary classification task, which typically aims to optimize the prediction accuracy. However, in a real-world malicious URL detection task, the ratio between the number of malicious URLs and legitimate URLs is highly imbalanced, making it very inappropriate for simply optimizing the prediction accuracy. Besides, another key limitation of the existing work is to assume a large amount of training data is available, which is impractical as the human labeling cost could be potentially quite expensive. To solve these issues, in this paper, we present a novel framework of Cost-Sensitive Online Active Learning (CSOAL), which only queries a small fraction of training data for labeling and directly optimizes two cost-sensitive measures to address the class-imbalance issue. In particular, we propose two CSOAL algorithms and analyze their theoretical performance in terms of cost-sensitive bounds. We conduct an extensive set of experiments to examine the empirical performance of the proposed algorithms for a large-scale challenging malicious URL detection task, in which the encouraging results showed that the proposed technique by querying an extremely small-sized labeled data (about 0.5% out of 1-million instances) can achieve better or highly comparable classification performance in comparison to the state-of-the-art cost-insensitive and cost-sensitive online classification algorithms using a huge amount of labeled data.",
"title": ""
},
{
"docid": "96fbd665c43461b7cd8bbbe1f0aa43e4",
"text": "Inductor current sensing is becoming widely used in current programmed controllers for microprocessor applications. This method exploits a low-pass filter in parallel with the inductor to provide lossless current sense. A major drawback of inductor current sensing is that accurate sense the DC and AC components of the current signal requires precise matching between the low-pass filter time constant and the inductor time constant (L/RL). However, matching accuracy depends on the tolerance of the components and on the operating conditions; therefore it can hardly be guaranteed. To overcome this problem, a novel digital auto-tuning system is proposed that automatically compensates any time constant mismatch. This auto-tuning system has been developed for VRM current programmed controllers. It makes it possible to meet the adaptive voltage positioning requirements using conventional and low cost components, and to solve problems such as aging effects, temperature variations and process tolerances as well. A prototype of the auto-tuning system based on an FPGA and a commercial DC/DC controller has been designed and tested. The experimental results fully confirmed the effectiveness of the proposed method, showing an improvement of the current sense precision from about 30% up to 4%. This innovative solution is suitable to fulfill the challenging accuracy specifications required by the future VRM applications",
"title": ""
},
{
"docid": "66255dc6c741737b3576e7ddefec96ce",
"text": "Neural Machine Translation (NMT) with source side attention have achieved remarkable performance. however, there has been little work exploring to attend to the target side which can potentially enhance the memory capbility of NMT. We reformulate a Decoding-History Enhanced Attention mechanism (DHEA) to render NMT model better at selecting both source side and target side information. DHEA enables a dynamic control on the ratios at which source and target contexts contribute to the generation of target words, offering a way to weakly induce structure relations among both source and target tokens. It also allows training errors to be directly back-propagated through short-cut connections and effectively alleviates the gradient vanishing problem. The empirical study on Chinese-English translation shows that our model with proper configuration can improve by 0.9 BLEU upon Transformer and achieve the best reported results in the same dataset. On WMT14 English-German task and a larger WMT14 English-French task, our model achieves comparable results with the state-of-the-art NMT systems.",
"title": ""
},
{
"docid": "f1dc40c02d162988ca118c6e4d15ad06",
"text": "Spheres are popular geometric primitives found in many manufactured objects. However, sphere fitting and extraction have not been investigated in depth. In this paper, a robust method is proposed to extract multiple spheres accurately and simultaneously from unorganized point clouds. Moreover, a novel validation step is presented to assess the quality of the detected spheres, which help remove the confusion between perfect spheres and sphere-like shapes such as ellipsoids and paraboloids. A novel sampling strategy is introduced to reduce computational burden for sphere extraction. Experiments on both synthetic and scanned point clouds with different levels of noise and outliers are conducted and the results compared to state-of-the-art methods. These experiments demonstrate the efficiency and robustness of the proposed sphere extraction method.",
"title": ""
}
] |
scidocsrr
|
a97813f7695b044e2538b92cbaa58f34
|
Cost-Effective Resource Provisioning for MapReduce in a Cloud
|
[
{
"docid": "e4007c7e6a80006238e1211a213e391b",
"text": "Various techniques for multiprogramming parallel multiprocessor systems have been proposed recently as a way to improve performance. A natural approach is to divide the set of processing elements into independent partitions, and simultaneously execute a diierent parallel program in each partition. Several issues arise, including the determination of the optimal number of programs allowed to execute simultaneously (i.e., the number of partitions) and the corresponding partition sizes. This can be done statically, dynamically, or adaptively, depending on the system and workload characteristics. In this paper several adaptive partitioning policies are evaluated. Their behavior, as well as the behavior of static policies, is investigated using real parallel programs. The policy applicability to actual systems is addressed, and implementation results of the proposed policies on an iPSC/2 hypercube system are reported. The concept of robustness (i.e., the ability to perform well on a wide range of workload types over a wide range of arrival rates) is presented and quantiied. Relative rankings of the policies are obtained, depending on the speciic work-load characteristics. A trade-oo is shown between potential performance and the amount of knowledge of the workload characteristics required to select the best policy. A policy that performs best when such knowledge of workload parallelism and/or arrival rate is not available is proposed as the most robust of those analyzed.",
"title": ""
}
] |
[
{
"docid": "c1e0b1c318f73187c75be26f66d95632",
"text": "Newly emerged gallium nitride (GaN) devices feature ultrafast switching speed and low on-state resistance that potentially provide significant improvements for power converters. This paper investigates the benefits of GaN devices in an LLC resonant converter and quantitatively evaluates GaN devices' capabilities to improve converter efficiency. First, the relationship of device and converter design parameters to the device loss is established based on an analytical model of LLC resonant converter operating at the resonance. Due to the low effective output capacitance of GaN devices, the GaN-based design demonstrates about 50% device loss reduction compared with the Si-based design. Second, a new perspective on the extra transformer winding loss due to the asymmetrical primary-side and secondary-side current is proposed. The device and design parameters are tied to the winding loss based on the winding loss model in the finite element analysis (FEA) simulation. Compared with the Si-based design, the winding loss is reduced by 18% in the GaN-based design. Finally, in order to verify the GaN device benefits experimentally, 400- to 12-V, 300-W, 1-MHz GaN-based and Si-based LLC resonant converter prototypes are built and tested. One percent efficiency improvement, which is 24.8% loss reduction, is achieved in the GaN-based converter.",
"title": ""
},
{
"docid": "ef409ee79d73f9294daa8ac981de7a6d",
"text": "In this paper, we propose the amphibious influence maximization (AIM) model that combines traditional marketing via content providers and viral marketing to consumers in social networks in a single framework. In AIM, a set of content providers and consumers form a bipartite network while consumers also form their social network, and influence propagates from the content providers to consumers and among consumers in the social network following the independent cascade model. An advertiser needs to select a subset of seed content providers and a subset of seed consumers, such that the influence from the seed providers passing through the seed consumers could reach a large number of consumers in the social network in expectation.\n We prove that the AIM problem is NP-hard to approximate to within any constant factor via a reduction from Feige's k-prover proof system for 3-SAT5. We also give evidence that even when the social network graph is trivial (i.e. has no edges), a polynomial time constant factor approximation for AIM is unlikely. However, when we assume that the weighted bi-adjacency matrix that describes the influence of content providers on consumers is of constant rank, a common assumption often used in recommender systems, we provide a polynomial-time algorithm that achieves approximation ratio of (1-1/e-ε)3 for any (polynomially small) ε > 0. Our algorithmic results still hold for a more general model where cascades in social network follow a general monotone and submodular function.",
"title": ""
},
{
"docid": "81e0cc5f85857542c039b0c5fe80e010",
"text": "This paper proposes a pitch estimation algorithm that is based on optimal harmonic model fitting. The algorithm operates directly on the time-domain signal and has a relatively simple mathematical background. To increase its efficiency and accuracy, the algorithm is applied in combination with an autocorrelation-based initialization phase. For testing purposes we compare its performance on pitch-annotated corpora with several conventional time-domain pitch estimation algorithms, and also with a recently proposed one. The results show that even the autocorrelation-based first phase significantly outperforms the traditional methods, and also slightly the recently proposed yin algorithm. After applying the second phase – the harmonic approximation step – the amount of errors can be further reduced by about 20% relative to the error obtained in the first phase.",
"title": ""
},
{
"docid": "dd0bbc039e1bbc9e36ffe087e105cf56",
"text": "Using a comparative analysis approach, this article examines the development, characteristics and issues concerning the discourse of modern Asian art in the twentieth century, with the aim of bringing into picture the place of Asia in the history of modernism. The wide recognition of the Western modernist canon as centre and universal displaces the contribution and significance of the non-Western world in the modern movement. From a cross-cultural perspective, this article demonstrates that modernism in the field of visual arts in Asia, while has had been complex and problematic, nevertheless emerged. Rather than treating Asian art as a generalized subject, this article argues that, with their subtly different notions of culture, identity and nationhood, the modernisms that emerged from various nations in this region are diverse and culturally specific. Through the comparison of various art-historical contexts in this region (namely China, India, Japan and Korea), this article attempts to map out some similarities as well as differences in their pursuit of an autonomous modernist representation.",
"title": ""
},
{
"docid": "11004995f1ca07cd9fc721593c1c79a3",
"text": "This paper presents an efficient farfield simulation, exploiting and linking the strength of three commercial simulation tools. For many practical array and multiport antenna designs it is essential to examine the farfield for a general port excitation or termination scenario. Some examples are phased array designs, problems related to mutual coupling and scan blindness, tuning of parasitic elements, MIMO antennas and correlation. The proposed method fully characterizes the nearfield and the farfield of the antenna, so to compute farfield patterns by means of superposition for any voltage/current state at the port terminals. A recently published low-cost patch antenna phased array with analog beam steering by another group was found very suitable to demonstrate the proposed method.",
"title": ""
},
{
"docid": "7f8ee14d2d185798c3864178bd450f3d",
"text": "In this paper, a new sensing device that can simultaneously monitor traffic congestion and urban flash floods is presented. This sensing device is based on the combination of passive infrared sensors (PIRs) and ultrasonic rangefinder, and is used for real-time vehicle detection, classification, and speed estimation in the context of wireless sensor networks. This framework relies on dynamic Bayesian Networks to fuse heterogeneous data both spatially and temporally for vehicle detection. To estimate the speed of the incoming vehicles, we first use cross correlation and wavelet transform-based methods to estimate the time delay between the signals of different sensors. We then propose a calibration and self-correction model based on Bayesian Networks to make a joint inference by all sensors about the speed and the length of the detected vehicle. Furthermore, we use the measurements of the ultrasonic and the PIR sensors to perform vehicle classification. Validation data (using an experimental dual infrared and ultrasonic traffic sensor) show a 99% accuracy in vehicle detection, a mean error of 5 kph in vehicle speed estimation, a mean error of 0.7m in vehicle length estimation, and a high accuracy in vehicle classification. Finally, we discuss the computational performance of the algorithm, and show that this framework can be implemented on low-power computational devices within a wireless sensor network setting. Such decentralized processing greatly improves the energy consumption of the system and minimizes bandwidth usage.",
"title": ""
},
{
"docid": "ad9c5cbb46a83e2b517fb548baf83ce0",
"text": "Single-carrier frequency division multiple access (SC-FDMA) has been selected as the uplink access scheme in the UTRA Long Term Evolution (LTE) due to its low peak-to-average power ratio properties compared to orthogonal frequency division multiple access. Nevertheless, in order to achieve such a benefit, it requires a localized allocation of the resource blocks, which naturally imposes a severe constraint on the scheduler design. In this paper, three new channel-aware scheduling algorithms for SC-FDMA are proposed and evaluated in both local and wide area scenarios. Whereas the first maximum expansion (FME) and the recursive maximum expansion (RME) are relative simple solutions to the above-mentioned problem, the minimum area-difference to the envelope (MADE) is a more computational expensive approach, which, on the other hand, performs closer to the optimal combinatorial solution. Simulation results show that adopting a proportional fair metric all the proposed algorithms quickly reach a high level of data-rate fairness. At the same time, they definitely outperform the round-robin scheduling in terms of cell spectral efficiency with gains up to 68.8% in wide area environments.",
"title": ""
},
{
"docid": "3e749b561a67f2cc608f40b15c71098d",
"text": "As it emerged from philosophical analyses and cognitive research, most concepts exhibit typicality effects, and resist to the efforts of defining them in terms of necessary and sufficient conditions. This holds also in the case of many medical concepts. This is a problem for the design of computer science ontologies, since knowledge representation formalisms commonly adopted in this field (such as, in the first place, the Web Ontology Language OWL) do not allow for the representation of concepts in terms of typical traits. The need of representing concepts in terms of typical traits concerns almost every domain of real world knowledge, including medical domains. In particular, in this article we take into account the domain of mental disorders, starting from the DSM-5 descriptions of some specific disorders. We favour a hybrid approach to concept representation, in which ontology oriented formalisms are combined to a geometric representation of knowledge based on conceptual space. As a preliminary step to apply our proposal to mental disorder concepts, we started to develop an OWL ontology of the schizophrenia spectrum, which is as close as possible to the DSM-5 descriptions.",
"title": ""
},
{
"docid": "60c887b5df030cc35ad805494d0d8c57",
"text": "Robots typically possess sensors of different modalities, such as colour cameras, inertial measurement units, and 3D laser scanners. Often, solving a particular problem becomes easier when more than one modality is used. However, while there are undeniable benefits to combine sensors of different modalities the process tends to be complicated. Segmenting scenes observed by the robot into a discrete set of classes is a central requirement for autonomy as understanding the scene is the first step to reason about future situations. Scene segmentation is commonly performed using either image data or 3D point cloud data. In computer vision many successful methods for scene segmentation are based on conditional random fields (CRF) where the maximum a posteriori (MAP) solution to the segmentation can be obtained by inference. In this paper we devise a new CRF inference method for scene segmentation that incorporates global constraints, enforcing the sets of nodes are assigned the same class label. To do this efficiently, the CRF is formulated as a relaxed quadratic program whose MAP solution is found using a gradient-based optimisation approach. The proposed method is evaluated on images and 3D point cloud data gathered in urban environments where image data provides the appearance features needed by the CRF, while the 3D point cloud data provides global spatial constraints over sets of nodes. Comparisons with belief propagation, conventional quadratic programming relaxation, and higher order potential CRF show the benefits of the proposed method.",
"title": ""
},
{
"docid": "9b70f2d928abefa3512cbcb97ab63abb",
"text": "Converging evidence suggests that each parahippocampal and hippocampal subregion contributes uniquely to the encoding, consolidation and retrieval of declarative memories, but their precise roles remain elusive. Current functional thinking does not fully incorporate the intricately connected networks that link these subregions, owing to their organizational complexity; however, such detailed anatomical knowledge is of pivotal importance for comprehending the unique functional contribution of each subregion. We have therefore developed an interactive diagram with the aim to display all of the currently known anatomical connections of the rat parahippocampal–hippocampal network. In this Review, we integrate the existing anatomical knowledge into a concise description of this network and discuss the functional implications of some relatively underexposed connections.",
"title": ""
},
{
"docid": "c10c8708b35aeac01d59ffe2c1d64f3e",
"text": "Social groups can be remarkably smart and knowledgeable when their averaged judgements are compared with the judgements of individuals. Already Galton [Galton F (1907) Nature 75:7] found evidence that the median estimate of a group can be more accurate than estimates of experts. This wisdom of crowd effect was recently supported by examples from stock markets, political elections, and quiz shows [Surowiecki J (2004) The Wisdom of Crowds]. In contrast, we demonstrate by experimental evidence (N = 144) that even mild social influence can undermine the wisdom of crowd effect in simple estimation tasks. In the experiment, subjects could reconsider their response to factual questions after having received average or full information of the responses of other subjects. We compare subjects' convergence of estimates and improvements in accuracy over five consecutive estimation periods with a control condition, in which no information about others' responses was provided. Although groups are initially \"wise,\" knowledge about estimates of others narrows the diversity of opinions to such an extent that it undermines the wisdom of crowd effect in three different ways. The \"social influence effect\" diminishes the diversity of the crowd without improvements of its collective error. The \"range reduction effect\" moves the position of the truth to peripheral regions of the range of estimates so that the crowd becomes less reliable in providing expertise for external observers. The \"confidence effect\" boosts individuals' confidence after convergence of their estimates despite lack of improved accuracy. Examples of the revealed mechanism range from misled elites to the recent global financial crisis.",
"title": ""
},
{
"docid": "c0767c58b4a5e81ddc35d045ccaa137f",
"text": "A reinforcement learning agent that needs to pursue different goals across episodes requires a goal-conditional policy. In addition to their potential to generalize desirable behavior to unseen goals, such policies may also enable higher-level planning based on subgoals. In sparse-reward environments, the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended appears crucial to enable sample efficient learning. However, reinforcement learning agents have only recently been endowed with such capacity for hindsight. In this paper, we demonstrate how hindsight can be introduced to policy gradient methods, generalizing this idea to a broad class of successful algorithms. Our experiments on a diverse selection of sparse-reward environments show that hindsight leads to a remarkable increase in sample efficiency.",
"title": ""
},
{
"docid": "017d1bb9180e5d1f8a01604630ebc40d",
"text": "This paper presents an unsupervised distribution-free change detection approach for synthetic aperture radar (SAR) images based on an image fusion strategy and a novel fuzzy clustering algorithm. The image fusion technique is introduced to generate a difference image by using complementary information from a mean-ratio image and a log-ratio image. In order to restrain the background information and enhance the information of changed regions in the fused difference image, wavelet fusion rules based on an average operator and minimum local area energy are chosen to fuse the wavelet coefficients for a low-frequency band and a high-frequency band, respectively. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates the information about spatial context in a novel fuzzy way for the purpose of enhancing the changed information and of reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and gains a better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibited lower error than its preexistences.",
"title": ""
},
{
"docid": "9cab2a46c4189ebd0b67edbe5558d305",
"text": "We provide approximation algorithms for several variants of the Firefighter problem on general graphs. The Firefighter problem models the case where an infection or another diffusive process (such as an idea, a computer virus, or a fire) is spreading through a network, and our goal is to stop this infection by using targeted vaccinations. Specifically, we are allowed to vaccinate at most B nodes per time-step (for some budget B), with the goal of minimizing the effect of the infection. The difficulty of this problem comes from its temporal component, since we must choose nodes to vaccinate at every time-step while the infection is spreading through the network, leading to notions of “cuts over time”. We consider two versions of the Firefighter problem: a “non-spreading” model, where vaccinating a node means only that this node cannot be infected; and a “spreading” model where the vaccination itself is an infectious process, such as in the case where the infection is a harmful idea, and the vaccine to it is another infectious idea. We give complexity and approximation results for problems on both models.",
"title": ""
},
{
"docid": "2d42dfd45c0759cd795896179eea113c",
"text": "We present a neural-network based approach to classifying online hate speech in general, as well as racist and sexist speech in particular. Using pre-trained word embeddings and max/mean pooling from simple, fullyconnected transformations of these embeddings, we are able to predict the occurrence of hate speech on three commonly used publicly available datasets. Our models match or outperform state of the art F1 performance on all three datasets using significantly fewer parameters and minimal feature preprocessing compared to previous methods.",
"title": ""
},
{
"docid": "9ae780074520445bfe0df79532ee1c0d",
"text": "We propose a technique for achieving scalable blockchain consensus by means of a “sample-and-fallback” game: split transactions up into collations affecting small portions of the blockchain state, and require that in order for a collation of transactions to be valid, it must be approved by a randomly selected fixed-size sample taken from a large validator pool. In the exceptional case that a bad collation does pass through employ a mechanism by which a node can “challenge” an invalid collation and escalate the decision to a much larger set of validators. Our scheme is designed as a generalized overlay that can be applied to any underlying blockchain consensus algorithm (e.g. proof of work, proof of stake, social-network consensus, M-of-N semi-trusted validators) and almost any state transition function, provided that state changes are sufficiently “localized”. Our basic designs allow for a network with nodes bounded by O(N) computational power to process a transaction load and state size of O(N2− ), though we also propose an experimental “stacking” strategy for achieving arbitrary scalability guarantees up to a maximum of O(exp(N/k)) transactional load.",
"title": ""
},
{
"docid": "8aaa4ab4879ad55f43114cf8a0bd3855",
"text": "Photo-based activity on social networking sites has recently been identified as contributing to body image concerns. The present study aimed to investigate experimentally the effect of number of likes accompanying Instagram images on women's own body dissatisfaction. Participants were 220 female undergraduate students who were randomly assigned to view a set of thin-ideal or average images paired with a low or high number of likes presented in an Instagram frame. Results showed that exposure to thin-ideal images led to greater body and facial dissatisfaction than average images. While the number of likes had no effect on body dissatisfaction or appearance comparison, it had a positive effect on facial dissatisfaction. These effects were not moderated by Instagram involvement, but greater investment in Instagram likes was associated with more appearance comparison and facial dissatisfaction. The results illustrate how the uniquely social interactional aspects of social media (e.g., likes) can affect body image.",
"title": ""
},
{
"docid": "23989e6276ad8e60b0a451e3e9d5fe50",
"text": "The significant benefits associated with microgrids have led to vast efforts to expand their penetration in electric power systems. Although their deployment is rapidly growing, there are still many challenges to efficiently design, control, and operate microgrids when connected to the grid, and also when in islanded mode, where extensive research activities are underway to tackle these issues. It is necessary to have an across-the-board view of the microgrid integration in power systems. This paper presents a review of issues concerning microgrids and provides an account of research in areas related to microgrids, including distributed generation, microgrid value propositions, applications of power electronics, economic issues, microgrid operation and control, microgrid clusters, and protection and communications issues.",
"title": ""
},
{
"docid": "b836df8acd489acae10dbd8d58f6a8b3",
"text": "This paper presents a benchmark dataset for the task of inter-sentence relation extraction. The paper explains the distant supervision method followed for creating the dataset for inter-sentence relation extraction, involving relations previously used for standard intrasentence relation extraction task. The study evaluates baseline models such as bag-of-words and sequence based recurrent neural network models on the developed dataset and shows that recurrent neural network models are more useful for the task of intra-sentence relation extraction. Comparing the results of the present work on iner-sentence relation extraction with previous work on intra-sentence relation extraction, the study suggests the need for more sophisticated models to handle long-range information between entities across sentences.",
"title": ""
},
{
"docid": "688848d25ef154a797f85e03987b795f",
"text": "In this paper, we propose an omnidirectional mobile mechanism with surface contact. This mechanism is expected to perform on rough terrain and weak ground at disaster sites. In the discussion on the drive mechanism, we explain how a two axes orthogonal drive transmission system is important and we propose a principle drive mechanism for omnidirectional motion. In addition, we demonstrated that the proposed drive mechanism has potential for omnidirectional movement on rough ground by conducting experiments with prototypes.",
"title": ""
}
] |
scidocsrr
|
5fc2ffd04afe6ed7ec6d7e687a518403
|
New multi-stage similarity measure for calculation of pairwise patent similarity in a patent citation network
|
[
{
"docid": "09c9a0990946fd884df70d4eeab46ecc",
"text": "Studies of technological change constitute a field of growing importance and sophistication. In this paper we contribute to the discussion with a methodological reflection and application of multi-stage patent citation analysis for the mea surement of inventive progress. Investigating specific patterns of patent citation data, we conclude that single-stage citation analysis cannot reveal technological paths or linea ges. Therefore, one should also make use of indirect citations and bibliographical coupling. To measure aspects of cumulative inventive progress, we develop a “shared specialization measu r ” of patent families. We relate this measure to an expert rating of the technological va lue dded in the field of variable valve actuation for internal combustion engines. In sum, the study presents promising evidence for multi-stage patent citation analysis in order to ex plain aspects of technological change. JEL classification: O31",
"title": ""
}
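The two ingredients named in this abstract, bibliographic coupling and indirect (multi-step) citations, can be sketched as simple set operations over a citation graph. The weighting scheme and toy graph below are assumptions for illustration only; the paper's actual shared-specialization measure is defined over patent families and expert ratings.

```python
# Rough sketch: combine bibliographic coupling with indirect (two-step)
# citation overlap to score pairwise patent similarity. Weights are assumed.
def bibliographic_coupling(cites, a, b):
    """Fraction of shared references between patents a and b (cosine-style)."""
    ra, rb = cites.get(a, set()), cites.get(b, set())
    if not ra or not rb:
        return 0.0
    return len(ra & rb) / (len(ra) * len(rb)) ** 0.5

def indirect_references(cites, a, depth=2):
    """References reachable within `depth` citation steps from patent a."""
    frontier, seen = {a}, set()
    for _ in range(depth):
        frontier = {r for p in frontier for r in cites.get(p, set())} - seen
        seen |= frontier
    return seen

def multi_stage_similarity(cites, a, b, w_direct=0.7, w_indirect=0.3):
    direct = bibliographic_coupling(cites, a, b)
    ia, ib = indirect_references(cites, a), indirect_references(cites, b)
    indirect = len(ia & ib) / max(1, len(ia | ib))     # Jaccard on 2-step sets
    return w_direct * direct + w_indirect * indirect

cites = {"P1": {"A", "B"}, "P2": {"B", "C"}, "A": {"X"}, "B": {"X", "Y"}, "C": {"Y"}}
print(multi_stage_similarity(cites, "P1", "P2"))
```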
] |
[
{
"docid": "8da8ecae2ae9f49135dd3480992069f0",
"text": "In this paper, we investigate the use of decentralized blockchain mechanisms for delivering transparent, secure, reliable, and timely energy flexibility, under the form of adaptation of energy demand profiles of Distributed Energy Prosumers, to all the stakeholders involved in the flexibility markets (Distribution System Operators primarily, retailers, aggregators, etc.). In our approach, a blockchain based distributed ledger stores in a tamper proof manner the energy prosumption information collected from Internet of Things smart metering devices, while self-enforcing smart contracts programmatically define the expected energy flexibility at the level of each prosumer, the associated rewards or penalties, and the rules for balancing the energy demand with the energy production at grid level. Consensus based validation will be used for demand response programs validation and to activate the appropriate financial settlement for the flexibility providers. The approach was validated using a prototype implemented in an Ethereum platform using energy consumption and production traces of several buildings from literature data sets. The results show that our blockchain based distributed demand side management can be used for matching energy demand and production at smart grid level, the demand response signal being followed with high accuracy, while the amount of energy flexibility needed for convergence is reduced.",
"title": ""
},
{
"docid": "d8c6ad404d8d8c69f9f6bd28911a0937",
"text": "A hybrid hydrologic estimation model is presented with the aim of performing accurate river flow forecasts without the need of using prior knowledge from the experts in the field. The problem of predicting stream flows is a non-trivial task because the various physical mechanisms governing the river flow dynamics act on a wide range of temporal and spatial scales and almost all the mechanisms involved in the river flow process present some degree of nonlinearity. The proposed system incorporates both statistical and artificial intelligence techniques used at different stages of the reasoning cycle in order to calculate the mean daily water volume forecast of the Salvajina reservoir inflow located at the Department of Cauca, Colombia. The accuracy of the proposed model is compared against other well-known artificial intelligence techniques and several statistical tools previously applied in time series forecasting. The results obtained from the experiments carried out using real data from years 1950 to 2006 demonstrate the superiority of the hybrid system. © 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "71a9394d995cefb8027bed3c56ec176c",
"text": "A broadband microstrip-fed printed antenna is proposed for phased antenna array systems. The antenna consists of two parallel-modified dipoles of different lengths. The regular dipole shape is modified to a quasi-rhombus shape by adding two triangular patches. Using two dipoles helps maintain stable radiation patterns close to their resonance frequencies. A modified array configuration is proposed to further enhance the antenna radiation characteristics and usable bandwidth. Scanning capabilities are studied for a four-element array. The proposed antenna provides endfire radiation patterns with high gain, high front-to-back (F-to-B) ratio, low cross-polarization level, wide beamwidth, and wide scanning angles in a wide bandwidth of 103%",
"title": ""
},
{
"docid": "3a322129019eed67686018404366fe0b",
"text": "Scientists and casual users need better ways to query RDF databases or Linked Open Data. Using the SPARQL query language requires not only mastering its syntax and semantics but also understanding the RDF data model, the ontology used, and URIs for entities of interest. Natural language query systems are a powerful approach, but current techniques are brittle in addressing the ambiguity and complexity of natural language and require expensive labor to supply the extensive domain knowledge they need. We introduce a compromise in which users give a graphical \"skeleton\" for a query and annotates it with freely chosen words, phrases and entity names. We describe a framework for interpreting these \"schema-agnostic queries\" over open domain RDF data that automatically translates them to SPARQL queries. The framework uses semantic textual similarity to find mapping candidates and uses statistical approaches to learn domain knowledge for disambiguation, thus avoiding expensive human efforts required by natural language interface systems. We demonstrate the feasibility of the approach with an implementation that performs well in an evaluation on DBpedia data.",
"title": ""
},
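A central step described above is mapping the user's freely chosen words to ontology terms by semantic textual similarity before emitting SPARQL. The fragment below is a toy illustration of that step; the embedding vectors, property URIs, and query template are invented placeholders, not the system's actual resources.

```python
# Toy sketch of mapping free-text labels to ontology terms by vector
# similarity, then filling a SPARQL template. Vectors and URIs are made up.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def best_match(label_vec, candidates):
    """candidates: dict mapping ontology URI -> embedding vector."""
    return max(candidates, key=lambda uri: cosine(label_vec, candidates[uri]))

# Hypothetical pre-computed embeddings for a user phrase and ontology terms.
user_vec = np.array([0.9, 0.1, 0.0])                      # e.g. "born in"
ontology = {"dbo:birthPlace": np.array([0.85, 0.2, 0.0]),
            "dbo:deathPlace": np.array([0.1, 0.9, 0.1])}

prop = best_match(user_vec, ontology)
query = f"SELECT ?x WHERE {{ ?x {prop} dbr:Berlin . }}"
print(query)   # SELECT ?x WHERE { ?x dbo:birthPlace dbr:Berlin . }
```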
{
"docid": "404a6a58adbd5277e10486d924af8795",
"text": "Data centers (DCs), owing to the exponential growth of Internet services, have emerged as an irreplaceable and crucial infrastructure to power this ever-growing trend. A DC typically houses a large number of computing and storage nodes, interconnected by a specially designed network, namely, DC network (DCN). The DCN serves as a communication backbone and plays a pivotal role in optimizing DC operations. However, compared to the traditional network, the unique requirements in the DCN, for example, large scale, vast application diversity, high power density, and high reliability, pose significant challenges to its infrastructure and operations. We have observed from the premium publication venues (e.g., journals and system conferences) that increasing research efforts are being devoted to optimize the design and operations of the DCN. In this paper, we aim to present a systematic taxonomy and survey of recent research efforts on the DCN. Specifically, we propose to classify these research efforts into two areas: 1) DCN infrastructure and 2) DCN operations. For the former aspect, we review and compare the list of transmission technologies and network topologies used or proposed in the DCN infrastructure. For the latter aspect, we summarize the existing traffic control techniques in the DCN operations, and survey optimization methods to achieve diverse operational objectives, including high network utilization, fair bandwidth sharing, low service latency, low energy consumption, high resiliency, and etc., for efficient DC operations. We finally conclude this survey by envisioning a few open research opportunities in DCN infrastructure and operations.",
"title": ""
},
{
"docid": "0e144e826ab88464c9e8166b84b483b8",
"text": "Video-on-demand streaming services have gained popularity over the past few years. An increase in the speed of the access networks has also led to a larger number of users watching videos online. Online video streaming traffic is estimated to further increase from the current value of 57% to 69% by 2017, Cisco, 2014. In order to retain the existing users and attract new users, service providers attempt to satisfy the user's expectations and provide a satisfactory viewing experience. The first step toward providing a satisfactory service is to be able to quantify the users' perception of the current service level. Quality of experience (QoE) is a quality metric that provides a holistic measure of the users' perception of the quality. In this survey, we first present a tutorial overview of the popular video streaming techniques deployed for stored videos, followed by identifying various metrics that could be used to quantify the QoE for video streaming services; finally, we present a comprehensive survey of the literature on various tools and measurement methodologies that have been proposed to measure or predict the QoE of online video streaming services.",
"title": ""
},
{
"docid": "cf61f1ecc010e5c021ebbfcf5cbfecf6",
"text": "Arachidonic acid plays a central role in a biological control system where such oxygenated derivatives as prostaglandins, thromboxanes, and leukotrienes are mediators. The leukotrienes are formed by transformation of arachidonic acid into an unstable epoxide intermediate, leukotriene A4, which can be converted enzymatically by hydration to leukotriene B4, and by addition of glutathione to leukotriene C4. This last compound is metabolized to leukotrienes D4 and E4 by successive elimination of a gamma-glutamyl residue and glycine. Slow-reacting substance of anaphylaxis consists of leukotrienes C4, D4, and E4. The cysteinyl-containing leukotrienes are potent bronchoconstrictors, increase vascular permeability in postcapillary venules, and stimulate mucus secretion. Leukotriene B4 causes adhesion and chemotactic movement of leukocytes and stimulates aggregation, enzyme release, and generation of superoxide in neutrophils. Leukotrienes C4, D4, and E4, which are released from the lung tissue of asthmatic subjects exposed to specific allergens, seem to play a pathophysiological role in immediate hypersensitivity reactions. These leukotrienes, as well as leukotriene B4, have pro-inflammatory effects.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "2f1862591d5f9ee80d7cdcb930f86d8d",
"text": "In this research convolutional neural networks are used to recognize whether a car on a given image is damaged or not. Using transfer learning to take advantage of available models that are trained on a more general object recognition task, very satisfactory performances have been achieved, which indicate the great opportunities of this approach. In the end, also a promising attempt in classifying car damages into a few different classes is presented. Along the way, the main focus was on the influence of certain hyper-parameters and on seeking theoretically founded ways to adapt them, all with the objective of progressing to satisfactory results as fast as possible. This research open doors for future collaborations on image recognition projects in general and for the car insurance field in particular.",
"title": ""
},
{
"docid": "4cb475f264a8773dc502c9bfdd7b260c",
"text": "Thinking about intelligent robots involves consideration of how such systems can be enabled to perceive, interpret and act in arbitrary and dynamic environments. While sensor perception and model interpretation focus on the robot's internal representation of the world rather passively, robot grasping capabilities are needed to actively execute tasks, modify scenarios and thereby reach versatile goals. These capabilities should also include the generation of stable grasps to safely handle even objects unknown to the robot. We believe that the key to this ability is not to select a good grasp depending on the identification of an object (e.g. as a cup), but on its shape (e.g. as a composition of shape primitives). In this paper, we envelop given 3D data points into primitive box shapes by a fit-and-split algorithm that is based on an efficient Minimum Volume Bounding Box implementation. Though box shapes are not able to approximate arbitrary data in a precise manner, they give efficient clues for planning grasps on arbitrary objects. We present the algorithm and experiments using the 3D grasping simulator Grasplt!.",
"title": ""
},
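The fit-and-split idea described above can be approximated in a few lines: fit a bounding box to the points, and recursively split where splitting buys the largest reduction in enclosed volume. The sketch below uses axis-aligned boxes and median splits purely to keep the example short; the paper fits minimum-volume (oriented) bounding boxes, and the thresholds here are arbitrary assumptions.

```python
# Simplified fit-and-split decomposition of a 3D point cloud into boxes.
# Axis-aligned boxes stand in for the paper's minimum-volume bounding boxes.
import numpy as np

def volume(pts):
    extent = pts.max(axis=0) - pts.min(axis=0)
    return float(np.prod(np.maximum(extent, 1e-6)))

def fit_and_split(points, max_boxes=8, gain_threshold=0.2):
    boxes = [points]
    while len(boxes) < max_boxes:
        best = None
        for i, pts in enumerate(boxes):
            lo, hi = pts.min(axis=0), pts.max(axis=0)
            axis = int(np.argmax(hi - lo))                 # split the longest edge
            cut = np.median(pts[:, axis])
            left, right = pts[pts[:, axis] <= cut], pts[pts[:, axis] > cut]
            if len(left) < 4 or len(right) < 4:
                continue
            gain = volume(pts) - volume(left) - volume(right)
            if best is None or gain > best[0]:
                best = (gain, i, left, right)
        # Stop when the best split no longer shrinks the volume enough.
        if best is None or best[0] < gain_threshold * volume(boxes[best[1]]):
            break
        _, i, left, right = best
        boxes[i:i + 1] = [left, right]
    return [(b.min(axis=0), b.max(axis=0)) for b in boxes]  # (lo, hi) corners

cloud = np.random.rand(500, 3) * [2.0, 0.5, 0.5]            # elongated object
print(len(fit_and_split(cloud)))
```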
{
"docid": "f91ba9074d4c4883e4ef6672cd696247",
"text": "Contemporary benchmark methods for image inpainting are based on deep generative models and specifically leverage adversarial loss for yielding realistic reconstructions. However, these models cannot be directly applied on image/video sequences because of an intrinsic drawbackthe reconstructions might be independently realistic, but, when visualized as a sequence, often lacks fidelity to the original uncorrupted sequence. The fundamental reason is that these methods try to find the best matching latent space representation near to natural image manifold without any explicit distance based loss. In this paper, we present a semantically conditioned Generative Adversarial Network (GAN) for sequence inpainting. The conditional information constrains the GAN to map a latent representation to a point in image manifold respecting the underlying pose and semantics of the scene. To the best of our knowledge, this is the first work which simultaneously addresses consistency and correctness of generative model based inpainting. We show that our generative model learns to disentangle pose and appearance information; this independence is exploited by our model to generate highly consistent reconstructions. The conditional information also aids the generator network in GAN to produce sharper images compared to the original GAN formulation. This helps in achieving more appealing inpainting performance. Though generic, our algorithm was targeted for inpainting on faces. When applied on CelebA and Youtube Faces datasets, the proposed method results in significant improvement over the current benchmark, both in terms of quantitative evaluation (Peak Signal to Noise Ratio) and human visual scoring over diversified combinations of resolutions and deformations. Figure 1. Exemplary success of our model in simultaneously preserving facial semantics(appearance and expressions) and improving inpaiting quality. Benchmark generative models such as DIP [49] are agnostic to holistic facial semantics and thus generate independently realistic, yet structurally inconsistent solutions.",
"title": ""
},
{
"docid": "ae57246e37060c8338ad9894a19f1b6b",
"text": "This paper seeks to establish the conceptual and empirical basis for an innovative instrument of corporate knowledge management: the knowledge map. It begins by briefly outlining the rationale for knowledge mapping, i.e., providing a common context to access expertise and experience in large companies. It then conceptualizes five types of knowledge maps that can be used in managing organizational knowledge. They are knowledge-sources, assets, -structures, -applications, and -development maps. In order to illustrate these five types of maps, a series of examples will be presented (from a multimedia agency, a consulting group, a market research firm, and a mediumsized services company) and the advantages and disadvantages of the knowledge mapping technique for knowledge management will be discussed. The paper concludes with a series of quality criteria for knowledge maps and proposes a five step procedure to implement knowledge maps in a corporate intranet.",
"title": ""
},
{
"docid": "1d195fb4df8375772674d0852a046548",
"text": "All existing image enhancement methods, such as HDR tone mapping, cannot recover A/D quantization losses due to insufficient or excessive lighting, (underflow and overflow problems). The loss of image details due to A/D quantization is complete and it cannot be recovered by traditional image processing methods, but the modern data-driven machine learning approach offers a much needed cure to the problem. In this work we propose a novel approach to restore and enhance images acquired in low and uneven lighting. First, the ill illumination is algorithmically compensated by emulating the effects of artificial supplementary lighting. Then a DCNN trained using only synthetic data recovers the missing detail caused by quantization.",
"title": ""
},
{
"docid": "1cf029e7284359e3cdbc12118a6d4bf5",
"text": "Simultaneous localization and mapping (SLAM) is the process by which a mobile robot can build a map of the environment and, at the same time, use this map to compute its location. The past decade has seen rapid and exciting progress in solving the SLAM problem together with many compelling implementations of SLAM methods. The great majority of work has focused on improving computational efficiency while ensuring consistent and accurate estimates for the map and vehicle pose. However, there has also been much research on issues such as nonlinearity, data association, and landmark characterization, all of which are vital in achieving a practical and robust SLAM implementation. This tutorial focuses on the recursive Bayesian formulation of the SLAM problem in which probability distributions or estimates of absolute or relative locations of landmarks and vehicle pose are obtained. Part I of this tutorial (IEEE Robotics & Auomation Magazine, vol. 13, no. 2) surveyed the development of the essential SLAM algorithm in state-space and particle-filter form, described a number of key implementations, and cited locations of source code and real-world data for evaluation of SLAM algorithms. Part II of this tutorial (this article), surveys the current state of the art in SLAM research with a focus on three key areas: computational complexity, data association, and environment representation. Much of the mathematical notation and essential concepts used in this article are defined in Part I of this tutorial and, therefore, are not repeated here. SLAM, in its naive form, scales quadratically with the number of landmarks in a map. For real-time implementation, this scaling is potentially a substantial limitation in the use of SLAM methods. The complexity section surveys the many approaches that have been developed to reduce this complexity. These include linear-time state augmentation, sparsification in information form, partitioned updates, and submapping methods. A second major hurdle to overcome in the implementation of SLAM methods is to correctly associate observations of landmarks with landmarks held in the map. Incorrect association can lead to catastrophic failure of the SLAM algorithm. Data association is particularly important when a vehicle returns to a previously mapped region after a long excursion, the so-called loop-closure problem. The data association section surveys current data association methods used in SLAM. These include batch-validation methods that exploit constraints inherent in the SLAM formulation, appearance-based methods, and multihypothesis techniques. The third development discussed in this tutorial is the trend towards richer appearance-based models of landmarks and maps. While initially motivated by problems in data association and loop closure, these methods have resulted in qualitatively different methods of describing the SLAM problem, focusing on trajectory estimation rather than landmark estimation. The environment representation section surveys current developments in this area along a number of lines, including delayed mapping, the use of nongeometric landmarks, and trajectory estimation methods. SLAM methods have now reached a state of considerable maturity. Future challenges will center on methods enabling large-scale implementations in increasingly unstructured environments and especially in situations where GPS-like solutions are unavailable or unreliable: in urban canyons, under foliage, under water, or on remote planets.",
"title": ""
},
{
"docid": "e493bbcf5f2b561757ca795ab6bb1099",
"text": "As a spatio-temporal data-management problem, taxi ridesharing has received a lot of attention recently in the database literature. The broader scientific community, and the commercial world have also addressed the issue through services such as UberPool and Lyftline. The issues addressed have been efficient matching of passengers and taxis, fares, and savings from ridesharing. However, ridesharing fairness has not been addressed so far. Ridesharing fairness is a new problem that we formally define in this paper. We also propose a method of combining the benefits of fair and optimal ridesharing, and of efficiently executing fair and optimal ridesharing queries.",
"title": ""
},
{
"docid": "d9b3f5613a93fcaf1fee35c1c5effee2",
"text": "The socio-economic condition & various health hazards are the main suffering at old age. To combat this situation, we have tried to develop and fabricate one wearable electronic rescue system for elderly especially when he is at home alone. The system can detect abnormal condition of heart as well as sudden accidental fall at home. The system has been developed using Arduino Microcontroller and GSM modem. The entire program and evaluation has been developed under LabView platform. The prototype was built and trialed successfully.",
"title": ""
},
{
"docid": "6d323f8dbfd7d2883a4926b80097727c",
"text": "This work presents a novel geospatial mapping service, based on OpenStreetMap, which has been designed and developed in order to provide personalized path to users with special needs. This system gathers data related to barriers and facilities of the urban environment via crowd sourcing and sensing done by users. It also considers open data provided by bus operating companies to identify the actual accessibility feature and the real time of arrival at the stops of the buses. The resulting service supports citizens with reduced mobility (users with disabilities and/or elderly people) suggesting urban paths accessible to them and providing information related to travelling time, which are tailored to their abilities to move and to the bus arrival time. The manuscript demonstrates the effectiveness of the approach by means of a case study focusing on the differences between the solutions provided by our system and the ones computed by main stream geospatial mapping services.",
"title": ""
},
{
"docid": "ddb51863430250a28f37c5f12c13c910",
"text": "Much of our understanding of human thinking is based on probabilistic models. This innovative book by Jerome R. Busemeyer and Peter D. Bruza argues that, actually, the underlying mathematical structures from quantum theory provide a much better account of human thinking than traditional models. They introduce the foundations for modelling probabilistic-dynamic systems using two aspects of quantum theory. The first, “contextuality,” is a way to understand interference effects found with inferences and decisions under conditions of uncertainty. The second, “quantum entanglement,” allows cognitive phenomena to be modelled in non-reductionist ways. Employing these principles drawn from quantum theory allows us to view human cognition and decision in a totally new light. Introducing the basic principles in an easy-to-follow way, this book does not assume a physics background or a quantum brain and comes complete with a tutorial and fully worked-out applications in important areas of cognition and decision.",
"title": ""
},
{
"docid": "a7226ab0968d252bad65931bcc0bc089",
"text": "The coupling of renewable energy and hydrogen technologies represents in the mid-term a very interesting way to match the tasks of increasing the reliable exploitation of wind and sea wave energy and introducing clean technologies in the transportation sector. This paper presents two different feasibility studies: the first proposes two plants based on wind and sea wave resource for the production, storage and distribution of hydrogen for public transportation facilities in the West Sicily; the second applies the same approach to Pantelleria (a smaller island), including also some indications about solar resource. In both cases, all buses will be equipped with fuel-cells. A first economic analysis is presented together with the assessment of the avoidable greenhouse gas emissions during the operation phase. The scenarios addressed permit to correlate the demand of urban transport to renewable resources present in the territories and to the modern technologies available for the production of hydrogen from renewable energies. The study focuses on the possibility of tapping the renewable energy potential (wind and sea wave) for the hydrogen production by electrolysis. The use of hydrogen would significantly reduce emissions of particulate matter and greenhouse gases in urban districts under analysis. The procedures applied in the present article, as well as the main equations used, are the result of previous applications made in different technical fields that show a good replicability.",
"title": ""
},
{
"docid": "c5639c65908882291c29e147605c79ca",
"text": "Dirofilariasis is a rare disease in humans. We report here a case of a 48-year-old male who was diagnosed with pulmonary dirofilariasis in Korea. On chest radiographs, a coin lesion of 1 cm in diameter was shown. Although it looked like a benign inflammatory nodule, malignancy could not be excluded. So, the nodule was resected by video-assisted thoracic surgery. Pathologically, chronic granulomatous inflammation composed of coagulation necrosis with rim of fibrous tissues and granulations was seen. In the center of the necrotic nodules, a degenerating parasitic organism was found. The parasite had prominent internal cuticular ridges and thick cuticle, a well-developed muscle layer, an intestinal tube, and uterine tubules. The parasite was diagnosed as an immature female worm of Dirofilaria immitis. This is the second reported case of human pulmonary dirofilariasis in Korea.",
"title": ""
}
] |
scidocsrr
|
3a3f699a6eddedfeda60e09c59854499
|
ECG Beats Classification Using Mixture of Features
|
[
{
"docid": "45be193fe04064886615367dd9225c92",
"text": "Automatic electrocardiogram (ECG) beat classification is essential to timely diagnosis of dangerous heart conditions. Specifically, accurate detection of premature ventricular contractions (PVCs) is imperative to prepare for the possible onset of life-threatening arrhythmias. Although many groups have developed highly accurate algorithms for detecting PVC beats, results have generally been limited to relatively small data sets. Additionally, many of the highest classification accuracies (>90%) have been achieved in experiments where training and testing sets overlapped significantly. Expanding the overall data set greatly reduces overall accuracy due to significant variation in ECG morphology among different patients. As a result, we believe that morphological information must be coupled with timing information, which is more constant among patients, in order to achieve high classification accuracy for larger data sets. With this approach, we combined wavelet-transformed ECG waves with timing information as our feature set for classification. We used select waveforms of 18 files of the MIT/BIH arrhythmia database, which provides an annotated collection of normal and arrhythmic beats, for training our neural-network classifier. We then tested the classifier on these 18 training files as well as 22 other files from the database. The accuracy was 95.16% over 93,281 beats from all 40 files, and 96.82% over the 22 files outside the training set in differentiating normal, PVC, and other beats",
"title": ""
}
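The approach summarized above couples wavelet-based morphology features with inter-beat (RR-interval) timing features before a neural-network classifier. The sketch below is a hypothetical, simplified version of that pipeline: a hand-rolled Haar decomposition stands in for the paper's wavelet transform, the data are synthetic, and the window length and network size are arbitrary assumptions.

```python
# Sketch: couple (Haar) wavelet morphology features with RR-interval timing
# features and feed them to a small neural network. All sizes/data are assumed.
import numpy as np
from sklearn.neural_network import MLPClassifier

def haar_features(beat, levels=3):
    """Approximation coefficients of a simple Haar wavelet decomposition."""
    a = np.asarray(beat, dtype=float)
    for _ in range(levels):
        if len(a) % 2:
            a = a[:-1]
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    return a

def beat_feature_vector(beat, rr_prev, rr_next):
    # Morphology (wavelet) + timing (surrounding RR intervals and their ratio).
    return np.concatenate([haar_features(beat), [rr_prev, rr_next, rr_prev / rr_next]])

# Synthetic stand-in data: 200 beats of 128 samples each, 3 classes.
rng = np.random.default_rng(0)
X = np.stack([beat_feature_vector(rng.normal(size=128), rng.uniform(0.6, 1.2),
                                  rng.uniform(0.6, 1.2)) for _ in range(200)])
y = rng.integers(0, 3, size=200)              # normal / PVC / other (dummy labels)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
print(clf.score(X, y))
```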
] |
[
{
"docid": "801a197f630189ab0a9b79d3cbfe904b",
"text": "Historically, Vivaldi arrays are known to suffer from high cross-polarization when scanning in the nonprincipal planes—a fault without a universal solution. In this paper, a solution to this issue is proposed in the form of a new Vivaldi-type array with low cross-polarization termed the Sliced Notch Antenna (SNA) array. For the first proof-of-concept demonstration, simulations and measurements are comparatively presented for two single-polarized <inline-formula> <tex-math notation=\"LaTeX\">$19 \\times 19$ </tex-math></inline-formula> arrays—the proposed SNA and its Vivaldi counterpart—each operating over a 1.2–12 GHz (10:1) band. Both arrays are built using typical vertically integrated printed-circuit board cards, and are designed to exhibit VSWR < 2.5 within a 60° scan cone over most of the 10:1 band as infinite arrays. Measurement results compare very favorably with full-wave finite array simulations that include array truncation effects. The SNA array element demonstrates well-behaved polarization performance versus frequency, with more than 20 dB of D-plane <inline-formula> <tex-math notation=\"LaTeX\">$\\theta \\!=\\!45 {^{\\circ }}$ </tex-math></inline-formula> polarization purity improvement at the high frequency. Moreover, the SNA element also: 1) offers better suppression of classical Vivaldi E-plane scan blindnesses; 2) requires fewer plated through vias for stripline-based designs; and 3) allows relaxed adjacent element electrical contact requirements for dual-polarized arrangements.",
"title": ""
},
{
"docid": "a9201c32c903eba5cc25a744134a1c3c",
"text": "This paper proposes a new approach to sparsity, called the horseshoe estimator, which arises from a prior based on multivariate-normal scale mixtures. We describe the estimator’s advantages over existing approaches, including its robustness, adaptivity to different sparsity patterns and analytical tractability. We prove two theorems: one that characterizes the horseshoe estimator’s tail robustness and the other that demonstrates a super-efficient rate of convergence to the correct estimate of the sampling density in sparse situations. Finally, using both real and simulated data, we show that the horseshoe estimator corresponds quite closely to the answers obtained by Bayesian model averaging under a point-mass mixture prior.",
"title": ""
},
{
"docid": "1ae3eb81ae75f6abfad4963ee0056be5",
"text": "Due to the shared responsibility model of clouds, tenants have to manage the security of their workloads and data. Developing security solutions using VMs or containers creates further problems as these resources also need to be secured. In this paper, we advocate for taking a serverless approach by proposing six serverless design patterns to build security services in the cloud. For each design pattern, we describe the key advantages and present applications and services utilizing the pattern. Using the proposed patterns as building blocks, we introduce a threat-intelligence platform that collects logs from various sources, alerts malicious activities, and takes actions against such behaviors. We also discuss the limitations of serverless design and how future implementations can overcome those limitations.",
"title": ""
},
{
"docid": "a9cfb59c0187466d64010a3f39ac0e30",
"text": "Model-free Reinforcement Learning (RL) offers an attractive approach to learn control policies for highdimensional systems, but its relatively poor sample complexity often necessitates training in simulated environments. Even in simulation, goal-directed tasks whose natural reward function is sparse remain intractable for state-of-the-art model-free algorithms for continuous control. The bottleneck in these tasks is the prohibitive amount of exploration required to obtain a learning signal from the initial state of the system. In this work, we leverage physical priors in the form of an approximate system dynamics model to design a curriculum for a model-free policy optimization algorithm. Our Backward Reachability Curriculum (BaRC) begins policy training from states that require a small number of actions to accomplish the task, and expands the initial state distribution backwards in a dynamically-consistent manner once the policy optimization algorithm demonstrates sufficient performance. BaRC is general, in that it can accelerate training of any model-free RL algorithm on a broad class of goal-directed continuous control MDPs. Its curriculum strategy is physically intuitive, easy-to-tune, and allows incorporating physical priors to accelerate training without hindering the performance, flexibility, and applicability of the model-free RL algorithm. We evaluate our approach on two representative dynamic robotic learning problems and find substantial performance improvement relative to previous curriculum generation techniques and naı̈ve exploration strategies.",
"title": ""
},
{
"docid": "e89891b0f04902d01468fa0e2e44f9ac",
"text": "It is a general assumption that pneumatic muscle-type actuators will play an important role in the development of an assistive rehabilitation robotics system. In the last decade, the development of a pneumatic muscle actuated lower-limb leg orthosis has been rather slow compared to other types of actuated leg orthoses that use AC motors, DC motors, pneumatic cylinders, linear actuators, series elastic actuators (SEA) and brushless servomotors. However, recent years have shown that the interest in this field has grown exponentially, mainly due to the demand for a more compliant and interactive human-robotics system. This paper presents a survey of existing lower-limb leg orthoses for rehabilitation, which implement pneumatic muscle-type actuators, such as McKibben artificial muscles, rubbertuators, air muscles, pneumatic artificial muscles (PAM) or pneumatic muscle actuators (PMA). It reviews all the currently existing lower-limb rehabilitation orthosis systems in terms of comparison and evaluation of the design, as well as the control scheme and strategy, with the aim of clarifying the current and on-going research in the lower-limb robotic rehabilitation field.",
"title": ""
},
{
"docid": "5ac2930a623b542cf8ebbea6314c5ef1",
"text": "BACKGROUND\nTelomerase continues to generate substantial attention both because of its pivotal roles in cellular proliferation and aging and because of its unusual structure and mechanism. By replenishing telomeric DNA lost during the cell cycle, telomerase overcomes one of the many hurdles facing cellular immortalization. Functionally, telomerase is a reverse transcriptase, and it shares structural and mechanistic features with this class of nucleotide polymerases. Telomerase is a very unusual reverse transcriptase because it remains stably associated with its template and because it reverse transcribes multiple copies of its template onto a single primer in one reaction cycle.\n\n\nSCOPE OF REVIEW\nHere, we review recent findings that illuminate our understanding of telomerase. Even though the specific emphasis is on structure and mechanism, we also highlight new insights into the roles of telomerase in human biology.\n\n\nGENERAL SIGNIFICANCE\nRecent advances in the structural biology of telomerase, including high resolution structures of the catalytic subunit of a beetle telomerase and two domains of a ciliate telomerase catalytic subunit, provide new perspectives into telomerase biochemistry and reveal new puzzles.",
"title": ""
},
{
"docid": "c4ee2810b5a799a16e2ea66073719050",
"text": "Recently, Neural Networks have been proven extremely effective in many natural language processing tasks such as sentiment analysis, question answering, or machine translation. Aiming to exploit such advantages in the Ontology Learning process, in this technical report we present a detailed description of a Recurrent Neural Network based system to be used to pursue such goal.",
"title": ""
},
{
"docid": "21c15eb5420a7345cc2900f076b15ca1",
"text": "Prokaryotic CRISPR-Cas genomic loci encode RNA-mediated adaptive immune systems that bear some functional similarities with eukaryotic RNA interference. Acquired and heritable immunity against bacteriophage and plasmids begins with integration of ∼30 base pair foreign DNA sequences into the host genome. CRISPR-derived transcripts assemble with CRISPR-associated (Cas) proteins to target complementary nucleic acids for degradation. Here we review recent advances in the structural biology of these targeting complexes, with a focus on structural studies of the multisubunit Type I CRISPR RNA-guided surveillance and the Cas9 DNA endonuclease found in Type II CRISPR-Cas systems. These complexes have distinct structures that are each capable of site-specific double-stranded DNA binding and local helix unwinding.",
"title": ""
},
{
"docid": "1d5336ce334476a45503e7b73ec025f2",
"text": "The science of complexity is based on a new way of thinking that stands in sharp contrast to the philosophy underlying Newtonian science, which is based on reductionism, determinism, and objective knowledge. This paper reviews the historical development of this new world view, focusing on its philosophical foundations. Determinism was challenged by quantum mechanics and chaos theory. Systems theory replaced reductionism by a scientifically based holism. Cybernetics and postmodern social science showed that knowledge is intrinsically subjective. These developments are being integrated under the header of “complexity science”. Its central paradigm is the multi-agent system. Agents are intrinsically subjective and uncertain about their environment and future, but out of their local interactions, a global organization emerges. Although different philosophers, and in particular the postmodernists, have voiced similar ideas, the paradigm of complexity still needs to be fully assimilated by philosophy. This will throw a new light on old philosophical issues such as relativism, ethics and the role of the subject.",
"title": ""
},
{
"docid": "cf0f9a3d57ace2a9dbd65ac09b08d3e5",
"text": "Prosodic modeling is a core problem in speech synthesis. The key challenge is producing desirable prosody from textual input containing only phonetic information. In this preliminary study, we introduce the concept of “style tokens” in Tacotron, a recently proposed end-to-end neural speech synthesis model. Using style tokens, we aim to extract independent prosodic styles from training data. We show that without annotation data or an explicit supervision signal, our approach can automatically learn a variety of prosodic variations in a purely data-driven way. Importantly, each style token corresponds to a fixed style factor regardless of the given text sequence. As a result, we can control the prosodic style of synthetic speech in a somewhat predictable and globally consistent way.",
"title": ""
},
{
"docid": "57dfc6f8b462512a3a2328f897ea44a6",
"text": "We introduce Casper, a proof of stake-based finality system which overlays an existing proof of work blockchain. Casper is a partial consensus mechanism combining proof of stake algorithm research and Byzantine fault tolerant consensus theory. We introduce our system, prove some desirable features, and show defenses against long range revisions and catastrophic crashes. The Casper overlay provides almost any proof of work chain with additional protections against block reversions.",
"title": ""
},
{
"docid": "b24babd50bd6c7592e272f387e89953a",
"text": "Distant-supervised relation extraction inevitably suffers from wrong labeling problems because it heuristically labels relational facts with knowledge bases. Previous sentence level denoise models don’t achieve satisfying performances because they use hard labels which are determined by distant supervision and immutable during training. To this end, we introduce an entity-pair level denoise method which exploits semantic information from correctly labeled entity pairs to correct wrong labels dynamically during training. We propose a joint score function which combines the relational scores based on the entity-pair representation and the confidence of the hard label to obtain a new label, namely a soft label, for certain entity pair. During training, soft labels instead of hard labels serve as gold labels. Experiments on the benchmark dataset show that our method dramatically reduces noisy instances and outperforms the state-of-the-art systems.",
"title": ""
},
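The joint score function described above can be illustrated schematically: softmax the model's relation scores for an entity pair, add a confidence-weighted one-hot vector for the distant-supervision label, and take the argmax as the soft label used for training. The blend weight and example numbers below are assumptions; the paper's exact scoring function may differ.

```python
# Schematic soft-label computation: blend the model's relation scores for an
# entity pair with a confidence-weighted one-hot distant-supervision label.
import numpy as np

def soft_label(relation_scores, hard_label, confidence, blend=0.9):
    """relation_scores: unnormalized scores over relations for one entity pair."""
    probs = np.exp(relation_scores - relation_scores.max())
    probs /= probs.sum()                              # softmax over relations
    one_hot = np.zeros_like(probs)
    one_hot[hard_label] = 1.0
    joint = probs + blend * confidence * one_hot      # joint score function
    return int(np.argmax(joint))                      # label used this epoch

scores = np.array([2.0, 0.1, 3.5])   # model currently prefers relation 2
print(soft_label(scores, hard_label=0, confidence=0.3))   # may flip away from 0
```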
{
"docid": "02cd879a83070af9842999c7215e7f92",
"text": "Automatic genre classification of music is an important topic in Music Information Retrieval with many interesting applications. A solution to genre classification would allow for machine tagging of songs, which could serve as metadata for building song recommenders. In this paper, we investigate the following question: Given a song, can we automatically detect its genre? We look at three characteristics of a song to determine its genre: timbre, chord transitions, and lyrics. For each method, we develop multiple data models and apply supervised machine learning algorithms including k-means, k-NN, multi-class SVM and Naive Bayes. We are able to accurately classify 65− 75% of the songs from each genre in a 5-genre classification problem between Rock, Jazz, Pop, Hip-Hop, and Metal music.",
"title": ""
},
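A feature-plus-classifier pipeline like the one described (timbre, chord-transition, and lyric features fed to standard supervised learners) can be sketched with scikit-learn as follows. The feature matrix here is a random placeholder standing in for real audio and lyric features, and the RBF SVM is just one of the classifiers the abstract mentions.

```python
# Sketch of a 5-genre classifier over pre-extracted song features
# (e.g. timbre statistics, chord-transition counts, lyric term frequencies).
# The feature matrix below is a random placeholder.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 60))        # 500 songs, 60 features per song
y = rng.integers(0, 5, size=500)      # Rock / Jazz / Pop / Hip-Hop / Metal

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(model, X, y, cv=5).mean())
```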
{
"docid": "defde14c64f5eecda83cf2a59c896bc0",
"text": "Time series shapelets are discriminative subsequences and their similarity to a time series can be used for time series classification. Since the discovery of time series shapelets is costly in terms of time, the applicability on long or multivariate time series is difficult. In this work we propose Ultra-Fast Shapelets that uses a number of random shapelets. It is shown that Ultra-Fast Shapelets yield the same prediction quality as current state-of-theart shapelet-based time series classifiers that carefully select the shapelets by being by up to three orders of magnitudes. Since this method allows a ultra-fast shapelet discovery, using shapelets for long multivariate time series classification becomes feasible. A method for using shapelets for multivariate time series is proposed and Ultra-Fast Shapelets is proven to be successful in comparison to state-of-the-art multivariate time series classifiers on 15 multivariate time series datasets from various domains. Finally, time series derivatives that have proven to be useful for other time series classifiers are investigated for the shapelet-based classifiers. It is shown that they have a positive impact and that they are easy to integrate with a simple preprocessing step, without the need of adapting the shapelet discovery algorithm.",
"title": ""
},
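The core of the random-shapelet idea is easy to sketch: draw random subsequences from the training series, represent each series by its minimum Euclidean distance to every shapelet, and train any off-the-shelf classifier on those distance features. The shapelet count, lengths, synthetic data, and logistic-regression classifier below are illustrative assumptions.

```python
# Sketch of random-shapelet features: sample random subsequences, use each
# series' minimum distance to every shapelet as a feature, then classify.
import numpy as np
from sklearn.linear_model import LogisticRegression

def random_shapelets(series, n_shapelets=50, rng=None):
    rng = rng or np.random.default_rng(0)
    shapelets = []
    for _ in range(n_shapelets):
        s = series[rng.integers(len(series))]
        length = rng.integers(5, max(6, len(s) // 2))
        start = rng.integers(0, len(s) - length + 1)
        shapelets.append(s[start:start + length])
    return shapelets

def min_distance(s, shapelet):
    L = len(shapelet)
    return min(np.linalg.norm(s[i:i + L] - shapelet) for i in range(len(s) - L + 1))

def transform(series, shapelets):
    return np.array([[min_distance(s, sh) for sh in shapelets] for s in series])

rng = np.random.default_rng(0)
train = [rng.normal(size=100) for _ in range(60)]   # synthetic univariate series
labels = rng.integers(0, 2, size=60)
shapelets = random_shapelets(train, rng=rng)
clf = LogisticRegression(max_iter=1000).fit(transform(train, shapelets), labels)
print(clf.score(transform(train, shapelets), labels))
```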
{
"docid": "8481bf05a0afc1de516d951474fb9d92",
"text": "We propose an approach to Multitask Learning (MTL) to make deep learning models faster and lighter for applications in which multiple tasks need to be solved simultaneously, which is particularly useful in embedded, real-time systems. We develop a multitask model for both Object Detection and Semantic Segmentation and analyze the challenges that appear during its training. Our multitask network is 1.6x faster, lighter and uses less memory than deploying the single-task models in parallel. We conclude that MTL has the potential to give superior performance in exchange of a more complex training process that introduces challenges not present in single-task models.",
"title": ""
},
{
"docid": "6c08b9488b5f5c7e4b91d2b8941a9ced",
"text": "Modern affiliate marketing networks provide an infrastructure for connecting merchants seeking customers with independent marketers (affiliates) seeking compensation. This approach depends on Web cookies to identify, at checkout time, which affiliate should receive a commission. Thus, scammers ``stuff'' their own cookies into a user's browser to divert this revenue. This paper provides a measurement-based characterization of cookie-stuffing fraud in online affiliate marketing. We use a custom-built Chrome extension, AffTracker, to identify affiliate cookies and use it to gather data from hundreds of thousands of crawled domains which we expect to be targeted by fraudulent affiliates. Overall, despite some notable historical precedents, we found cookie-stuffing fraud to be relatively scarce in our data set. Based on what fraud we detected, though, we identify which categories of merchants are most targeted and which third-party affiliate networks are most implicated in stuffing scams. We find that large affiliate networks are targeted significantly more than merchant-run affiliate programs. However, scammers use a wider range of evasive techniques to target merchant-run affiliate programs to mitigate the risk of detection suggesting that in-house affiliate programs enjoy stricter policing.",
"title": ""
},
{
"docid": "b5238bfae025d46647526229dd5e00dd",
"text": "Influences of discharge voltage on wheat seed vitality were investigated in a dielectric barrier discharge (DBD) plasma system at atmospheric pressure and temperature. Six different treatments were designed, and their discharge voltages were 0.0, 9.0, 11.0, 13.0, 15.0, and 17.0 kV, respectively. Fifty seeds were exposed to the DBD plasma atmosphere with an air flow rate of 1.5 L min-1 for 4 min in each treatment, and then the DBD plasma-treated seeds were prepared for germination in several Petri dishes. Each treatment was repeated three times. Germination indexes, growth indexes, surface topography, water uptake, permeability, and α-amylase activity were measured. DBD plasma treatment at appropriate energy levels had positive effects on wheat seed germination and seedling growth. The germination potential, germination index, and vigor index significantly increased by 31.4%, 13.9%, and 54.6% after DBD treatment at 11.0 kV, respectively, in comparison to the control. Shoot length, root length, dry weight, and fresh weight also significantly increased after the DBD plasma treatment. The seed coat was softened and cracks were observed, systematization of the protein was strengthened, and amount of free starch grain increased after the DBD plasma treatment. Water uptake, relative electroconductivity, soluble protein, and α-amylase activity of the wheat seed were also significantly improved after the DBD plasma treatment. Roles of active species and ultraviolet radiation generated in the DBD plasma process in wheat seed germination and seedling growth are proposed. Bioelectromagnetics. 39:120-131, 2018. © 2017 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "4ec7480aeb1b3193d760d554643a1660",
"text": "The ability to learn is arguably the most crucial aspect of human intelligence. In reinforcement learning, we attempt to formalize a certain type of learning that is based on rewards and penalties. These supervisory signals should guide an agent to learn optimal behavior. In particular, this research focuses on deep reinforcement learning, where the agent should learn to play video games solely from pixel input. This thesis contributes to deep reinforcement learning research by assessing several variations to an existing state-of-the-art algorithm. First, we provide an extensive analysis on how the design decisions of the agent’s deep neural network affect its performance. Second, we introduce a novel neural layer that allows for local specializations in the visual input of the agents, as opposed to the global weight sharing that occurs in convolutional layers. Third, we introduce a ‘what’ and ‘where’ neural network architecture, inspired by the information flow of the visual cortical areas in the human brain. Finally, we explore prototype based deep reinforcement learning by introducing a novel output layer that is largely inspired by learning vector quantization. In a subset of our experiments, we show substantial improvements compared to existing alternatives.",
"title": ""
},
{
"docid": "7490e0039b8060ec1a4c27405a20a513",
"text": "Trajectories obtained from GPS-enabled taxis grant us an opportunity to not only extract meaningful statistics, dynamics and behaviors about certain urban road users, but also to monitor adverse and/or malicious events. In this paper we focus on the problem of detecting anomalous routes by comparing against historically “normal” routes. We propose a real-time method, iBOAT, that is able to detect anomalous trajectories “on-the-fly”, as well as identify which parts of the trajectory are responsible for its anomalousness. We evaluate our method on a large dataset of taxi GPS logs and verify that it has excellent accuracy (AUC ≥ 0.99) and overcomes many of the shortcomings of other state-of-the-art methods.",
"title": ""
}
] |
scidocsrr
|
6e76f748c9d8c6b4bc9e46a13dee51fd
|
A critical evaluation of the emotional intelligence construct
|
[
{
"docid": "346bedcddf74d56db8b2d5e8b565efef",
"text": "Ulric Neisser (Chair) Gwyneth Boodoo Thomas J. Bouchard, Jr. A. Wade Boykin Nathan Brody Stephen J. Ceci Diane E Halpern John C. Loehlin Robert Perloff Robert J. Sternberg Susana Urbina Emory University Educational Testing Service, Princeton, New Jersey University of Minnesota, Minneapolis Howard University Wesleyan University Cornell University California State University, San Bernardino University of Texas, Austin University of Pittsburgh Yale University University of North Florida",
"title": ""
}
] |
[
{
"docid": "d94f4df63ac621d9a8dec1c22b720abb",
"text": "Automatically selecting an appropriate set of materialized views and indexes for SQL databases is a non-trivial task. A judicious choice must be cost-driven and influenced by the workload experienced by the system. Although there has been work in materialized view selection in the context of multidimensional (OLAP) databases, no past work has looked at the problem of building an industry-strength tool for automated selection of materialized views and indexes for SQL workloads. In this paper, we present an end-to-end solution to the problem of selecting materialized views and indexes. We describe results of extensive experimental evaluation that demonstrate the effectiveness of our techniques. Our solution is implemented as part of a tuning wizard that ships with Microsoft SQL Server 2000.",
"title": ""
},
{
"docid": "e464cde1434026c17b06716c6a416b7a",
"text": "Three experiments supported the hypothesis that people are more willing to express attitudes that could be viewed as prejudiced when their past behavior has established their credentials as nonprejudiced persons. In Study 1, participants given the opportunity to disagree with blatantly sexist statements were later more willing to favor a man for a stereotypically male job. In Study 2, participants who first had the opportunity to select a member of a stereotyped group (a woman or an African American) for a category-neutral job were more likely to reject a member of that group for a job stereotypically suited for majority members. In Study 3, participants who had established credentials as nonprejudiced persons revealed a greater willingness to express a politically incorrect opinion even when the audience was unaware of their credentials. The general conditions under which people feel licensed to act on illicit motives are discussed.",
"title": ""
},
{
"docid": "505b1fc76ef4e3fa6b0d5101e3dfd4fb",
"text": "In this work the problem of guided improvisation is approached and elaborated; then a new method, Variable Markov Oracle, for guided music synthesis is proposed as the first step to tackle the guided improvisation problem. Variable Markov Oracle is based on previous results from Audio Oracle, which is a fast indexing and recombination method of repeating sub-clips in an audio signal. The newly proposed Variable Markov Oracle is capable of identifying inherent datapoint clusters in an audio signal while tracking the sequential relations among clusters at the same time. With a target audio signal indexed by Variable Markov Oracle, a query-matching algorithm is devised to synthesize new music materials by recombination of the target audio matched to a query audio. This approach makes the query-matching algorithm a solution to the guided music synthesis problem. The query-matching algorithm is efficient and intelligent since it follows the inherent clusters discovered by Variable Markov Oracle, creating a query-by-content result which allows numerous applications in concatenative synthesis, machine improvisation and interactive music system. Examples of using Variable Markov Oracle to synthesize new musical materials based on given music signals in the style of",
"title": ""
},
{
"docid": "0e56ef5556c34274de7d7dceff17317e",
"text": "We investigate grounded sentence representations, where we train a sentence encoder to predict the image features of a given caption— i.e., we try to “imagine” how a sentence would be depicted visually—and use the resultant features as sentence representations. We examine the quality of the learned representations on a variety of standard sentence representation quality benchmarks, showing improved performance for groundedmodels over non-grounded ones. In addition, we thoroughly analyze the extent to which grounding contributes to improved performance, and show that the system also learns improved word embeddings.",
"title": ""
},
{
"docid": "d6c490c24aaa6f3798f31e713441ef72",
"text": "High-level synthesis (HLS) has been gaining traction recently as a design methodology for FPGAs, with the promise of raising the productivity of FPGA hardware designers, and ultimately, opening the door to the use of FPGAs as computing devices targetable by software engineers. In this tutorial, we introduce LegUp, an open-source HLS tool for FPGAs developed at the University of Toronto. With LegUp, a user can compile a C program completely to hardware, or alternately, he/she can choose to compile the program to a hybrid hardware/software system comprising a processor along with one or more accelerators. LegUp supports the synthesis of most of the C language to hardware, including loops, structs, multi-dimensional arrays, pointer arithmetic, and floating point operations. The LegUp distribution includes the CHStone HLS benchmark suite, as well as a test suite and associated infrastructure for measuring quality of results, and for verifying the functionality of LegUp-generated circuits. LegUp is freely downloadable at www.legup.org, providing a powerful platform that can be leveraged for new high-level synthesis research.",
"title": ""
},
{
"docid": "382fd1b9fca8163718548522ce05c58d",
"text": "Software development involves a number of interrelated factors which affect development effort and productivity. Since many of these relationships are not well understood, accurate estimation of so&are development time and effort is a dificult problem. Most estimation models in use or proposed in the literature are based on regression techniques. This paper examines the potential of two artijcial intelligence approaches i.e. artificial neural network and case-based reasoning for creating development effort estimation models. Artijcial neural network can provide accurate estimates when there are complex relationships between variables and where the input data is distorted by high noise levels Case-based reasoning solves problems by adapting solutions from old problems similar to the current problem. This research examines both the performance of back-propagation artificial neural networks in estimating software development effort and the potential of case-based reasoning for development estimation using the same dataset.",
"title": ""
},
{
"docid": "32764726652b5f95aa2d208f80e967c0",
"text": "Simulation is a technique-not a technology-to replace or amplify real experiences with guided experiences that evoke or replicate substantial aspects of the real world in a fully interactive manner. The diverse applications of simulation in healthcare can be categorized by 11 dimensions: aims and purposes of the simulation activity; unit of participation; experience level of participants; healthcare domain; professional discipline of participants; type of knowledge, skill, attitudes, or behaviors addressed; the simulated patient's age; technology applicable or required; site of simulation; extent of direct participation; and method of feedback used. Using simulation to improve safety will require full integration of its applications into the routine structures and practices of healthcare. The costs and benefits of simulation are difficult to determine, especially for the most challenging applications, where long-term use may be required. Various driving forces and implementation mechanisms can be expected to propel simulation forward, including professional societies, liability insurers, healthcare payers, and ultimately the public. The future of simulation in healthcare depends on the commitment and ingenuity of the healthcare simulation community to see that improved patient safety using this tool becomes a reality.",
"title": ""
},
{
"docid": "5f3dc141b69eb50e17bdab68a2195e13",
"text": "The purpose of this study is to develop a fuzzy-AHP multi-criteria decision making model for procurement process. It aims to measure the procurement performance in the automotive industry. As such measurement of procurement will enable competitive advantage and provide a model for continuous improvement. The rapid growth in the market and the level of competition in the global economy transformed procurement as a strategic issue; which is broader in scope and responsibilities as compared to purchasing. This study reviews the existing literature in procurement performance measurement to identify the key areas of measurement and a hierarchical model is developed with a set of generic measures. In addition, a questionnaire is developed for pair-wise comparison and to collect opinion from practitioners, researchers, managers etc. The relative importance of the measurement criteria are assessed using Analytical Hierarchy Process (AHP) and fuzzy-AHP. The validity of the model is c onfirmed with the results obtained.",
"title": ""
},
{
"docid": "764eba2c2763db6dce6c87170e06d0f8",
"text": "Kansei Engineering was developed as a consumer-oriented technology for new product development. It is defined as \"translating technology of a consumer's feeling and image for a product into design elements\". Kansei Engineering (KE) technology is classified into three types, KE Type I, II, and III. KE Type I is a category classification on the new product toward the design elements. Type II utilizes the current computer technologies such as Expert System, Neural Network Model and Genetic Algorithm. Type III is a model using a mathematical structure. Kansei Engineering has permeated Japanese industries, including automotive, electrical appliance, construction, clothing and so forth. The successful companies using Kansei Engineering benefited from good sales regarding the new consumer-oriented products. Relevance to industry Kansei Engineering is utilized in the automotive, electrical appliance, construction, clothing and other industries. This paper provides help to potential product designers in these industries.",
"title": ""
},
{
"docid": "779d75beb7ea4967f9503d6c4d087a5d",
"text": "BACKGROUND\nTeaching is considered a highly stressful occupation. Burnout is a negative affective response occurring as a result of chronic work stress. While the early theories of burnout focused exclusively on work-related stressors, recent research adopts a more integrative approach where both environmental and individual factors are studied. Nevertheless, such studies are scarce with teacher samples.\n\n\nAIMS\nThe present cross-sectional study sought to investigate the association between burnout, personality characteristics and job stressors in primary school teachers from Cyprus. The study also investigates the relative contribution of these variables on the three facets of burnout - emotional exhaustion, depersonalization and reduced personal accomplishment.\n\n\nSAMPLE\nA representative sample of 447 primary school teachers participated in the study.\n\n\nMETHOD\nTeachers completed measures of burnout, personality and job stressors along with demographic and professional data. Surveys were delivered by courier to schools, and were distributed at faculty meetings.\n\n\nRESULTS\nResults showed that both personality and work-related stressors were associated with burnout dimensions. Neuroticism was a common predictor of all dimensions of burnout although in personal accomplishment had a different direction. Managing student misbehaviour and time constraints were found to systematically predict dimensions of burnout.\n\n\nCONCLUSIONS\nTeachers' individual characteristics as well as job related stressors should be taken into consideration when studying the burnout phenomenon. The fact that each dimension of the syndrome is predicted by different variables should not remain unnoticed especially when designing and implementing intervention programmes to reduce burnout in teachers.",
"title": ""
},
{
"docid": "e1afaed983932bc98c5b0b057d4b5ab6",
"text": "This paper presents a novel solution for the problem of building text classifier using positive documents (P) and unlabeled documents (U). Here, the unlabeled documents are mixed with positive and negative documents. This problem is also called PU-Learning. The key feature of PU-Learning is that there is no negative document for training. Recently, several approaches have been proposed for solving this problem. Most of them are based on the same idea, which builds a classifier in two steps. Each existing technique uses a different method for each step. Generally speaking, these existing approaches do not perform well when the size of P is small. In this paper, we propose a new approach aiming at improving the system when the size of P is small. This approach combines the graph-based semi-supervised learning method with the two-step method. Experiments indicate that our proposed method performs well especially when the size of P is small.",
"title": ""
},
{
"docid": "9e3057c25630bfdf5e7ebcc53b6995b0",
"text": "We present a new solution to the ``ecological inference'' problem, of learning individual-level associations from aggregate data. This problem has a long history and has attracted much attention, debate, claims that it is unsolvable, and purported solutions. Unlike other ecological inference techniques, our method makes use of unlabeled individual-level data by embedding the distribution over these predictors into a vector in Hilbert space. Our approach relies on recent learning theory results for distribution regression, using kernel embeddings of distributions. Our novel approach to distribution regression exploits the connection between Gaussian process regression and kernel ridge regression, giving us a coherent, Bayesian approach to learning and inference and a convenient way to include prior information in the form of a spatial covariance function. Our approach is highly scalable as it relies on FastFood, a randomized explicit feature representation for kernel embeddings. We apply our approach to the challenging political science problem of modeling the voting behavior of demographic groups based on aggregate voting data. We consider the 2012 US Presidential election, and ask: what was the probability that members of various demographic groups supported Barack Obama, and how did this vary spatially across the country? Our results match standard survey-based exit polling data for the small number of states for which it is available, and serve to fill in the large gaps in this data, at a much higher degree of granularity.",
"title": ""
},
{
"docid": "e20d26ce3dea369ae6817139ff243355",
"text": "This article explores the roots of white support for capital punishment in the United States. Our analysis addresses individual-level and contextual factors, paying particular attention to how racial attitudes and racial composition influence white support for capital punishment. Our findings suggest that white support hinges on a range of attitudes wider than prior research has indicated, including social and governmental trust and individualist and authoritarian values. Extending individual-level analyses, we also find that white responses to capital punishment are sensitive to local context. Perhaps most important, our results clarify the impact of race in two ways. First, racial prejudice emerges here as a comparatively strong predictor of white support for the death penalty. Second, black residential proximity functions to polarize white opinion along lines of racial attitude. As the black percentage of county residents rises, so too does the impact of racial prejudice on white support for capital punishment.",
"title": ""
},
{
"docid": "ec1da767db4247990c26f97483f1b9e1",
"text": "We survey foundational features underlying modern graph query languages. We first discuss two popular graph data models: edge-labelled graphs, where nodes are connected by directed, labelled edges, and property graphs, where nodes and edges can further have attributes. Next we discuss the two most fundamental graph querying functionalities: graph patterns and navigational expressions. We start with graph patterns, in which a graph-structured query is matched against the data. Thereafter, we discuss navigational expressions, in which patterns can be matched recursively against the graph to navigate paths of arbitrary length; we give an overview of what kinds of expressions have been proposed and how they can be combined with graph patterns. We also discuss several semantics under which queries using the previous features can be evaluated, what effects the selection of features and semantics has on complexity, and offer examples of such features in three modern languages that are used to query graphs: SPARQL, Cypher, and Gremlin. We conclude by discussing the importance of formalisation for graph query languages; a summary of what is known about SPARQL, Cypher, and Gremlin in terms of expressivity and complexity; and an outline of possible future directions for the area.",
"title": ""
},
{
"docid": "3b26a62ec701f34c9876bd93c494412d",
"text": "Emotions affect many aspects of our daily lives including decision making, reasoning and physical wellbeing. Researchers have therefore addressed the detection of emotion from individuals' heart rate, skin conductance, pupil dilation, tone of voice, facial expression and electroencephalogram (EEG). This paper presents an algorithm for classifying positive and negative emotions from EEG. Unlike other algorithms that extract fuzzy rules from the data, the fuzzy rules used in this paper are obtained from emotion classification research reported in the literature and the classification output indicates both the type of emotion and its strength. The results show that the algorithm is more than 90 times faster than the widely used LIBSVM and the obtained average accuracy of 63.52 % is higher than previously reported using the same EEG dataset. This makes this algorithm attractive for real time emotion classification. In addition, the paper introduces a new oscillation feature computed from local minima and local maxima of the signal.",
"title": ""
},
{
"docid": "a227cdfba497e6e6d356e50fa5d90afc",
"text": "SUMMARY\nThe Biopython project is a mature open source international collaboration of volunteer developers, providing Python libraries for a wide range of bioinformatics problems. Biopython includes modules for reading and writing different sequence file formats and multiple sequence alignments, dealing with 3D macro molecular structures, interacting with common tools such as BLAST, ClustalW and EMBOSS, accessing key online databases, as well as providing numerical methods for statistical learning.\n\n\nAVAILABILITY\nBiopython is freely available, with documentation and source code at (www.biopython.org) under the Biopython license.",
"title": ""
},
{
"docid": "2959b7da07ce8b0e6825819566bce9ab",
"text": "Social isolation among the elderly is a concern in developed countries. Using a randomized trial, this study examined the effect of a social isolation prevention program on loneliness, depression, and subjective well-being of the elderly in Japan. Among the elderly people who relocated to suburban Tokyo, 63 who responded to a pre-test were randomized and assessed 1 and 6 months after the program. Four sessions of a group-based program were designed to prevent social isolation by improving community knowledge and networking with other participants and community \"gatekeepers.\" The Life Satisfaction Index A (LSI-A), Geriatric Depression Scale (GDS), Ando-Osada-Kodama (AOK) loneliness scale, social support, and other variables were used as outcomes of this study. A linear mixed model was used to compare 20 of the 21 people in the intervention group to 40 of the 42 in the control group, and showed that the intervention program had a significant positive effect on LSI-A, social support, and familiarity with services scores and a significant negative effect on AOK over the study period. The program had no significant effect on depression. The findings of this study suggest that programs aimed at preventing social isolation are effective when they utilize existing community resources, are tailor-made based on the specific needs of the individual, and target people who can share similar experiences.",
"title": ""
},
{
"docid": "1168c9e6ce258851b15b7e689f60e218",
"text": "Modern deep learning architectures produce highly accurate results on many challenging semantic segmentation datasets. State-of-the-art methods are, however, not directly transferable to real-time applications or embedded devices, since naïve adaptation of such systems to reduce computational cost (speed, memory and energy) causes a significant drop in accuracy. We propose ContextNet, a new deep neural network architecture which builds on factorized convolution, network compression and pyramid representation to produce competitive semantic segmentation in real-time with low memory requirement. ContextNet combines a deep network branch at low resolution that captures global context information efficiently with a shallow branch that focuses on highresolution segmentation details. We analyse our network in a thorough ablation study and present results on the Cityscapes dataset, achieving 66.1% accuracy at 18.3 frames per second at full (1024× 2048) resolution (23.2 fps with pipelined computations for streamed data).",
"title": ""
},
{
"docid": "625002b73c5e386989ddd243a71a1b56",
"text": "AutoTutor is a learning environment that tutors students by holding a conversation in natural language. AutoTutor has been developed for Newtonian qualitative physics and computer literacy. Its design was inspired by explanation-based constructivist theories of learning, intelligent tutoring systems that adaptively respond to student knowledge, and empirical research on dialogue patterns in tutorial discourse. AutoTutor presents challenging problems (formulated as questions) from a curriculum script and then engages in mixed initiative dialogue that guides the student in building an answer. It provides the student with positive, neutral, or negative feedback on the student's typed responses, pumps the student for more information, prompts the student to fill in missing words, gives hints, fills in missing information with assertions, identifies and corrects erroneous ideas, answers the student's questions, and summarizes answers. AutoTutor has produced learning gains of approximately .70 sigma for deep levels of comprehension.",
"title": ""
}
] |
scidocsrr
|
eedac8237e141a6be08a60687507900e
|
Machine vision: a survey
|
[
{
"docid": "4a5cfc32cccc96c49739cc49f311ddb4",
"text": "We present an approach for creating realistic synthetic views of existing architectural scenes from a sparse set of still photographs. Our approach, which combines both geometrybased and image-based modeling and rendering techniques, has two components. The rst component is an easy-to-use photogrammetric modeling system which facilitates the recovery of a basic geometric model of the photographed scene. The modeling system is e ective and robust because it exploits the constraints that are characteristic of architectural scenes. The second component is a model-based stereo algorithm, which recovers how the real scene deviates from the basic model. By making use of the model, our stereo approach can robustly recover accurate depth from image pairs with large baselines. Consequently, our approach can model large architectural environments with far fewer photographs than current imagebased modeling approaches. As an intermediate result, we present view-dependent texture mapping, a method of better simulating geometric detail on basic models. Our approach can recover models for use in either geometry-based or image-based rendering systems. We present results that demonstrate our approach's abilty to create realistic renderings of architectural scenes from viewpoints far from the original photographs.",
"title": ""
}
] |
[
{
"docid": "fdcea57edbe935ec9949247fd47888e6",
"text": "Maintenance of skeletal muscle mass is contingent upon the dynamic equilibrium (fasted losses-fed gains) in protein turnover. Of all nutrients, the single amino acid leucine (Leu) possesses the most marked anabolic characteristics in acting as a trigger element for the initiation of protein synthesis. While the mechanisms by which Leu is 'sensed' have been the subject of great scrutiny, as a branched-chain amino acid, Leu can be catabolized within muscle, thus posing the possibility that metabolites of Leu could be involved in mediating the anabolic effect(s) of Leu. Our objective was to measure muscle protein anabolism in response to Leu and its metabolite HMB. Using [1,2-(13)C2]Leu and [(2)H5]phenylalanine tracers, and GC-MS/GC-C-IRMS we studied the effect of HMB or Leu alone on MPS (by tracer incorporation into myofibrils), and for HMB we also measured muscle proteolysis (by arteriovenous (A-V) dilution). Orally consumed 3.42 g free-acid (FA-HMB) HMB (providing 2.42 g of pure HMB) exhibited rapid bioavailability in plasma and muscle and, similarly to 3.42 g Leu, stimulated muscle protein synthesis (MPS; HMB +70% vs. Leu +110%). While HMB and Leu both increased anabolic signalling (mechanistic target of rapamycin; mTOR), this was more pronounced with Leu (i.e. p70S6K1 signalling 90 min vs. 30 min for HMB). HMB consumption also attenuated muscle protein breakdown (MPB; -57%) in an insulin-independent manner. We conclude that exogenous HMB induces acute muscle anabolism (increased MPS and reduced MPB) albeit perhaps via distinct, and/or additional mechanism(s) to Leu.",
"title": ""
},
{
"docid": "b6e6784d18c596565ca1e4d881398a0d",
"text": "Uncovering lies (or deception) is of critical importance to many including law enforcement and security personnel. Though these people may try to use many different tactics to discover deception, previous research tells us that this cannot be accomplished successfully without aid. This manuscript reports on the promising results of a research study where data and text mining methods along with a sample of real-world data from a high-stakes situation is used to detect deception. At the end, the information fusion based classification models produced better than 74% classification accuracy on the holdout sample using a 10-fold cross validation methodology. Nonetheless, artificial neural networks and decision trees produced accuracy rates of 73.46% and 71.60% respectively. However, due to the high stakes associated with these types of decisions, the extra effort of combining the models to achieve higher accuracy",
"title": ""
},
{
"docid": "877d7d467711e8cb0fd03a941c7dc9da",
"text": "Film clips are widely utilized to elicit emotion in a variety of research studies. Normative ratings for scenes selected for these purposes support the idea that selected clips correspond to the intended target emotion, but studies reporting normative ratings are limited. Using an ethnically diverse sample of college undergraduates, selected clips were rated for intensity, discreteness, valence, and arousal. Variables hypothesized to affect the perception of stimuli (i.e., gender, race-ethnicity, and familiarity) were also examined. Our analyses generally indicated that males reacted strongly to positively valenced film clips, whereas females reacted more strongly to negatively valenced film clips. Caucasian participants tended to react more strongly to the film clips, and we found some variation by race-ethnicity across target emotions. Finally, familiarity with the films tended to produce higher ratings for positively valenced film clips, and lower ratings for negatively valenced film clips. These findings provide normative ratings for a useful set of film clips for the study of emotion, and they underscore factors to be considered in research that utilizes scenes from film for emotion elicitation.",
"title": ""
},
{
"docid": "2489fb3b63d40b3f851de5d1b5da4f45",
"text": "HANDEXOS is an exoskeleton device for supporting th e human hand and performing teleoperation activities. It could be used to opera te both in remote-manipulation mode and directly in microgravity environments. In manipulation mode, crew or operators within the space ship could tele-control the endeffector of a robot in the space during the executi on of extravehicular activities (EVA) by means of an advanced upper limb exoskeleton. The ch oice of an appropriate man-machine interface (MMI) is important to allow a correct and dexterous grasp of objects of regular and irregular shapes in the space. Many different t chnologies have been proposed, from conventional joysticks to exoskeletons, but the ari sing number of more and more dexterous space manipulators such as Robonaut [1] or Eurobot [2] leads researchers to design novel MMIs with the aim to be more capable to exploit all functional advantages offered by new space robots. From this point of view exoskeletons better suite for execution of remote control-task than conventional joysticks, facilitat ing commanding of three dimensional trajectories and saving time in crew’s operation a nd training [3]. Moreover, it’s important to point out that in micro gravity environments the astronauts spend most time doing motor exercises, so HANDEXOS can be useful in supporting such motor practice, assisting human operators in overco ming physical limitations deriving from the fatigue in performing EVA. It is the goal of this paper to provide a detailed description of HANDEXOS mechanical design and to present the results of the preliminar y simulations derived from the implementation of the exoskeleton/human finger dyna mic model for different actuation solutions.",
"title": ""
},
{
"docid": "7fd7f6f14e2623695ce3bb99c22db880",
"text": "INTRODUCTION 425 DEFINING PLAY 426 THEORIES OF PLAY 428 Piaget 428 Vygotsky 429 VARIETIES OF PLAY AND THEIR DEVELOPMENTAL COURSE 430 Sensorimotor and Object Play 430 Physical or Locomotor Play 430 Rough-and-Tumble Play 431 Exploratory Play 431 Construction Play 432 Symbolic Play 432 Summary 433 CONTEMPORARY ISSUES IN PLAY RESEARCH 433 Pretend Play and Theory of Mind 433 Symbolic Understanding 439 Object Substitution 441 Distinguishing Pretense From Reality 442 Initiating Pretend Play 446 Does Play Improve Developmental Outcomes? 447 INTERINDIVIDUAL DIFFERENCES IN PLAY 451 Gender Differences in Play 451 The Play of Atypically Developing Children 451 Play Across Cultures 454 FUTURE DIRECTIONS 457 Changing Modes of Play 457 Why Children Pretend 458 Play Across the Life Span 459 CONCLUSION 459 REFERENCES 460",
"title": ""
},
{
"docid": "127d6d93290a1953b8baff45e42858cb",
"text": "Compressing convolutional neural networks (CNNs) is essential for transferring the success of CNNs to a wide variety of applications to mobile devices. In contrast to directly recognizing subtle weights or filters as redundant in a given CNN, this paper presents an evolutionary method to automatically eliminate redundant convolution filters. We represent each compressed network as a binary individual of specific fitness. Then, the population is upgraded at each evolutionary iteration using genetic operations. As a result, an extremely compact CNN is generated using the fittest individual. In this approach, either large or small convolution filters can be redundant, and filters in the compressed network are more distinct. In addition, since the number of filters in each convolutional layer is reduced, the number of filter channels and the size of feature maps are also decreased, naturally improving both the compression and speed-up ratios. Experiments on benchmark deep CNN models suggest the superiority of the proposed algorithm over the state-of-the-art compression methods.",
"title": ""
},
{
"docid": "27b8e6f3781bd4010c92a705ba4d5fcc",
"text": "Maximum power point tracking (MPPT) strategies in photovoltaic (PV) systems ensure efficient utilization of PV arrays. Among different strategies, the perturb and observe (P&O) algorithm has gained wide popularity due to its intuitive nature and simple implementation. However, such simplicity in P&O introduces two inherent issues, namely, an artificial perturbation that creates losses in steady-state operation and a limited ability to track transients in changing environmental conditions. This paper develops and discusses in detail an MPPT algorithm with zero oscillation and slope tracking to address those technical challenges. The strategy combines three techniques to improve steady-state behavior and transient operation: 1) idle operation on the maximum power point (MPP); 2) identification of the irradiance change through a natural perturbation; and 3) a simple multilevel adaptive tracking step. Two key elements, which form the foundation of the proposed solution, are investigated: 1) the suppression of the artificial perturb at the MPP; and 2) the indirect identification of irradiance change through a current-monitoring algorithm, which acts as a natural perturbation. The zero-oscillation adaptive step P&O strategy builds on these mechanisms to identify relevant information and to produce efficiency gains. As a result, the combined techniques achieve superior overall performance while maintaining simplicity of implementation. Simulations and experimental results are provided to validate the proposed strategy, and to illustrate its behavior in steady and transient operations.",
"title": ""
},
{
"docid": "bb5e00ac09e12f3cdb097c8d6cfde9a9",
"text": "3D biomaterial printing has emerged as a potentially revolutionary technology, promising to transform both research and medical therapeutics. Although there has been recent progress in the field, on-demand fabrication of functional and transplantable tissues and organs is still a distant reality. To advance to this point, there are two major technical challenges that must be overcome. The first is expanding upon the limited variety of available 3D printable biomaterials (biomaterial inks), which currently do not adequately represent the physical, chemical, and biological complexity and diversity of tissues and organs within the human body. Newly developed biomaterial inks and the resulting 3D printed constructs must meet numerous interdependent requirements, including those that lead to optimal printing, structural, and biological outcomes. The second challenge is developing and implementing comprehensive biomaterial ink and printed structure characterization combined with in vitro and in vivo tissueand organ-specific evaluation. This perspective outlines considerations for addressing these technical hurdles that, once overcome, will facilitate rapid advancement of 3D biomaterial printing as an indispensable tool for both investigating complex tissue and organ morphogenesis and for developing functional devices for a variety of diagnostic and regenerative medicine applications. PAPER 5 Contributed equally to this work. REcEivEd",
"title": ""
},
{
"docid": "5cac184d3eb964a51722321096918ffb",
"text": "We propose an effective technique to solving review-level sentiment classification problem by using sentence-level polarity correction. Our polarity correction technique takes into account the consistency of the polarities (positive and negative) of sentences within each product review before performing the actual machine learning task. While sentences with inconsistent polarities are removed, sentences with consistent polarities are used to learn state-of-the-art classifiers. The technique achieved better results on different types of products reviews and outperforms baseline models without the correction technique. Experimental results show an average of 82% F-measure on four different product review domains.",
"title": ""
},
{
"docid": "c3f3ed8a363d8dcf9ac1efebfa116665",
"text": "We report a new phenomenon associated with language comprehension: the action-sentence compatibility effect (ACE). Participants judged whether sentences were sensible by making a response that required moving toward or away from their bodies. When a sentence implied action in one direction (e.g., \"Close the drawer\" implies action away from the body), the participants had difficulty making a sensibility judgment requiring a response in the opposite direction. The ACE was demonstrated for three sentences types: imperative sentences, sentences describing the transfer of concrete objects, and sentences describing the transfer of abstract entities, such as \"Liz told you the story.\" These dataare inconsistent with theories of language comprehension in which meaning is represented as a set of relations among nodes. Instead, the data support an embodied theory of meaning that relates the meaning of sentences to human action.",
"title": ""
},
{
"docid": "c60c83c93577377bad43ed1972079603",
"text": "In this contribution, a set of robust GaN MMIC T/R switches and low-noise amplifiers, all based on the same GaN process, is presented. The target operating bandwidths are the X-band and the 2-18 GHz bandwidth. Several robustness tests on the fabricated MMICs demonstrate state-ofthe-art survivability to CW input power levels. The development of high-power amplifiers, robust low-noise amplifiers and T/R switches on the same GaN monolithic process will bring to the next generation of fully-integrated T/R module",
"title": ""
},
{
"docid": "e083b5fdf76bab5cdc8fcafc77db23f7",
"text": "Working under a model of privacy in which data remains private even from the statistician, we study the tradeoff between privacy guarantees and the risk of the resulting statistical estimators. We develop private versions of classical information-theoretic bounds, in particular those due to Le Cam, Fano, and Assouad. These inequalities allow for a precise characterization of statistical rates under local privacy constraints and the development of provably (minimax) optimal estimation procedures. We provide a treatment of several canonical families of problems: mean estimation and median estimation, multinomial probability estimation, and nonparametric density estimation. For all of these families, we provide lower and upper bounds that match up to constant factors, and exhibit new (optimal) privacy-preserving mechanisms and computationally efficient estimators that achieve the bounds. Additionally, we present a variety of experimental results for estimation problems involving sensitive data, including salaries, censored blog posts and articles, and drug abuse; these experiments demonstrate the importance of deriving optimal procedures.",
"title": ""
},
{
"docid": "c0ae84c759f20ac8eb7f93c28d4f3835",
"text": "The first part of this paper summarises the key points about the use of celebrities in advertising, sets this particular creative technique in context and demonstrates how significant its return on investment can be. In the second part the paper goes on to report a more detailed analysis of the ‘celebrity’ case histories among the winners in the IPA Effectiveness Awards, and how practitioners have applied celebrity use to brands to make exceptional impacts on profitability. DEFINITIONS ’Advertising’ Throughout this paper the word ‘advertising’ has the sense that the general public gives it, that is ‘anything that has a name on it is advertising’. This consumer definition results from extensive qualitative research (Ford-Hutchinson and Rothwell, 2002) conducted in 2002 by the Advertising Standards Authority (ASA), the UK advertising self-regulatory body. Its simplicity and directness reminds one that, while the industry sees itself promoting brands in a whole host of different ways, it is all ‘advertising’ from the customer’s point of view. Within the ‘marcoms’ industry, practitioners tend to segment these activities into particular niches and refer to the agencies that specialise in them as being in the creative, media, direct marketing, self-promotion, public relations, sponsorship, digital, new media and outdoor sectors, to name just a few. It would be very long-winded to list all these specialisms every time, and so the word ‘advertising’ will be used instead. Occasionally, and for variety, the words ‘marcoms’, ‘marketing communications’ or ‘commercial communications’ are employed instead of ‘advertising’. These terms are used interchangeably to signify all the means by which brands are promoted by Hamish Pringle Director General, IPA, 44 Belgrave Square, London, SW1X 8QS Tel: 020 7201 8201; 07977 269778 (m) e-mail: hamish@ ipa.co.uk",
"title": ""
},
{
"docid": "27bf341c8c91713b5b9ebed84f78c92b",
"text": "The Agile Manifesto and Agile Principles are typically referred to as the definitions of \"agile\" and \"agility\". There is research on agile values and agile practises, but how should “Scaled Agility” be defined, and what might be the characteristics and principles of Scaled Agile? This paper examines the characteristics of scaled agile, and the principles that are used to build up such agility. It also gives suggestions as principles upon which Scaled Agility can be built.",
"title": ""
},
{
"docid": "e0f89b22f215c140f69a22e6b573df41",
"text": "In this paper, a 10-bit 0.5V 100 kS/s successive approximation register (SAR) analog-to-digital converter (ADC) with a new fully dynamic rail-to-rail comparator is presented. The proposed comparator enhances the input signal range to the rail-to-rail mode, and hence, improves the signal-to-noise ratio (SNR) of the ADC in low supply voltages. The e®ect of the latch o®set voltage is reduced by providing a higher voltage gain in the regenerative latch. To reduce the ADC power consumption further, the binary-weighted capacitive array with an attenuation capacitor (BWA) is employed as the digital-to-analog converter (DAC) in this design. The ADC is designed and simulated in a 90 nm CMOS process with a single 0.5V power supply. Spectre simulation results show that the average power consumption of the proposed ADC is about 400 nW and the peak signal-to-noise plus distortion ratio (SNDR) is 56 dB. By considering 10% increase in total ADC power consumption due to the parasitics and a loss of 0.22 LSB in ENOB due to the DAC capacitors mismatch, the achieved ̄gure of merit (FoM) is 11.4 fJ/conversion-step.",
"title": ""
},
{
"docid": "a94f066ec5db089da7fd19ac30fe6ee3",
"text": "Information Centric Networking (ICN) is a new networking paradigm in which the ne twork provides users with content instead of communicatio n channels between hosts. Software Defined Networking (SDN) is an approach that promises to enable the co ntinuous evolution of networking architectures. In this paper we propose and discuss solutions to support ICN by using SDN concepts. We focus on an ICN framework called CONET, which groun ds its roots in the CCN/NDN architecture and can interwork with its implementation (CCNx). Altho ugh some details of our solution have been specifically designed for the CONET architecture, i ts general ideas and concepts are applicable to a c lass of recent ICN proposals, which follow the basic mod e of operation of CCN/NDN. We approach the problem in two complementary ways. First we discuss a general and long term solution based on SDN concepts without taking into account specific limit ations of SDN standards and equipment. Then we focus on an experiment to support ICN functionality over a large scale SDN testbed based on OpenFlow, developed in the context of the OFELIA Eu ropean research project. The current OFELIA testbed is based on OpenFlow 1.0 equipment from a v ariety of vendors, therefore we had to design the experiment taking into account the features that ar e currently available on off-the-shelf OpenFlow equipment.",
"title": ""
},
{
"docid": "d70235bc7fb94e1e3d1d301f8d1835cb",
"text": "How does the brain orchestrate perceptions, thoughts and actions from the spiking activity of its neurons? Early single-neuron recording research treated spike pattern variability as noise that needed to be averaged out to reveal the brain's representation of invariant input. Another view is that variability of spikes is centrally coordinated and that this brain-generated ensemble pattern in cortical structures is itself a potential source of cognition. Large-scale recordings from neuronal ensembles now offer the opportunity to test these competing theoretical frameworks. Currently, wire and micro-machined silicon electrode arrays can record from large numbers of neurons and monitor local neural circuits at work. Achieving the full potential of massively parallel neuronal recordings, however, will require further development of the neuron–electrode interface, automated and efficient spike-sorting algorithms for effective isolation and identification of single neurons, and new mathematical insights for the analysis of network properties.",
"title": ""
},
{
"docid": "c5628c76f448fb71165069aefc75a2c4",
"text": "This research work aims to design and develop a wireless food ordering system in the restaurant. The project presents in-depth on the technical operation of the Wireless Ordering System (WOS) including systems architecture, function, limitations and recommendations. It is believed that with the increasing use of handheld device e.g PDAs in restaurants, pervasive application will become an important tool for restaurants to improve the management aspect by utilizing PDAs to coordinate food ordering could increase efficiency for restaurants and caterers by saving time, reducing human errors and by providing higher quality customer service. With the combination of simple design and readily available emerging communications technologies, it can be concluded that this system is an attractive solution for the hospitality industry.",
"title": ""
},
{
"docid": "d2f8f98289b59c3ff7c3fd3ec4599945",
"text": "Massive public resume data emerging on the internet indicates individual-related characteristics in terms of profile and career experiences. Resume Analysis (RA) provides opportunities for many applications, such as recruitment trend predict, talent seeking and evaluation. Existing RA studies either largely rely on the knowledge of domain experts, or leverage classic statistical or data mining models to identify and filter explicit attributes based on pre-defined rules. However, they fail to discover the latent semantic information from semi-structured resume text, i.e., individual career progress trajectory and social-relations, which are otherwise vital to comprehensive understanding of people’s career evolving patterns. Besides, when dealing with large numbers of resumes, how to properly visualize such semantic information to reduce the information load and to support better human cognition is also challenging.\n To tackle these issues, we propose a visual analytics system called ResumeVis to mine and visualize resume data. First, a text mining-based approach is presented to extract semantic information. Then, a set of visualizations are devised to represent the semantic information in multiple perspectives. Through interactive exploration on ResumeVis performed by domain experts, the following tasks can be accomplished: to trace individual career evolving trajectory; to mine latent social-relations among individuals; and to hold the full picture of massive resumes’ collective mobility. Case studies with over 2,500 government officer resumes demonstrate the effectiveness of our system.",
"title": ""
},
{
"docid": "8858053a805375aba9d8e71acfd7b826",
"text": "With the accelerating rate of globalization, business exchanges are carried out cross the border, as a result there is a growing demand for talents professional both in English and Business. We can see that at present Business English courses are offered by many language schools in the aim of meeting the need for Business English talent. Many researchers argue that no differences can be defined between Business English teaching and General English teaching. However, this paper concludes that Business English is different from General English at least in such aspects as in the role of teacher, in course design, in teaching models, etc., thus different teaching methods should be applied in order to realize expected teaching goals.",
"title": ""
}
] |
scidocsrr
|
d66efb72f65731b2c038286914adc689
|
Lumped-Element Fully Tunable Bandstop Filters for Cognitive Radio Applications
|
[
{
"docid": "e5e1146fd0704357d865574da45ab2e5",
"text": "This paper presents a compact low-loss tunable X-band bandstop filter implemented on a quartz substrate using both miniature RF microelectromechanical systems (RF-MEMS) capacitive switches and GaAs varactors. The two-pole filter is based on capacitively loaded folded-λ/2 resonators that are coupled to a microstrip line, and the filter analysis includes the effects of nonadjacent inter-resonator coupling. The RF-MEMS filter tunes from 11.34 to 8.92 GHz with a - 20-dB rejection bandwidth of 1.18%-3.51% and a filter quality factor of 60-135. The GaAs varactor loaded filter tunes from 9.56 to 8.66 GHz with a - 20-dB bandwidth of 1.65%-2% and a filter quality factor of 55-90. Nonlinear measurements at the filter null with Δf = 1 MHz show that the RF-MEMS loaded filter results in > 25-dBm higher third-order intermodulation intercept point and P-1 dB compared with the varactor loaded filter. Both filters show high rejection levels ( > 24 dB) and low passband insertion loss ( <; 0.8 dB) from dc to the first spurious response at 19.5 GHz. The filter topology can be extended to higher order designs with an even number of poles.",
"title": ""
}
] |
[
{
"docid": "24d77eb4ea6ecaa44e652216866ab8c8",
"text": "In the development of smart cities across the world VANET plays a vital role for optimized route between source and destination. The VANETs is based on infra-structure less network. It facilitates vehicles to give information about safety through vehicle to vehicle communication (V2V) or vehicle to infrastructure communication (V2I). In VANETs wireless communication between vehicles so attackers violate authenticity, confidentiality and privacy properties which further effect security. The VANET technology is encircled with security challenges these days. This paper presents overview on VANETs architecture, a related survey on VANET with major concern of the security issues. Further, prevention measures of those issues, and comparative analysis is done. From the survey, found out that encryption and authentication plays an important role in VANETS also some research direction defined for future work.",
"title": ""
},
{
"docid": "167dbfaa3b6db3fec5d9f83aacdcbfe8",
"text": "Implementing a Natural Language Processing (NLP) system requires considerable engineering effort: creating data-structures to represent language constructs; reading corpora annotations into these data-structures; applying off-the-shelf NLP tools to augment the text representation; extracting features and training machine learning components; conducting experiments and computing performance statistics; and creating the end-user application that integrates the implemented components. While there are several widely used NLP libraries, each provides only partial coverage of these various tasks. We present our library COGCOMPNLP which simplifies the process of design and development of NLP applications by providing modules to address different challenges: we provide a corpus-reader module that supports popular corpora in the NLP community, a module for various low-level data-structures and operations (such as search over text), a module for feature extraction, and an extensive suite of annotation modules for a wide range of semantic and syntactic tasks. These annotation modules are all integrated in a single system, PIPELINE, which allows users to easily use the annotators with simple direct calls using any JVM-based language, or over a network. The sister project COGCOMPNLPY enables users to access the annotators with a Python interface. We give a detailed account of our system’s structure and usage, and where possible, compare it with other established NLP frameworks. We report on the performance, including time and memory statistics, of each component on a selection of well-established datasets. Our system is publicly available for research use and external contributions, at: http://github.com/CogComp/cogcomp-nlp.",
"title": ""
},
{
"docid": "12680d4fcf57a8a18d9c2e2b1107bf2d",
"text": "Recent advances in computer and technology resulted into ever increasing set of documents. The need is to classify the set of documents according to the type. Laying related documents together is expedient for decision making. Researchers who perform interdisciplinary research acquire repositories on different topics. Classifying the repositories according to the topic is a real need to analyze the research papers. Experiments are tried on different real and artificial datasets such as NEWS 20, Reuters, emails, research papers on different topics. Term Frequency-Inverse Document Frequency algorithm is used along with fuzzy K-means and hierarchical algorithm. Initially experiment is being carried out on small dataset and performed cluster analysis. The best algorithm is applied on the extended dataset. Along with different clusters of the related documents the resulted silhouette coefficient, entropy and F-measure trend are presented to show algorithm behavior for each data set.",
"title": ""
},
{
"docid": "75961ecd0eadf854ad9f7d0d76f7e9c8",
"text": "This paper presents the design of a microstrip-CPW transition where the CPW line propagates close to slotline mode. This design allows the solution to be determined entirely though analytical techniques. In addition, a planar via-less microwave crossover using this technique is proposed. The experimental results at 5 GHz show that the crossover has a minimum isolation of 32 dB. It also has low in-band insertion loss and return loss of 1.2 dB and 18 dB respectively over more than 44 % of bandwidth.",
"title": ""
},
{
"docid": "cf1d8589fb42bd2af21e488e3ea79765",
"text": "This paper presents ProRace, a dynamic data race detector practical for production runs. It is lightweight, but still offers high race detection capability. To track memory accesses, ProRace leverages instruction sampling using the performance monitoring unit (PMU) in commodity processors. Our PMU driver enables ProRace to sample more memory accesses at a lower cost compared to the state-of-the-art Linux driver. Moreover, ProRace uses PMU-provided execution contexts including register states and program path, and reconstructs unsampled memory accesses offline. This technique allows \\ProRace to overcome inherent limitations of sampling and improve the detection coverage by performing data race detection on the trace with not only sampled but also reconstructed memory accesses. Experiments using racy production software including apache and mysql shows that, with a reasonable offline cost, ProRace incurs only 2.6% overhead at runtime with 27.5% detection probability with a sampling period of 10,000.",
"title": ""
},
{
"docid": "f9ff8dbf8537dffd40ccd938dcb758a8",
"text": "In this paper, we propose to cryptanalyse an encryption algorithm which combines a DNA addition and a chaotic map to encrypt a gray scale image. Our contribution consists on, at first, demonstrating that the algorithm, as it is described, is non-invertible, which means that the receiver cannot decrypt the ciphered image even if he posses the secret key. Then, a chosen plaintext attack on the invertible encryption block is described, where, the attacker can illegally decrypt the ciphered image by a temporary access to the encryption machinery.",
"title": ""
},
{
"docid": "b95190b1139935bdc40634fe0650a51c",
"text": "Much of recent research has been devoted to video prediction and generation, yet most of the previous works have demonstrated only limited success in generating videos on short-term horizons. The hierarchical video prediction method by Villegas et al. (2017b) is an example of a state-of-the-art method for long-term video prediction, but their method is limited because it requires ground truth annotation of high-level structures (e.g., human joint landmarks) at training time. Our network encodes the input frame, predicts a high-level encoding into the future, and then a decoder with access to the first frame produces the predicted image from the predicted encoding. The decoder also produces a mask that outlines the predicted foreground object (e.g., person) as a by-product. Unlike Villegas et al. (2017b), we develop a novel training method that jointly trains the encoder, the predictor, and the decoder together without highlevel supervision; we further improve upon this by using an adversarial loss in the feature space to train the predictor. Our method can predict about 20 seconds into the future and provides better results compared to Denton and Fergus (2018) and Finn et al. (2016) on the Human 3.6M dataset.",
"title": ""
},
{
"docid": "89eee86640807e11fa02d0de4862b3a5",
"text": "The evolving fifth generation (5G) cellular wireless networks are envisioned to overcome the fundamental challenges of existing cellular networks, for example, higher data rates, excellent end-to-end performance, and user-coverage in hot-spots and crowded areas with lower latency, energy consumption, and cost per information transfer. To address these challenges, 5G systems will adopt a multi-tier architecture consisting of macrocells, different types of licensed small cells, relays, and device-to-device (D2D) networks to serve users with different quality-of-service (QoS) requirements in a spectrum and energy-efficient manner. Starting with the visions and requirements of 5G multi-tier networks, this article outlines the challenges of interference management (e.g. power control, cell association) in these networks with shared spectrum access (i.e. when the different network tiers share the same licensed spectrum). It is argued that the existing interference management schemes will not be able to address the interference management problem in prioritized 5G multi-tier networks where users in different tiers have different priorities for channel access. In this context a survey and qualitative comparison of the existing cell association and power control schemes is provided to demonstrate their limitations for interference management in 5G networks. Open challenges are highlighted and guidelines are provided to modify the existing schemes in order to overcome these limitations and make them suitable for the emerging 5G systems.",
"title": ""
},
{
"docid": "779a8cf77a038dd2d0f852e3bd6e78fe",
"text": "Systematic reviews are generally placed above narrative reviews in an assumed hierarchy of secondary research evidence. We argue that systematic reviews and narrative reviews serve different purposes and should be viewed as complementary. Conventional systematic reviews address narrowly focused questions; their key contribution is summarising data. Narrative reviews provide interpretation and critique; their key contribution is deepening understanding. This article is protected by copyright. All rights reserved.",
"title": ""
},
{
"docid": "3886d46c2420216f5950cfc22597c82e",
"text": "In this article, we describe a new approach to enhance driving safety via multi-media technologies by recognizing and adapting to drivers’ emotions with multi-modal intelligent car interfaces. The primary objective of this research was to build an affectively intelligent and adaptive car interface that could facilitate a natural communication with its user (i.e., the driver). This objective was achieved by recognizing drivers’ affective states (i.e., emotions experienced by the drivers) and by responding to those emotions by adapting to the current situation via an affective user model created for each individual driver. A controlled experiment was designed and conducted in a virtual reality environment to collect physiological data signals (galvanic skin response, heart rate, and temperature) from participants who experienced driving-related emotions and states (neutrality, panic/fear, frustration/anger, and boredom/sleepiness). k-Nearest Neighbor (KNN), Marquardt-Backpropagation (MBP), and Resilient Backpropagation (RBP) Algorithms were implemented to analyze the collected data signals and to find unique physiological patterns of emotions. RBP was the best classifier of these three emotions with 82.6% accuracy, followed by MBP with 73.26% and by KNN with 65.33%. Adaptation of the interface was designed to provide multi-modal feedback to the users about their current affective state and to respond to users’ negative emotional states in order to decrease the possible negative impacts of those emotions. Bayesian Belief Networks formalization was employed to develop the user model to enable the intelligent system to appropriately adapt to the current context and situation by considering user-dependent factors, such as personality traits and preferences. 2010 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "bdf191e0f2b06f13da05a08f34901459",
"text": "This paper presents a deduplication storage system over cloud computing. Our deduplication storage system consists of two major components, a front-end deduplication application and Hadoop Distributed File System. Hadoop Distributed File System is common back-end distribution file system, which is used with a Hadoop database. We use Hadoop Distributed File System to build up a mass storage system and use a Hadoop database to build up a fast indexing system. With the deduplication applications, a scalable and parallel deduplicated cloud storage system can be effectively built up. We further use VMware to generate a simulated cloud environment. The simulation results demonstrate that our deduplication cloud storage system is more efficient than traditional deduplication approaches.",
"title": ""
},
{
"docid": "194bea0d713d5d167e145e43b3c8b4e2",
"text": "Users can enjoy personalized services provided by various context-aware applications that collect users' contexts through sensor-equipped smartphones. Meanwhile, serious privacy concerns arise due to the lack of privacy preservation mechanisms. Currently, most mechanisms apply passive defense policies in which the released contexts from a privacy preservation system are always real, leading to a great probability with which an adversary infers the hidden sensitive contexts about the users. In this paper, we apply a deception policy for privacy preservation and present a novel technique, FakeMask, in which fake contexts may be released to provably preserve users' privacy. The output sequence of contexts by FakeMask can be accessed by the untrusted context-aware applications or be used to answer queries from those applications. Since the output contexts may be different from the original contexts, an adversary has greater difficulty in inferring the real contexts. Therefore, FakeMask limits what adversaries can learn from the output sequence of contexts about the user being in sensitive contexts, even if the adversaries are powerful enough to have the knowledge about the system and the temporal correlations among the contexts. The essence of FakeMask is a privacy checking algorithm which decides whether to release a fake context for the current context of the user. We present a novel privacy checking algorithm and an efficient one to accelerate the privacy checking process. Extensive evaluation experiments on real smartphone context traces of users demonstrate the improved performance of FakeMask over other approaches.",
"title": ""
},
{
"docid": "03e7d909183b66cc3b45eed6ac2de9dd",
"text": "A s the millennium draws to a close, it is apparent that one question towers above all others in the life sciences: How does the set of processes we call mind emerge from the activity of the organ we call brain? The question is hardly new. It has been formulated in one way or another for centuries. Once it became possible to pose the question and not be burned at the stake, it has been asked openly and insistently. Recently the question has preoccupied both the experts—neuroscientists, cognitive scientists and philosophers—and others who wonder about the origin of the mind, specifically the conscious mind. The question of consciousness now occupies center stage because biology in general and neuroscience in particular have been so remarkably successful at unraveling a great many of life’s secrets. More may have been learned about the brain and the mind in the 1990s—the so-called decade of the brain—than during the entire previous history of psychology and neuroscience. Elucidating the neurobiological basis of the conscious mind—a version of the classic mind-body problem—has become almost a residual challenge. Contemplation of the mind may induce timidity in the contemplator, especially when consciousness becomes the focus of the inquiry. Some thinkers, expert and amateur alike, believe the question may be unanswerable in principle. For others, the relentless and exponential increase in new knowledge may give rise to a vertiginous feeling that no problem can resist the assault of science if only the theory is right and the techniques are powerful enough. The debate is intriguing and even unexpected, as no comparable doubts have been raised over the likelihood of explaining how the brain is responsible for processes such as vision or memory, which are obvious components of the larger process of the conscious mind. The multimedia mind-show occurs constantly as the brain processes external and internal sensory events. As the brain answers the unasked question of who is experiencing the mindshow, the sense of self emerges. by Antonio R. Damasio",
"title": ""
},
{
"docid": "cfa6b417658cfc1b25200a8ff578ed2c",
"text": "The Learning Analytics (LA) discipline analyzes educational data obtained from student interaction with online resources. Most of the data is collected from Learning Management Systems deployed at established educational institutions. In addition, other learning platforms, most notably Massive Open Online Courses such as Udacity and Coursera or other educational initiatives such as Khan Academy, generate large amounts of data. However, there is no generally agreedupon data model for student interactions. Thus, analysis tools must be tailored to each system's particular data structure, reducing their interoperability and increasing development costs. Some e-Learning standards designed for content interoperability include data models for gathering student performance information. In this paper, we describe how well-known LA tools collect data, which we link to how two e-Learning standards - IEEE Standard for Learning Technology and Experience API - define their data models. From this analysis, we identify the advantages of using these e-Learning standards from the point of view of Learning Analytics.",
"title": ""
},
{
"docid": "d522f9a8b0d2a870a8142e20acff5028",
"text": "Node-list and N-list, two novel data structure proposed in recent years, have been proven to be very efficient for mining frequent itemsets. The main problem of these structures is that they both need to encode each node of a PPC-tree with pre-order and post-order code. This causes that they are memory consuming and inconvenient to mine frequent itemsets. In this paper, we propose Nodeset, a more efficient data structure, for mining frequent itemsets. Nodesets require only the pre-order (or post-order code) of each node, which makes it saves half of memory compared with N-lists and Node-lists. Based on Nodesets, we present an efficient algorithm called FIN to mining frequent itemsets. For evaluating the performance of FIN, we have conduct experiments to compare it with PrePost and FP-growth ⁄ , two state-of-the-art algorithms, on a variety of real and synthetic datasets. The experimental results show that FIN is high performance on both running time and memory usage. Frequent itemset mining, first proposed by Agrawal, Imielinski, and Swami (1993), has become a fundamental task in the field of data mining because it has been widely used in many important data mining tasks such as mining associations, correlations, episodes , and etc. Since the first proposal of frequent itemset mining, hundreds of algorithms have been proposed on various kinds of extensions and applications, ranging from scalable data mining methodologies, to handling a wide diversity of data types, various extended mining tasks, and a variety of new applications (Han, Cheng, Xin, & Yan, 2007). In recent years, we present two data structures called Node-list (Deng & Wang, 2010) and N-list (Deng, Wang, & Jiang, 2012) for facilitating the mining process of frequent itemsets. Both structures use nodes with pre-order and post-order to represent an itemset. Based on Node-list and N-list, two algorithms called PPV (Deng & Wang, 2010) and PrePost (Deng et al., 2012) are proposed, respectively for mining frequent itemsets. The high efficiency of PPV and PrePost is achieved by the compressed characteristic of Node-lists and N-lists. However, they are memory-consuming because Node-lists and N-lists need to encode a node with pre-order and post-order. In addition, the nodes' code model of Node-list and N-list is not suitable to join Node-lists or N-lists of two short itemsets to generate the Node-list or N-list of a long itemset. This may affect the efficiency of corresponding algorithms. Therefore, how to design an efficient data structure without …",
"title": ""
},
{
"docid": "d0b509f5776f7cdf3c4a108e0dfafd47",
"text": "Motivated by the recent success in applying deep learning for natural image analysis, we designed an image segmentation system based on deep Convolutional Neural Network (CNN) to detect the presence of soft tissue sarcoma from multi-modality medical images, including Magnetic Resonance Imaging (MRI), Computed Tomography (CT) and Positron Emission Tomography (PET). Multi-modality imaging analysis using deep learning has been increasingly applied in the field of biomedical imaging and brought unique value to medical applications. However, it is still challenging to perform the multi-modal analysis owing to a major difficulty that is how to fuse the information derived from different modalities. There exist varies of possible schemes which are application-dependent and lack of a unified framework to guide their designs. Aiming at lesion segmentation with multi-modality images, we innovatively propose a conceptual image fusion architecture for supervised biomedical image analysis. The architecture has been optimized by testing different fusion schemes within the CNN structure, including fusing at the feature learning level, fusing at the classifier level, and the fusing at the decision-making level. It is found from the results that while all the fusion schemes outperform the single-modality schemes, fusing at the feature level can generally achieve the best performance in terms of both accuracy and computational cost, but can also suffer from the decreased robustness due to the presence of large errors in one or more image modalities.",
"title": ""
},
{
"docid": "6c221c4085c6868640c236b4dd72f777",
"text": "Resilience has been most frequently defined as positive adaptation despite adversity. Over the past 40 years, resilience research has gone through several stages. From an initial focus on the invulnerable or invincible child, psychologists began to recognize that much of what seems to promote resilience originates outside of the individual. This led to a search for resilience factors at the individual, family, community - and, most recently, cultural - levels. In addition to the effects that community and culture have on resilience in individuals, there is growing interest in resilience as a feature of entire communities and cultural groups. Contemporary researchers have found that resilience factors vary in different risk contexts and this has contributed to the notion that resilience is a process. In order to characterize the resilience process in a particular context, it is necessary to identify and measure the risk involved and, in this regard, perceived discrimination and historical trauma are part of the context in many Aboriginal communities. Researchers also seek to understand how particular protective factors interact with risk factors and with other protective factors to support relative resistance. For this purpose they have developed resilience models of three main types: \"compensatory,\" \"protective,\" and \"challenge\" models. Two additional concepts are resilient reintegration, in which a confrontation with adversity leads individuals to a new level of growth, and the notion endorsed by some Aboriginal educators that resilience is an innate quality that needs only to be properly awakened.The review suggests five areas for future research with an emphasis on youth: 1) studies to improve understanding of what makes some Aboriginal youth respond positively to risk and adversity and others not; 2) case studies providing empirical confirmation of the theory of resilient reintegration among Aboriginal youth; 3) more comparative studies on the role of culture as a resource for resilience; 4) studies to improve understanding of how Aboriginal youth, especially urban youth, who do not live in self-governed communities with strong cultural continuity can be helped to become, or remain, resilient; and 5) greater involvement of Aboriginal researchers who can bring a nonlinear world view to resilience research.",
"title": ""
},
{
"docid": "c92593172fafc266a67a049bd95082dc",
"text": "The goals of the present study were to apply a generalized regression model and support vector machine (SVM) models with Shape Signatures descriptors, to the domain of blood–brain barrier (BBB) modeling. The Shape Signatures method is a novel computational tool that was used to generate molecular descriptors utilized with the SVM classification technique with various BBB datasets. For comparison purposes we have created a generalized linear regression model with eight MOE descriptors and these same descriptors were also used to create SVM models. The generalized regression model was tested on 100 molecules not in the model and resulted in a correlation r 2 = 0.65. SVM models with MOE descriptors were superior to regression models, while Shape Signatures SVM models were comparable or better than those with MOE descriptors. The best 2D shape signature models had 10-fold cross validation prediction accuracy between 80–83% and leave-20%-out testing prediction accuracy between 80–82% as well as correctly predicting 84% of BBB+ compounds (n = 95) in an external database of drugs. Our data indicate that Shape Signatures descriptors can be used with SVM and these models may have utility for predicting blood–brain barrier permeation in drug discovery.",
"title": ""
},
{
"docid": "c5d74c69c443360d395a8371055ef3e2",
"text": "The supply of oxygen and nutrients and the disposal of metabolic waste in the organs depend strongly on how blood, especially red blood cells, flow through the microvascular network. Macromolecular plasma proteins such as fibrinogen cause red blood cells to form large aggregates, called rouleaux, which are usually assumed to be disaggregated in the circulation due to the shear forces present in bulk flow. This leads to the assumption that rouleaux formation is only relevant in the venule network and in arterioles at low shear rates or stasis. Thanks to an excellent agreement between combined experimental and numerical approaches, we show that despite the large shear rates present in microcapillaries, the presence of either fibrinogen or the synthetic polymer dextran leads to an enhanced formation of robust clusters of red blood cells, even at haematocrits as low as 1%. Robust aggregates are shown to exist in microcapillaries even for fibrinogen concentrations within the healthy physiological range. These persistent aggregates should strongly affect cell distribution and blood perfusion in the microvasculature, with putative implications for blood disorders even within apparently asymptomatic subjects.",
"title": ""
},
{
"docid": "abc1a53ea5e3d3fc7a4b45cbb64c6bca",
"text": "This paper proposes a method to measure the junction temperatures of insulated-gate bipolar transistors (IGBTs) during the converter operation for prototype evaluation. The IGBT short-circuit current is employed as the temperature-sensitive electrical parameter (TSEP). The calibration experiments show that the short-circuit current has an adequate temperature sensitivity of 0.35 A/°C. The parameter also has good selectivity and linearity, which makes it suitable to be used as a TSEP. Test circuit and hardware design are proposed for the IGBT junction temperature measurement in various power electronics dc-dc and ac-dc converter applications. By connecting a temperature measurement unit to the converter and giving a short-circuit pulse during the converter operation, the short-circuit current is measured, and the IGBT junction temperature can be derived from the calibration curve. The proposed temperature measurement method is a valuable tool for prototype evaluation and avoids the unnecessary safety margin regarding device operating temperatures, which is significant particularly for high-temperature/high-density converter applications.",
"title": ""
}
] |
scidocsrr
|
ef427326e58607a355a1439dbac0211f
|
Boosting the Robustness Verification of DNN by Identifying the Achilles's Heel
|
[
{
"docid": "a151954567e5f24a91d86b07a897888f",
"text": "In software testing, a set of test cases is constructed according to some predeened selection criteria. The software is then examined against these test cases. Three interesting observations have been made on the current artifacts of software testing. Firstly, an error-revealing test case is considered useful while a successful test case which does not reveal software errors is usually not further investigated. Whether these successful test cases still contain useful information for revealing software errors has not been properly studied. Secondly, no matter how extensive the testing has been conducted in the development phase, errors may still exist in the software 5. These errors, if left undetected, may eventually cause damage to the production system. The study of techniques for uncovering software errors in the production phase is seldom addressed in the literature. Thirdly, as indicated by Weyuker in 66, the availability of test oracles is pragmatically unattainable in most situations. However, the availability of test oracles is generally assumed in conventional software testing techniques. In this paper, we propose a novel test case selection technique that derives new test cases from the successful ones. The selection aims at revealing software errors that are possibly left undetected in successful test cases which may be generated using some existing strategies. As such, the proposed technique augments the eeectiveness of existing test selection strategies. The technique also helps uncover software errors in the production phase and can be used in the absence of test oracles.",
"title": ""
},
{
"docid": "17611b0521b69ad2b22eeadc10d6d793",
"text": "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%.In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.",
"title": ""
}
] |
[
{
"docid": "673fea40e5cb12b54cc296b1a2c98ddb",
"text": "Matrix completion is a rank minimization problem to recover a low-rank data matrix from a small subset of its entries. Since the matrix rank is nonconvex and discrete, many existing approaches approximate the matrix rank as the nuclear norm. However, the truncated nuclear norm is known to be a better approximation to the matrix rank than the nuclear norm, exploiting a priori target rank information about the problem in rank minimization. In this paper, we propose a computationally efficient truncated nuclear norm minimization algorithm for matrix completion, which we call TNNM-ALM. We reformulate the original optimization problem by introducing slack variables and considering noise in the observation. The central contribution of this paper is to solve it efficiently via the augmented Lagrange multiplier (ALM) method, where the optimization variables are updated by closed-form solutions. We apply the proposed TNNM-ALM algorithm to ghost-free high dynamic range imaging by exploiting the low-rank structure of irradiance maps from low dynamic range images. Experimental results on both synthetic and real visual data show that the proposed algorithm achieves significantly lower reconstruction errors and superior robustness against noise than the conventional approaches, while providing substantial improvement in speed, thereby applicable to a wide range of imaging applications.",
"title": ""
},
{
"docid": "c61f68104b2d058acb0d16c89e4b1454",
"text": "Recently, training with adversarial examples, which are generated by adding a small but worst-case perturbation on input examples, has improved the generalization performance of neural networks. In contrast to the biased individual inputs to enhance the generality, this paper introduces adversarial dropout, which is a minimal set of dropouts that maximize the divergence between 1) the training supervision and 2) the outputs from the network with the dropouts. The identified adversarial dropouts are used to automatically reconfigure the neural network in the training process, and we demonstrated that the simultaneous training on the original and the reconfigured network improves the generalization performance of supervised and semi-supervised learning tasks on MNIST, SVHN, and CIFAR-10. We analyzed the trained model to find the performance improvement reasons. We found that adversarial dropout increases the sparsity of neural networks more than the standard dropout. Finally, we also proved that adversarial dropout is a regularization term with a rank-valued hyper parameter that is different from a continuous-valued parameter to specify the strength of the regularization.",
"title": ""
},
{
"docid": "42c2e599dbbb00784e2a6837ebd17ade",
"text": "Several real-world classification problems are example-dependent cost-sensitive in nature, where the costs due to misclassification vary between examples. However, standard classification methods do not take these costs into account, and assume a constant cost of misclassification errors. State-of-the-art example-dependent cost-sensitive techniques only introduce the cost to the algorithm, either before or after training, therefore, leaving opportunities to investigate the potential impact of algorithms that take into account the real financial example-dependent costs during an algorithm training. In this paper, we propose an example-dependent cost-sensitive decision tree algorithm, by incorporating the different example-dependent costs into a new cost-based impurity measure and a new cost-based pruning criteria. Then, using three different databases, from three real-world applications: credit card fraud detection, credit scoring and direct marketing, we evaluate the proposed method. The results show that the proposed algorithm is the best performing method for all databases. Furthermore, when compared against a standard decision tree, our method builds significantly smaller trees in only a fifth of the time, while having a superior performance measured by cost savings, leading to a method that not only has more business-oriented results, but also a method that creates simpler models that are easier to analyze. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ec11d0b10af5507c18d918edb42a9ab8",
"text": "Traditional way of manual meter reading was not only waste of human and material resources, but also very inconvenient. Especially with the emergence of a number of high residential in recent years, this traditional way of water management was obviously inefficient. Cable automatic meter reading system is very vulnerable and it needs a heavy workload of construction wiring. In this paper, based on the study of existed water meters, a kind of design schema of wireless smart water meter was introduced. In the system, the main communication way is based on Zigbee technology. This kind of design schema is appropriate for the modern water management and the efficiency can be improved.",
"title": ""
},
{
"docid": "fd51405c809d617663d1520921645529",
"text": "As the conversions between the sensing element and interface in the feedback loop and forward path are nonlinear, harmonic distortions appear in the output spectrum, which will decrease the signal-to-noise and distortion ratio. Nonlinear distortions are critical for a high-resolution electromechanical sigma-delta (ΣΔ) modulator. However, there exists no detailed analysis approach to derive harmonic distortion results in the output signal for electromechanical ΣΔ modulators. In this paper, we employ a nonlinear op-amp dc gain model to derive the nonlinear displacement to voltage conversion in the forward path, and the nonlinear electrostatic feedback force on the proof mass is also computed. Based on a linear approximation of the modulator in the back end, the harmonic distortion model in the output spectrum of the proposed fifth-order electromechanical ΣΔ modulator is derived as a function of system parameters. The proposed nonlinear distortion models are verified by simulation results and experimental results.",
"title": ""
},
{
"docid": "fd2b1d2a4d44f0535ceb6602869ffe1c",
"text": "A conventional FCM algorithm does not fully utilize the spatial information in the image. In this paper, we present a fuzzy c-means (FCM) algorithm that incorporates spatial information into the membership function for clustering. The spatial function is the summation of the membership function in the neighborhood of each pixel under consideration. The advantages of the new method are the following: (1) it yields regions more homogeneous than those of other methods, (2) it reduces the spurious blobs, (3) it removes noisy spots, and (4) it is less sensitive to noise than other techniques. This technique is a powerful method for noisy image segmentation and works for both single and multiple-feature data with spatial information.",
"title": ""
},
{
"docid": "1cacfd4da5273166debad8a6c1b72754",
"text": "This article presents a paradigm case portrait of female romantic partners of heavy pornography users. Based on a sample of 100 personal letters, this portrait focuses on their often traumatic discovery of the pornography usage and the significance they attach to this usage for (a) their relationships, (b) their own worth and desirability, and (c) the character of their partners. Finally, we provide a number of therapeutic recommendations for helping these women to think and act more effectively in their very difficult circumstances.",
"title": ""
},
{
"docid": "66044816ca1af0198acd27d22e0e347e",
"text": "BACKGROUND\nThe Close Kinetic Chain Upper Extremity Stability Test (CKCUES test) is a low cost shoulder functional test that could be considered as a complementary and objective clinical outcome for shoulder performance evaluation. However, its reliability was tested only in recreational athletes' males and there are no studies comparing scores between sedentary and active samples. The purpose was to examine inter and intrasession reliability of CKCUES Test for samples of sedentary male and female with (SIS), for samples of sedentary healthy male and female, and for male and female samples of healthy upper extremity sport specific recreational athletes. Other purpose was to compare scores within sedentary and within recreational athletes samples of same gender.\n\n\nMETHODS\nA sample of 108 subjects with and without SIS was recruited. Subjects were tested twice, seven days apart. Each subject performed four test repetitions, with 45 seconds of rest between them. The last three repetitions were averaged and used to statistical analysis. Intraclass Correlation Coefficient ICC2,1 was used to assess intrasession reliability of number of touches score and ICC2,3 was used to assess intersession reliability of number of touches, normalized score, and power score. Test scores within groups of same gender also were compared. Measurement error was determined by calculating the Standard Error of the Measurement (SEM) and Minimum detectable change (MDC) for all scores.\n\n\nRESULTS\nThe CKCUES Test showed excellent intersession reliability for scores in all samples. Results also showed excellent intrasession reliability of number of touches for all samples. Scores were greater in active compared to sedentary, with exception of power score. All scores were greater in active compared to sedentary and SIS males and females. SEM ranged from 1.45 to 2.76 touches (based on a 95% CI) and MDC ranged from 2.05 to 3.91(based on a 95% CI) in subjects with and without SIS. At least three touches are needed to be considered a real improvement on CKCUES Test scores.\n\n\nCONCLUSION\nResults suggest CKCUES Test is a reliable tool to evaluate upper extremity functional performance for sedentary, for upper extremity sport specific recreational, and for sedentary males and females with SIS.",
"title": ""
},
{
"docid": "557c57b798d05565d49faf6299dea368",
"text": "Continuous mobile vision is limited by the inability to efficiently capture image frames and process vision features. This is largely due to the energy burden of analog readout circuitry, data traffic, and intensive computation. To promote efficiency, we shift early vision processing into the analog domain. This results in RedEye, an analog convolutional image sensor that performs layers of a convolutional neural network in the analog domain before quantization. We design RedEye to mitigate analog design complexity, using a modular column-parallel design to promote physical design reuse and algorithmic cyclic reuse. RedEye uses programmable mechanisms to admit noise for tunable energy reduction. Compared to conventional systems, RedEye reports an 85% reduction in sensor energy, 73% reduction in cloudlet-based system energy, and a 45% reduction in computation-based system energy.",
"title": ""
},
{
"docid": "7bfd3237b1a4c3c651b4c5389019f190",
"text": "Recent developments in web technologies including evolution of web standards, improvements in browser performance, and the emergence of free and open-source software (FOSS) libraries are driving a general shift from server-side to client-side web applications where a greater share of the computational load is transferred to the browser. Modern client-side approaches allow for improved user interfaces that rival traditional desktop software, as well as the ability to perform simulations and visualizations within the browser. We demonstrate the use of client-side technologies to create an interactive web application for a simulation model of biochemical oxygen demand and dissolved oxygen in rivers called the Webbased Interactive River Model (WIRM). We discuss the benefits, limitations and potential uses of client-side web applications, and provide suggestions for future research using new and upcoming web technologies such as offline access and local data storage to create more advanced client-side web applications for environmental simulation modeling. 2014 Elsevier Ltd. All rights reserved. Software availability Product Title: Web-based Interactive River Model (WIRM) Developer: Jeffrey D. Walker Contact Address: Dept. of Civil and Environmental Engineering, Tufts University, 200 College Ave, Medford, MA 02155 Contact E-mail: jeffrey.walker@tufts.edu Available Since: 2013 Programming Language: JavaScript, Python Availability: http://wirm.walkerjeff.com/ Cost: Free",
"title": ""
},
{
"docid": "7fcfe0535a0f7c645722e75648eb1bf3",
"text": "The performance of a differential <italic>LC</italic> oscillator can be enhanced by resonating the common mode of the circuit at twice the oscillation frequency. When this technique is correctly employed, <inline-formula> <tex-math notation=\"LaTeX\">$Q$ </tex-math></inline-formula>-degradation due to the triode operation of the differential pair is eliminated and flicker noise is nulled. Until recently, one or more tail inductors have been used to achieve this common-mode resonance. In this paper, we demonstrate that additional inductors are not strictly necessary by showing that common-mode resonance can be obtained using a single tank. We present an NMOS architecture that uses a single differential inductor and a CMOS design that uses a single transformer. Prototypes are presented that achieve figure-of-merits of 192 and 195 dBc/Hz, respectively.",
"title": ""
},
{
"docid": "0f17ef896aadebec6fcfc46a17a7f793",
"text": "Modern malware often hide the malicious portion of their program code by making it appear as data at compile-time and transforming it back into executable code at runtime. This obfuscation technique poses obstacles to researchers who want to understand the malicious behavior of new or unknown malware and to practitioners who want to create models of detection and methods of recovery. In this paper we propose a technique for automating the process of extracting the hidden-code bodies of this class of malware. Our approach is based on the observation that sequences of packed or hidden code in a malware instance can be made self-identifying when its runtime execution is checked against its static code model. In deriving our technique, we formally define the unpack-executing behavior that such malware exhibits and devise an algorithm for identifying and extracting its hidden-code. We also provide details of the implementation and evaluation of our extraction technique; the results from our experiments on several thousand malware binaries show our approach can be used to significantly reduce the time required to analyze such malware, and to improve the performance of malware detection tools.",
"title": ""
},
{
"docid": "a981db3aa149caec10b1824c82840782",
"text": "It has been suggested that the performance of a team is determined by the team members’ roles. An analysis of the performance of 342 individuals organised into 33 teams indicates that team roles characterised by creativity, co-ordination and cooperation are positively correlated with team performance. Members of developed teams exhibit certain performance enhancing characteristics and behaviours. Amongst the more developed teams there is a positive relationship between Specialist Role characteristics and team performance. While the characteristics associated with the Coordinator Role are also positively correlated with performance, these can impede the performance of less developed teams.",
"title": ""
},
{
"docid": "71cac5680dafbc3c56dbfffa4472b67a",
"text": "Three-dimensional printing has significant potential as a fabrication method in creating scaffolds for tissue engineering. The applications of 3D printing in the field of regenerative medicine and tissue engineering are limited by the variety of biomaterials that can be used in this technology. Many researchers have developed novel biomaterials and compositions to enable their use in 3D printing methods. The advantages of fabricating scaffolds using 3D printing are numerous, including the ability to create complex geometries, porosities, co-culture of multiple cells, and incorporate growth factors. In this review, recently-developed biomaterials for different tissues are discussed. Biomaterials used in 3D printing are categorized into ceramics, polymers, and composites. Due to the nature of 3D printing methods, most of the ceramics are combined with polymers to enhance their printability. Polymer-based biomaterials are 3D printed mostly using extrusion-based printing and have a broader range of applications in regenerative medicine. The goal of tissue engineering is to fabricate functional and viable organs and, to achieve this, multiple biomaterials and fabrication methods need to be researched.",
"title": ""
},
{
"docid": "d94a168efb4b884a608f499ffb794496",
"text": "We describe how upper limb amputees can be made to experience a rubber hand as part of their own body. This was accomplished by applying synchronous touches to the stump, which was out of view, and to the index finger of a rubber hand, placed in full view (26 cm medial to the stump). This elicited an illusion of sensing touch on the artificial hand, rather than on the stump and a feeling of ownership of the rubber hand developed. This effect was supported by quantitative subjective reports in the form of questionnaires, behavioural data in the form of misreaching in a pointing task when asked to localize the position of the touch, and physiological evidence obtained by skin conductance responses when threatening the hand prosthesis. Our findings outline a simple method for transferring tactile sensations from the stump to a prosthetic limb by tricking the brain, thereby making an important contribution to the field of neuroprosthetics where a major goal is to develop artificial limbs that feel like a real parts of the body.",
"title": ""
},
{
"docid": "4e69f2a69c1063e15b85350eeafc868d",
"text": "Autism spectrum disorders (ASD) are largely characterized by deficits in imitation, pragmatic language, theory of mind, and empathy. Previous research has suggested that a dysfunctional mirror neuron system may explain the pathology observed in ASD. Because EEG oscillations in the mu frequency (8-13 Hz) over sensorimotor cortex are thought to reflect mirror neuron activity, one method for testing the integrity of this system is to measure mu responsiveness to actual and observed movement. It has been established that mu power is reduced (mu suppression) in typically developing individuals both when they perform actions and when they observe others performing actions, reflecting an observation/execution system which may play a critical role in the ability to understand and imitate others' behaviors. This study investigated whether individuals with ASD show a dysfunction in this system, given their behavioral impairments in understanding and responding appropriately to others' behaviors. Mu wave suppression was measured in ten high-functioning individuals with ASD and ten age- and gender-matched control subjects while watching videos of (1) a moving hand, (2) a bouncing ball, and (3) visual noise, or (4) moving their own hand. Control subjects showed significant mu suppression to both self and observed hand movement. The ASD group showed significant mu suppression to self-performed hand movements but not to observed hand movements. These results support the hypothesis of a dysfunctional mirror neuron system in high-functioning individuals with ASD.",
"title": ""
},
{
"docid": "24e943940f1bd1328dba1de2e15d3137",
"text": "The use of external databases to generate training data, also known as Distant Supervision, has become an effective way to train supervised relation extractors but this approach inherently suffers from noise. In this paper we propose a method for noise reduction in distantly supervised training data, using a discriminative classifier and semantic similarity between the contexts of the training examples. We describe an active learning strategy which exploits hierarchical clustering of the candidate training samples. To further improve the effectiveness of this approach, we study the use of several methods for dimensionality reduction of the training samples. We find that semantic clustering of training data combined with cluster-based active learning allows filtering the training data, hence facilitating the creation of a clean training set for relation extraction, at a reduced manual labeling cost.",
"title": ""
},
{
"docid": "dc1bd4603d9673fb4cd0fd9d7b0b6952",
"text": "We investigate the contribution of option markets to price discovery, using a modification of Hasbrouck’s (1995) “information share” approach. Based on five years of stock and options data for 60 firms, we estimate the option market’s contribution to price discovery to be about 17 percent on average. Option market price discovery is related to trading volume and spreads in both markets, and stock volatility. Price discovery across option strike prices is related to leverage, trading volume, and spreads. Our results are consistent with theoretical arguments that informed investors trade in both stock and option markets, suggesting an important informational role for options. ∗Chakravarty is from Purdue University; Gulen is from the Pamplin College of Business, Virginia Tech; and Mayhew is from the Terry College of Business, University of Georgia and the U.S. Securities and Exchange Commission. We would like to thank the Institute for Quantitative Research in Finance (the Q-Group) for funding this research. Gulen acknowledges funding from a Virginia Tech summer grant and Mayhew acknowledges funding from the TerrySanford Research Grant at the Terry College of Business and from the University of Georgia Research Foundation. We would like to thank the editor, Rick Green; Michael Cliff; Joel Hasbrouck; Raman Kumar; an anonymous referee; and seminar participants at Purdue University, the University of Georgia, Texas Christian University, the University of South Carolina, the Securities and Exchange Commission, the University of Delaware, George Washington University, the Commodity Futures Trading Commission, the Batten Conference at the College of William and Mary, the 2002 Q-Group Conference, and the 2003 INQUIRE conference. The U.S. Securities and Exchange Commission disclaims responsibility for any private publication or statement of any SEC employee or Commissioner. This study expresses the author’s views and does not necessarily reflect those of the Commission, the Commissioners, or other members of the staff.",
"title": ""
},
{
"docid": "b908987c5bae597683f177beb2bba896",
"text": "This paper presents a novel task of cross-language authorship attribution (CLAA), an extension of authorship attribution task to multilingual settings: given data labelled with authors in language X , the objective is to determine the author of a document written in language Y , where X 6= Y . We propose a number of cross-language stylometric features for the task of CLAA, such as those based on sentiment and emotional markers. We also explore an approach based on machine translation (MT) with both lexical and cross-language features. We experimentally show that MT could be used as a starting point to CLAA, since it allows good attribution accuracy to be achieved. The cross-language features provide acceptable accuracy while using jointly with MT, though do not outperform lexical",
"title": ""
}
] |
scidocsrr
|
9682be37139cd83d4b18eb6222e43533
|
Capacitive Biopotential Measurement for Electrophysiological Signal Acquisition: A Review
|
[
{
"docid": "991ab90963355f16aa2a83655577ba54",
"text": "Highly durable, flexible, and even washable multilayer electronic circuitry can be constructed on textile substrates, using conductive yarns and suitably packaged components. In this paper we describe the development of e-broidery (electronic embroidery, i.e., the patterning of conductive textiles by numerically controlled sewing or weaving processes) as a means of creating computationally active textiles. We compare textiles to existing flexible circuit substrates with regard to durability, conformability, and wearability. We also report on: some unique applications enabled by our work; the construction of sensors and user interface elements in textiles; and a complete process for creating flexible multilayer circuits on fabric substrates. This process maintains close compatibility with existing electronic components and design tools, while optimizing design techniques and component packages for use in textiles. E veryone wears clothing. It conveys a sense of the wearer's identity, provides protection from the environment, and supplies a convenient way to carry all the paraphernalia of daily life. Of course, clothing is made from textiles, which are themselves among the first composite materials engineered by humans. Textiles have mechanical, aesthetic, and material advantages that make them ubiquitous in both society and industry. The woven structure of textiles and spun fibers makes them durable, washable, and conformal, while their composite nature affords tremendous variety in their texture, for both visual and tactile senses. Sadly, not everyone wears a computer, although there is presently a great deal of interest in \" wear-able computing. \" 1 Wearable computing may be seen as the result of a design philosophy that integrates embedded computation and sensing into everyday life to give users continuous access to the capabilities of personal computing. Ideally, computers would be as convenient, durable, and comfortable as clothing, but most wearable computers still take an awkward form that is dictated by the materials and processes traditionally used in electronic fabrication. The design principle of packaging electronics in hard plastic boxes (no matter how small) is pervasive, and alternatives are difficult to imagine. As a result, most wearable computing equipment is not truly wearable except in the sense that it fits into a pocket or straps onto the body. What is needed is a way to integrate technology directly into textiles and clothing. Furthermore, textile-based computing is not limited to applications in wearable computing; in fact, it is broadly applicable to ubiquitous computing, allowing the integration of interactive elements into furniture and decor in general. In …",
"title": ""
},
{
"docid": "c2482e67cb4db7ee888b56d952ce76c2",
"text": "To obtain maximum unobtrusiveness with sensors for monitoring health parameters on the human body, two technical solutions are combined. First we propose contactless sensors for capacitive electromyography measurements. Secondly, the sensors are integrated into textile, so complete fusion with a wearable garment is enabled. We are presenting the first successful measurements with such sensors. Keywords— surface electromyography, capacitive transducer, embroidery, textile electronics, interconnect",
"title": ""
},
{
"docid": "4a5d4db892145324597bd8d6b98c009f",
"text": "Advances in wireless communication technologies, such as wearable and implantable biosensors, along with recent developments in the embedded computing area are enabling the design, development, and implementation of body area networks. This class of networks is paving the way for the deployment of innovative healthcare monitoring applications. In the past few years, much of the research in the area of body area networks has focused on issues related to wireless sensor designs, sensor miniaturization, low-power sensor circuitry, signal processing, and communications protocols. In this paper, we present an overview of body area networks, and a discussion of BAN communications types and their related issues. We provide a detailed investigation of sensor devices, physical layer, data link layer, and radio technology aspects of BAN research. We also present a taxonomy of BAN projects that have been introduced/proposed to date. Finally, we highlight some of the design challenges and open issues that still need to be addressed to make BANs truly ubiquitous for a wide range of applications. M. Chen · S. Gonzalez · H. Cao · V. C. M. Leung Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC, Canada M. Chen School of Computer Science and Engineering, Seoul National University, Seoul, South Korea A. Vasilakos (B) Department of Computer and Telecommunications Engineering, University of Western Macedonia, Macedonia, Greece e-mail: vasilako@ath.forthnet.gr",
"title": ""
}
] |
[
{
"docid": "eb7c34c4959c39acb18fc5920ff73dba",
"text": "Acoustic evidence suggests that contemporary Seoul Korean may be developing a tonal system, which is arising in the context of a nearly completed change in how speakers use voice onset time (VOT) to mark the language’s distinction among tense, lax and aspirated stops.Data from 36 native speakers of varying ages indicate that while VOT for tense stops has not changed since the 1960s, VOT differences between lax and aspirated stops have decreased, in some cases to the point of complete overlap. Concurrently, the mean F0 for words beginning with lax stops is significantly lower than the mean F0 for comparable words beginning with tense or aspirated stops. Hence the underlying contrast between lax and aspirated stops is maintained by younger speakers, but is phonetically manifested in terms of differentiated tonal melodies: laryngeally unmarked (lax) stops trigger the introduction of a default L tone, while laryngeally marked stops (aspirated and tense) introduce H, triggered by a feature specification for [stiff].",
"title": ""
},
{
"docid": "c7539441ff7076fa32074ed0ed314e38",
"text": "Fairness-aware learning is increasingly important in data mining. Discrimination prevention aims to prevent discrimination in the training data before it is used to conduct predictive analysis. In this paper, we focus on fair data generation that ensures the generated data is discrimination free. Inspired by generative adversarial networks (GAN), we present fairness-aware generative adversarial networks, called FairGAN, which are able to learn a generator producing fair data and also preserving good data utility. Compared with the naive fair data generation models, FairGAN further ensures the classifiers which are trained on generated data can achieve fair classification on real data. Experiments on a real dataset show the effectiveness of FairGAN.",
"title": ""
},
{
"docid": "66ab561342d6f0c80a0eb8d4c2b19a97",
"text": "Impedance spectroscopy of biological cells has been used to monitor cell status, e.g. cell proliferation, viability, etc. It is also a fundamental method for the study of the electrical properties of cells which has been utilised for cell identification in investigations of cell behaviour in the presence of an applied electric field, e.g. electroporation. There are two standard methods for impedance measurement on cells. The use of microelectrodes for single cell impedance measurement is one method to realise the measurement, but the variations between individual cells introduce significant measurement errors. Another method to measure electrical properties is by the measurement of cell suspensions, i.e. a group of cells within a culture medium or buffer. This paper presents an investigation of the impedance of normal and cancerous breast cells in suspension using the Maxwell-Wagner mixture theory to analyse the results and extract the electrical parameters of a single cell. The results show that normal and different stages of cancer breast cells can be distinguished by the conductivity presented by each cell.",
"title": ""
},
{
"docid": "350137bf3c493b23aa6d355df946440f",
"text": "Given the increasing popularity of wearable devices, this paper explores the potential to use wearables for steering and driver tracking. Such capability would enable novel classes of mobile safety applications without relying on information or sensors in the vehicle. In particular, we study how wrist-mounted inertial sensors, such as those in smart watches and fitness trackers, can track steering wheel usage and angle. In particular, tracking steering wheel usage and turning angle provide fundamental techniques to improve driving detection, enhance vehicle motion tracking by mobile devices and help identify unsafe driving. The approach relies on motion features that allow distinguishing steering from other confounding hand movements. Once steering wheel usage is detected, it further uses wrist rotation measurements to infer steering wheel turning angles. Our on-road experiments show that the technique is 99% accurate in detecting steering wheel usage and can estimate turning angles with an average error within 3.4 degrees.",
"title": ""
},
{
"docid": "2bd9f317404d556b5967e6dcb6832b1b",
"text": "Ischemic Heart Disease (IHD) and stroke are statistically the leading causes of death world-wide. Both diseases deal with various types of cardiac arrhythmias, e.g. premature ventricular contractions (PVCs), ventricular and supra-ventricular tachycardia, atrial fibrillation. For monitoring and detecting such an irregular heart rhythm accurately, we are now developing a very cost-effective ECG monitor, which is implemented in 8-bit MCU with an efficient QRS detector using steep-slope algorithm and arrhythmia detection algorithm using a simple heart rate variability (HRV) parameter. This work shows the results of evaluating the real-time steep-slope algorithm using MIT-BIH Arrhythmia Database. The performance of this algorithm has 99.72% of sensitivity and 99.19% of positive predictivity. We then show the preliminary results of arrhythmia detection using various types of normal and abnormal ECGs from an ECG simulator. The result is, 18 of 20 ECG test signals were correctly detected.",
"title": ""
},
{
"docid": "fc25e19d03a6686a0829a823d97cedbe",
"text": "OBJECTIVE\nThe problem of identifying, in advance, the most effective treatment agent for various psychiatric conditions remains an elusive goal. To address this challenge, we investigate the performance of the proposed machine learning (ML) methodology (based on the pre-treatment electroencephalogram (EEG)) for prediction of response to treatment with a selective serotonin reuptake inhibitor (SSRI) medication in subjects suffering from major depressive disorder (MDD).\n\n\nMETHODS\nA relatively small number of most discriminating features are selected from a large group of candidate features extracted from the subject's pre-treatment EEG, using a machine learning procedure for feature selection. The selected features are fed into a classifier, which was realized as a mixture of factor analysis (MFA) model, whose output is the predicted response in the form of a likelihood value. This likelihood indicates the extent to which the subject belongs to the responder vs. non-responder classes. The overall method was evaluated using a \"leave-n-out\" randomized permutation cross-validation procedure.\n\n\nRESULTS\nA list of discriminating EEG biomarkers (features) was found. The specificity of the proposed method is 80.9% while sensitivity is 94.9%, for an overall prediction accuracy of 87.9%. There is a 98.76% confidence that the estimated prediction rate is within the interval [75%, 100%].\n\n\nCONCLUSIONS\nThese results indicate that the proposed ML method holds considerable promise in predicting the efficacy of SSRI antidepressant therapy for MDD, based on a simple and cost-effective pre-treatment EEG.\n\n\nSIGNIFICANCE\nThe proposed approach offers the potential to improve the treatment of major depression and to reduce health care costs.",
"title": ""
},
{
"docid": "5229fb13c66ca8a2b079f8fe46bb9848",
"text": "We put forth a lookup-table-based modular reduction method which partitions the binary string of an integer to be reduced into blocks according to its runs. Its complexity depends on the amount of runs in the binary string. We show that the new reduction is almost twice as fast as the popular Barrett’s reduction, and provide a thorough complexity analysis of the method.",
"title": ""
},
{
"docid": "0116f7792bfcd4d675056628544801fb",
"text": "Over the last few years, Cloud storage systems and so-called NoSQL datastores have found widespread adoption. In contrast to traditional databases, these storage systems typically sacrifice consistency in favor of latency and availability as mandated by the CAP theorem, so that they only guarantee eventual consistency. Existing approaches to benchmark these storage systems typically omit the consistency dimension or did not investigate eventuality of consistency guarantees. In this work we present a novel approach to benchmark staleness in distributed datastores and use the approach to evaluate Amazon's Simple Storage Service (S3). We report on our unexpected findings.",
"title": ""
},
{
"docid": "38bdfe23b1e62cd162ed18d741f9ba05",
"text": "The authors present results of 4 studies that seek to determine the discriminant and incremental validity of the 3 most widely studied traits in psychology-self-esteem, neuroticism, and locus of control-along with a 4th, closely related trait-generalized self-efficacy. Meta-analytic results indicated that measures of the 4 traits were strongly related. Results also demonstrated that a single factor explained the relationships among measures of the 4 traits. The 4 trait measures display relatively poor discriminant validity, and each accounted for little incremental variance in predicting external criteria relative to the higher order construct. In light of these results, the authors suggest that measures purporting to assess self-esteem, locus of control, neuroticism, and generalized self-efficacy may be markers of the same higher order concept.",
"title": ""
},
{
"docid": "bcda77a0de7423a2a4331ff87ce9e969",
"text": "Because of the increasingly competitive nature of the computer manufacturing industry, Compaq Computer Corporation has made some trend-setting changes in the way it does business. One of these changes is the extension of Compaq's call-logging sy ste problem-resolution component that assists customer support personnel in determining the resolution to a customer's questions and problems. Recently, Compaq extended its customer service to provide not only dealer support but also direct end user support; it is also accepting ownership of any Compaq customer's problems in a Banyan, Mi-crosoft, Novell, or SCO UNIX operating environment. One of the tools that makes this feat possible is SMART (support management automated reasoning technology). SMART is part of a Compaq strategy to increase the effectiveness of the customer support staff and reduce overall cost to the organization by retaining problem-solving knowledge and making it available to the entire support staff at the point it is needed.",
"title": ""
},
{
"docid": "cec3d18ea5bd7eba435e178e2fcb38b0",
"text": "The synthesis of three-degree-of-freedom planar parallel manipulators is performed using a genetic algorithm. The architecture of a manipulator and its position and orientation with respect to a prescribed workspace are determined. The architectural parameters are optimized so that the manipulator’s constantorientation workspace is as close as possible to a prescribed workspace. The manipulator’s workspace is discretized and its dexterity is computed as a global property of the manipulator. An analytical expression of the singularity loci (local null dexterity) can be obtained from the Jacobian matrix determinant, and its intersection with the manipulator’s workspace may be verified and avoided. Results are shown for different conditions. First, the manipulators’ workspaces are optimized for a prescribed workspace, without considering whether the singularity loci intersect it or not. Then the same type of optimization is performed, taking intersections with the singularity loci into account. In the following results, the optimization of the manipulator’s dexterity is also included in an objective function, along with the workspace optimization and the avoidance of singularity loci. Results show that the end-effector’s location has a significant effect on the manipulator’s dexterity. ©2002 John Wiley & Sons, Inc.",
"title": ""
},
{
"docid": "aeadbf476331a67bec51d5d6fb6cc80b",
"text": "Gamification, an emerging idea for using game-design elements and principles to make everyday tasks more engaging, is permeating many different types of information systems. Excitement surrounding gamification results from its many potential organizational benefits. However, little research and design guidelines exist regarding gamified information systems. We therefore write this commentary to call upon information systems scholars to investigate the design and use of gamified information systems from a variety of disciplinary perspectives and theories, including behavioral economics, psychology, social psychology, information systems, etc. We first explicate the idea of gamified information systems, provide real-world examples of successful and unsuccessful systems, and based on a synthesis of the available literature, present a taxonomy of gamification design elements. We then develop a framework for research and design: its main theme is to create meaningful engagement for users, that is, gamified information systems should be designed to address the dual goals of instrumental and experiential outcomes. Using this framework, we develop a set of design principles and research questions, using a running case to illustrate some of our ideas. We conclude with a summary of opportunities for IS researchers to extend our knowledge of gamified information systems, and at the same time, advance",
"title": ""
},
{
"docid": "c11e1e156835d98707c383711f4e3953",
"text": "We present an approach for automatically generating provably correct abstractions from C source code that are useful for practical implementation verification. The abstractions are easier for a human verification engineer to reason about than the implementation and increase the productivity of interactive code proof. We guarantee soundness by automatically generating proofs that the abstractions are correct.\n In particular, we show two key abstractions that are critical for verifying systems-level C code: automatically turning potentially overflowing machine-word arithmetic into ideal integers, and transforming low-level C pointer reasoning into separate abstract heaps. Previous work carrying out such transformations has either done so using unverified translations, or required significant proof engineering effort.\n We implement these abstractions in an existing proof-producing specification transformation framework named AutoCorres, developed in Isabelle/HOL, and demonstrate its effectiveness in a number of case studies. We show scalability on multiple OS microkernels, and we show how our changes to AutoCorres improve productivity for total correctness by porting an existing high-level verification of the Schorr-Waite algorithm to a low-level C implementation with minimal effort.",
"title": ""
},
{
"docid": "dae40fa32526bf965bad70f98eb51bb7",
"text": "Weight pruning methods for deep neural networks (DNNs) have been investigated recently, but prior work in this area is mainly heuristic, iterative pruning, thereby lacking guarantees on the weight reduction ratio and convergence time. To mitigate these limitations, we present a systematic weight pruning framework of DNNs using the alternating direction method of multipliers (ADMM). We first formulate the weight pruning problem of DNNs as a nonconvex optimization problem with combinatorial constraints specifying the sparsity requirements, and then adopt the ADMM framework for systematic weight pruning. By using ADMM, the original nonconvex optimization problem is decomposed into two subproblems that are solved iteratively. One of these subproblems can be solved using stochastic gradient descent, the other can be solved analytically. Besides, our method achieves a fast convergence rate. The weight pruning results are very promising and consistently outperform the prior work. On the LeNet-5 model for the MNIST data set, we achieve 71.2× weight reduction without accuracy loss. On the AlexNet model for the ImageNet data set, we achieve 21× weight reduction without accuracy loss. When we focus on the convolutional layer pruning for computation reductions, we can reduce the total computation by five times compared with the prior work (achieving a total of 13.4× weight reduction in convolutional layers). Our models and codes are released at https://github.com/KaiqiZhang/admm-pruning.",
"title": ""
},
{
"docid": "630901f1a1b25a5a2af65b566505de65",
"text": "In many complex robot applications, such as grasping and manipulation, it is difficult to program desired task solutions beforehand, as robots are within an uncertain and dynamic environment. In such cases, learning tasks from experience can be a useful alternative. To obtain a sound learning and generalization performance, machine learning, especially, reinforcement learning, usually requires sufficient data. However, in cases where only little data is available for learning, due to system constraints and practical issues, reinforcement learning can act suboptimally. In this paper, we investigate how model-based reinforcement learning, in particular the probabilistic inference for learning control method (Pilco), can be tailored to cope with the case of sparse data to speed up learning. The basic idea is to include further prior knowledge into the learning process. As Pilco is built on the probabilistic Gaussian processes framework, additional system knowledge can be incorporated by defining appropriate prior distributions, e.g. a linear mean Gaussian prior. The resulting Pilco formulation remains in closed form and analytically tractable. The proposed approach is evaluated in simulation as well as on a physical robot, the Festo Robotino XT. For the robot evaluation, we employ the approach for learning an object pick-up task. The results show that by including prior knowledge, policy learning can be sped up in presence of sparse data.",
"title": ""
},
{
"docid": "0991b582ad9fcc495eb534ebffe3b5f8",
"text": "A computationally cheap extension from single-microphone acoustic echo cancellation (AEC) to multi-microphone AEC is presented for the case of a single loudspeaker. It employs the idea of common-acoustical-pole and zero modeling of room transfer functions (RTFs). The RTF models used for multi-microphone AEC share a fixed common denominator polynomial, which is calculated off-line by means of a multi-channel warped linear prediction. By using the common denominator polynomial as a prefilter, only the numerator polynomial has to be estimated recursively for each microphone, hence adapting to changes in the RTFs. This approach allows to decrease the number of numerator coefficients by one order of magnitude for each microphone compared with all-zero modeling. In a first configuration, the prefiltering is done on the adaptive filter signal, hence achieving a pole-zero model of the RTF in the AEC. In a second configuration, the (inverse) prefiltering is done on the loudspeaker signal, hence achieving a dereverberation effect, in addition to AEC, on the microphone signals.",
"title": ""
},
{
"docid": "8ce3fc72fa132b8baeff35035354d194",
"text": "Raman spectroscopy is a molecular vibrational spectroscopic technique that is capable of optically probing the biomolecular changes associated with diseased transformation. The purpose of this study was to explore near-infrared (NIR) Raman spectroscopy for identifying dysplasia from normal gastric mucosa tissue. A rapid-acquisition dispersive-type NIR Raman system was utilised for tissue Raman spectroscopic measurements at 785 nm laser excitation. A total of 76 gastric tissue samples obtained from 44 patients who underwent endoscopy investigation or gastrectomy operation were used in this study. The histopathological examinations showed that 55 tissue specimens were normal and 21 were dysplasia. Both the empirical approach and multivariate statistical techniques, including principal components analysis (PCA), and linear discriminant analysis (LDA), together with the leave-one-sample-out cross-validation method, were employed to develop effective diagnostic algorithms for classification of Raman spectra between normal and dysplastic gastric tissues. High-quality Raman spectra in the range of 800–1800 cm−1 can be acquired from gastric tissue within 5 s. There are specific spectral differences in Raman spectra between normal and dysplasia tissue, particularly in the spectral ranges of 1200–1500 cm−1 and 1600–1800 cm−1, which contained signals related to amide III and amide I of proteins, CH3CH2 twisting of proteins/nucleic acids, and the C=C stretching mode of phospholipids, respectively. The empirical diagnostic algorithm based on the ratio of the Raman peak intensity at 875 cm−1 to the peak intensity at 1450 cm−1 gave the diagnostic sensitivity of 85.7% and specificity of 80.0%, whereas the diagnostic algorithms based on PCA-LDA yielded the diagnostic sensitivity of 95.2% and specificity 90.9% for separating dysplasia from normal gastric tissue. Receiver operating characteristic (ROC) curves further confirmed that the most effective diagnostic algorithm can be derived from the PCA-LDA technique. Therefore, NIR Raman spectroscopy in conjunction with multivariate statistical technique has potential for rapid diagnosis of dysplasia in the stomach based on the optical evaluation of spectral features of biomolecules.",
"title": ""
},
{
"docid": "0df3d30837edd0e7809ed77743a848db",
"text": "Many language processing tasks can be reduced to breaking the text into segments with prescribed properties. Such tasks include sentence splitting, tokenization, named-entity extraction, and chunking. We present a new model of text segmentation based on ideas from multilabel classification. Using this model, we can naturally represent segmentation problems involving overlapping and non-contiguous segments. We evaluate the model on entity extraction and noun-phrase chunking and show that it is more accurate for overlapping and non-contiguous segments, but it still performs well on simpler data sets for which sequential tagging has been the best method.",
"title": ""
},
{
"docid": "fb4d926254409df9d212b834d492271f",
"text": "Restrictive dermopathy (RD) is a rare, fatal, and genetically heterogeneous laminopathy with a predominant autosomal recessive heredity pattern. The phenotype can be caused by mutations in either LMNA (primary laminopathy) or ZMPSTE24 (secondary laminopathy) genes but mostly by homozygous or compound heterozygous ZMPSTE24 mutations. Clinicopathologic findings are unique, allowing a specific diagnosis in most cases. We describe a premature newborn girl of non-consanguineous parents who presented a rigid, translucent and tightly adherent skin, dysmorphic facies, multiple joint contractures and radiological abnormalities. The overall clinical, radiological, histological, and ultrastructural features were typical of restrictive dermopathy. Molecular genetic analysis revealed a homozygous ZMPSTE24 mutation (c.1085_1086insT). Parents and sister were heterozygous asymptomatic carriers. We conclude that RD is a relatively easy and consistent clinical and pathological diagnosis. Despite recent advances in our understanding of RD, the pathogenetic mechanisms of the disease are not entirely clarified. Recognition of RD and molecular genetic diagnosis are important to define the prognosis of an affected child and for recommending genetic counseling to affected families. However, the outcome for a live born patient in the neonatal period is always fatal.",
"title": ""
},
{
"docid": "0dc0815505f065472b3929792de638b4",
"text": "Our aim was to comprehensively validate the 1-min sit-to-stand (STS) test in chronic obstructive pulmonary disease (COPD) patients and explore the physiological response to the test.We used data from two longitudinal studies of COPD patients who completed inpatient pulmonary rehabilitation programmes. We collected 1-min STS test, 6-min walk test (6MWT), health-related quality of life, dyspnoea and exercise cardiorespiratory data at admission and discharge. We assessed the learning effect, test-retest reliability, construct validity, responsiveness and minimal important difference of the 1-min STS test.In both studies (n=52 and n=203) the 1-min STS test was strongly correlated with the 6MWT at admission (r=0.59 and 0.64, respectively) and discharge (r=0.67 and 0.68, respectively). Intraclass correlation coefficients (95% CI) between 1-min STS tests were 0.93 (0.83-0.97) for learning effect and 0.99 (0.97-1.00) for reliability. Standardised response means (95% CI) were 0.87 (0.58-1.16) and 0.91 (0.78-1.07). The estimated minimal important difference was three repetitions. End-exercise oxygen consumption, carbon dioxide output, ventilation, breathing frequency and heart rate were similar in the 1-min STS test and 6MWT.The 1-min STS test is a reliable, valid and responsive test for measuring functional exercise capacity in COPD patients and elicited a physiological response comparable to that of the 6MWT.",
"title": ""
}
] |
scidocsrr
|
1e68530f79ccd54495b8f842ea675cd3
|
Feasibility study of mobile phone WiFi detection in aerial search and rescue operations
|
[
{
"docid": "e5a9886927ce33ddd8a0c9a1273c297f",
"text": "Recent advances in the field of Unmanned Aerial Vehicles (UAVs) make flying robots suitable platforms for carrying sensors and computer systems capable of performing advanced tasks. This paper presents a technique which allows detecting humans at a high frame rate on standard hardware onboard an autonomous UAV in a real-world outdoor environment using thermal and color imagery. Detected human positions are geolocated and a map of points of interest is built. Such a saliency map can, for example, be used to plan medical supply delivery during a disaster relief effort. The technique has been implemented and tested on-board the UAVTech1 autonomous unmanned helicopter platform as a part of a complete autonomous mission. The results of flight- tests are presented and performance and limitations of the technique are discussed.",
"title": ""
}
] |
[
{
"docid": "5eab71f546a7dc8bae157a0ca4dd7444",
"text": "We introduce a new usability inspection method called HED (heuristic evaluation during demonstrations) for measuring and comparing usability of competing complex IT systems in public procurement. The method presented enhances traditional heuristic evaluation to include the use context, comprehensive view of the system, and reveals missing functionality by using user scenarios and demonstrations. HED also quantifies the results in a comparable way. We present findings from a real-life validation of the method in a large-scale procurement project of a healthcare and social welfare information system. We analyze and compare the performance of HED to other usability evaluation methods used in procurement. Based on the analysis HED can be used to evaluate the level of usability of an IT system during procurement correctly, comprehensively and efficiently.",
"title": ""
},
{
"docid": "51d15ba34f93e0b589d4039226ad2d19",
"text": "Botnet phenomenon in smartphones is evolving with the proliferation in mobile phone technologies after leaving imperative impact on personal computers. It refers to the network of computers, laptops, mobile devices or tablets which is remotely controlled by the cybercriminals to initiate various distributed coordinated attacks including spam emails, ad-click fraud, Bitcoin mining, Distributed Denial of Service (DDoS), disseminating other malwares and much more. Likewise traditional PC based botnet, Mobile botnets have the same operational impact except the target audience is particular to smartphone users. Therefore, it is import to uncover this security issue prior to its widespread adaptation. We propose SMARTbot, a novel dynamic analysis framework augmented with machine learning techniques to automatically detect botnet binaries from malicious corpus. SMARTbot is a component based off-device behavioral analysis framework which can generate mobile botnet learning model by inducing Artificial Neural Networks' back-propagation method. Moreover, this framework can detect mobile botnet binaries with remarkable accuracy even in case of obfuscated program code. The results conclude that, a classifier model based on simple logistic regression outperform other machine learning classifier for botnet apps' detection, i.e 99.49% accuracy is achieved. Further, from manual inspection of botnet dataset we have extracted interesting trends in those applications. As an outcome of this research, a mobile botnet dataset is devised which will become the benchmark for future studies.",
"title": ""
},
{
"docid": "f36b101aa059792e21281bff8157568f",
"text": "Many research projects oriented on control mechanisms of virtual agents in videogames have emerged in recent years. However, this boost has not been accompanied with the emergence of toolkits supporting development of these projects, slowing down the progress in the field. Here, we present Pogamut 3, an open source platform for rapid development of behaviour for virtual agents embodied in a 3D environment of the Unreal Tournament 2004 videogame. Pogamut 3 is designed to support research as well as educational projects. The paper also briefly touches extensions of Pogamut 3; the ACT-R integration, the emotional model ALMA integration, support for control of avatars at the level of gestures, and a toolkit for developing educational scenarios concerning orientation in urban areas. These extensions make Pogamut 3 applicable beyond the domain of computer games.",
"title": ""
},
{
"docid": "628c8b906e3db854ea92c021bb274a61",
"text": "Taxi demand prediction is an important building block to enabling intelligent transportation systems in a smart city. An accurate prediction model can help the city pre-allocate resources to meet travel demand and to reduce empty taxis on streets which waste energy and worsen the traffic congestion. With the increasing popularity of taxi requesting services such as Uber and Didi Chuxing (in China), we are able to collect large-scale taxi demand data continuously. How to utilize such big data to improve the demand prediction is an interesting and critical real-world problem. Traditional demand prediction methods mostly rely on time series forecasting techniques, which fail to model the complex non-linear spatial and temporal relations. Recent advances in deep learning have shown superior performance on traditionally challenging tasks such as image classification by learning the complex features and correlations from largescale data. This breakthrough has inspired researchers to explore deep learning techniques on traffic prediction problems. However, existing methods on traffic prediction have only considered spatial relation (e.g., using CNN) or temporal relation (e.g., using LSTM) independently. We propose a Deep Multi-View Spatial-Temporal Network (DMVST-Net) framework to model both spatial and temporal relations. Specifically, our proposed model consists of three views: temporal view (modeling correlations between future demand values with near time points via LSTM), spatial view (modeling local spatial correlation via local CNN), and semantic view (modeling correlations among regions sharing similar temporal patterns). Experiments on large-scale real taxi demand data demonstrate effectiveness of our approach over state-ofthe-art methods.",
"title": ""
},
{
"docid": "025e76755193277b2ea55d06d4f22d03",
"text": "Bioprinting technology shows potential in tissue engineering for the fabrication of scaffolds, cells, tissues and organs reproducibly and with high accuracy. Bioprinting technologies are mainly divided into three categories, inkjet-based bioprinting, pressure-assisted bioprinting and laser-assisted bioprinting, based on their underlying printing principles. These various printing technologies have their advantages and limitations. Bioprinting utilizes biomaterials, cells or cell factors as a “bioink” to fabricate prospective tissue structures. Biomaterial parameters such as biocompatibility, cell viability and the cellular microenvironment strongly influence the printed product. Various printing technologies have been investigated, and great progress has been made in printing various types of tissue, including vasculature, heart, bone, cartilage, skin and liver. This review introduces basic principles and key aspects of some frequently used printing technologies. We focus on recent advances in three-dimensional printing applications, current challenges and future directions.",
"title": ""
},
{
"docid": "3510615d09b9cc7cf3be154d50da7e27",
"text": "We propose a non-parametric model for pedestrian motion based on Gaussian Process regression, in which trajectory data are modelled by regressing relative motion against current position. We show how the underlying model can be learned in an unsupervised fashion, demonstrating this on two databases collected from static surveillance cameras. We furthermore exemplify the use of model for prediction, comparing the recently proposed GP-Bayesfilters with a Monte Carlo method. We illustrate the benefit of this approach for long term motion prediction where parametric models such as Kalman Filters would perform poorly.",
"title": ""
},
{
"docid": "cb59c880b3848b7518264f305cfea32a",
"text": "Leakage current reduction is crucial for the transformerless photovoltaic inverters. The conventional three-phase current source H6 inverter suffers from the large leakage current, which restricts its application to transformerless PV systems. In order to overcome the limitations, a new three-phase current source H7 (CH7) inverter is proposed in this paper. Only one additional Insulated Gate Bipolar Transistor is needed, but the leakage current can be effectively suppressed with a new space vector modulation (SVM). Finally, the experimental tests are carried out on the proposed CH7 inverter, and the experimental results verify the effectiveness of the proposed topology and SVM method.",
"title": ""
},
{
"docid": "49a53a8cb649c93d685e832575acdb28",
"text": "We address the vehicle detection and classification problems using Deep Neural Networks (DNNs) approaches. Here we answer to questions that are specific to our application including how to utilize DNN for vehicle detection, what features are useful for vehicle classification, and how to extend a model trained on a limited size dataset, to the cases of extreme lighting condition. Answering these questions we propose our approach that outperforms state-of-the-art methods, and achieves promising results on image with extreme lighting conditions.",
"title": ""
},
{
"docid": "cb2d8e7b01de6cdb5a303a38cc11e211",
"text": "Developing sensor network applications demands a new set of tools to aid programmers. A number of simulation environments have been developed that provide varying degrees of scalability, realism, and detail for understanding the behavior of sensor networks. To date, however, none of these tools have addressed one of the most important aspects of sensor application design: that of power consumption. While simple approximations of overall power usage can be derived from estimates of node duty cycle and communication rates, these techniques often fail to capture the detailed, low-level energy requirements of the CPU, radio, sensors, and other peripherals.\n In this paper, we present, a scalable simulation environment for wireless sensor networks that provides an accurate, per-node estimate of power consumption. PowerTOSSIM is an extension to TOSSIM, an event-driven simulation environment for TinyOS applications. In PowerTOSSIM, TinyOS components corresponding to specific hardware peripherals (such as the radio, EEPROM, LEDs, and so forth) are instrumented to obtain a trace of each device's activity during the simulation runPowerTOSSIM employs a novel code-transformation technique to estimate the number of CPU cycles executed by each node, eliminating the need for expensive instruction-level simulation of sensor nodes. PowerTOSSIM includes a detailed model of hardware energy consumption based on the Mica2 sensor node platform. Through instrumentation of actual sensor nodes, we demonstrate that PowerTOSSIM provides accurate estimation of power consumption for a range of applications and scales to support very large simulations.",
"title": ""
},
{
"docid": "d0dd13964de87acab0f7fe76585d0bbf",
"text": "The continual growth of electronic medical record (EMR) databases has paved the way for many data mining applications, including the discovery of novel disease-drug associations and the prediction of patient survival rates. However, these tasks are hindered because EMRs are usually segmented or incomplete. EMR analysis is further limited by the overabundance of medical term synonyms and morphologies, which causes existing techniques to mismatch records containing semantically similar but lexically distinct terms. Current solutions fill in missing values with techniques that tend to introduce noise rather than reduce it. In this paper, we propose to simultaneously infer missing data and solve semantic mismatching in EMRs by first integrating EMR data with molecular interaction networks and domain knowledge to build the HEMnet, a heterogeneous medical information network. We then project this network onto a low-dimensional space, and group entities in the network according to their relative distances. Lastly, we use this entity distance information to enrich the original EMRs. We evaluate the effectiveness of this method according to its ability to separate patients with dissimilar survival functions. We show that our method can obtain significant (p-value < 0.01) results for each cancer subtype in a lung cancer dataset, while the baselines cannot.",
"title": ""
},
{
"docid": "0af8bbdda9482f24dfdfc41046382e1b",
"text": "In this paper, we have examined the effectiveness of \"style matrix\" which is used in the works on style transfer and texture synthesis by Gatys et al. in the context of image retrieval as image features. A style matrix is presented by Gram matrix of the feature maps in a deep convolutional neural network. We proposed a style vector which are generated from a style matrix with PCA dimension reduction. In the experiments, we evaluate image retrieval performance using artistic images downloaded from Wikiarts.org regarding both artistic styles ans artists. We have obtained 40.64% and 70.40% average precision for style search and artist search, respectively, both of which outperformed the results by common CNN features. In addition, we found PCA-compression boosted the performance.",
"title": ""
},
{
"docid": "4d297680cd342f46a5a706c4969273b8",
"text": "Theory on passwords has lagged practice, where large providers use back-end smarts to survive with imperfect technology.",
"title": ""
},
{
"docid": "36a694668a10bc0475f447adb1e09757",
"text": "Previous findings indicated that when people observe someone’s behavior, they spontaneously infer the traits and situations that cause the target person’s behavior. These inference processes are called spontaneous trait inferences (STIs) and spontaneous situation inferences (SSIs). While both patterns of inferences have been observed, no research has examined the extent to which people from different cultural backgrounds produce these inferences when information affords both trait and situation inferences. Based on the theoretical frameworks of social orientations and thinking styles, we hypothesized that European Canadians would be more likely to produce STIs than SSIs because of the individualistic/independent social orientation and the analytic thinking style dominant in North America, whereas Japanese would produce both STIs and SSIs equally because of the collectivistic/interdependent social orientation and the holistic thinking style dominant in East Asia. Employing the savings-in-relearning paradigm, we presented information that affords both STIs and SSIs and examined cultural differences in the extent of both inferences. The results supported our hypotheses. The relationships between culturally dominant styles of thought and the inference processes in impression formation are discussed.",
"title": ""
},
{
"docid": "03550fad9c5f21c69253f2bfc389fccc",
"text": "The design of a Ka dual-band circular polarizer by inserting a dielectric septum in the middle of the circular waveguide is discussed here. The dielectric septum is located in fixing slots, and by adjusting the dimension of the dual-compensation slots which are built in the orthogonal plane, the phase difference of 90deg at the center frequency for the dual-band can be achieved. Furthermore, the gradual changing structures at both ends of the dielectric septum are built for impedance matching for both Ex and Ey polarizations. The simple structure of this kind of polarizer can reduce the influence of manufacturing inaccuracy in the Ka-band. The measured phase difference is within 90degplusmn 4.5deg for both bands. In addition, the return losses for both Ex and Ey polarizations are better than -15 dB.",
"title": ""
},
{
"docid": "e0fc6fc1425bb5786847c3769c1ec943",
"text": "Developing manufacturing simulation models usually requires experts with knowledge of multiple areas including manufacturing, modeling, and simulation software. The expertise requirements increase for virtual factory models that include representations of manufacturing at multiple resolution levels. This paper reports on an initial effort to automatically generate virtual factory models using manufacturing configuration data in standard formats as the primary input. The execution of the virtual factory generates time series data in standard formats mimicking a real factory. Steps are described for auto-generation of model components in a software environment primarily oriented for model development via a graphic user interface. Advantages and limitations of the approach and the software environment used are discussed. The paper concludes with a discussion of challenges in verification and validation of the virtual factory prototype model with its multiple hierarchical models and future directions.",
"title": ""
},
{
"docid": "03eb1360ba9e3e38f082099ed08469ed",
"text": "In this paper some concept of fuzzy set have discussed and one fuzzy model have applied on agricultural farm for optimal allocation of different crops by considering maximization of net benefit, production and utilization of labour . Crisp values of the objective functions obtained from selected nondominated solutions are converted into triangular fuzzy numbers and ranking of those fuzzy numbers are done to make a decision. .",
"title": ""
},
{
"docid": "0742dcc602a216e41d3bfe47bffc7d30",
"text": "In this paper we study supervised and semi-supervised classification of e-mails. We consider two tasks: filing e-mails into folders and spam e-mail filtering. Firstly, in a supervised learning setting, we investigate the use of random forest for automatic e-mail filing into folders and spam e-mail filtering. We show that random forest is a good choice for these tasks as it runs fast on large and high dimensional databases, is easy to tune and is highly accurate, outperforming popular algorithms such as decision trees, support vector machines and naïve Bayes. We introduce a new accurate feature selector with linear time complexity. Secondly, we examine the applicability of the semi-supervised co-training paradigm for spam e-mail filtering by employing random forests, support vector machines, decision tree and naïve Bayes as base classifiers. The study shows that a classifier trained on a small set of labelled examples can be successfully boosted using unlabelled examples to accuracy rate of only 5% lower than a classifier trained on all labelled examples. We investigate the performance of co-training with one natural feature split and show that in the domain of spam e-mail filtering it can be as competitive as co-training with two natural feature splits.",
"title": ""
},
{
"docid": "b857bb7ceb60057991f45d1f2ce8453e",
"text": "We present DisCo, a novel display-camera communication system. DisCo enables displays and cameras to communicate with each other while also displaying and capturing images for human consumption. Messages are transmitted by temporally modulating the display brightness at high frequencies so that they are imperceptible to humans. Messages are received by a rolling shutter camera that converts the temporally modulated incident light into a spatial flicker pattern. In the captured image, the flicker pattern is superimposed on the pattern shown on the display. The flicker and the display pattern are separated by capturing two images with different exposures. The proposed system performs robustly in challenging real-world situations such as occlusion, variable display size, defocus blur, perspective distortion, and camera rotation. Unlike several existing visible light communication methods, DisCo works with off-the-shelf image sensors. It is compatible with a variety of sources (including displays, single LEDs), as well as reflective surfaces illuminated with light sources. We have built hardware prototypes that demonstrate DisCo’s performance in several scenarios. Because of its robustness, speed, ease of use, and generality, DisCo can be widely deployed in several applications, such as advertising, pairing of displays with cell phones, tagging objects in stores and museums, and indoor navigation.",
"title": ""
},
{
"docid": "680d755a3a6d8fcd926eb441fad5aa57",
"text": "DNA hybridization arrays simultaneously measure the expression level for thousands of genes. These measurements provide a “snapshot” of transcription levels within the cell. A major challenge in computational biology is to uncover, from such measurements, gene/protein interactions and key biological features of cellular systems.\nIn this paper, we propose a new framework for discovering interactions between genes based on multiple expression measurements This framework builds on the use of Bayesian networks for representing statistical dependencies. A Bayesian network is a graph-based model of joint multi-variate probability distributions that captures properties of conditional independence between variables. Such models are attractive for their ability to describe complex stochastic processes, and for providing clear methodologies for learning from (noisy) observations.\nWe start by showing how Bayesian networks can describe interactions between genes. We then present an efficient algorithm capable of learning such networks and statistical method to assess our confidence in their features. Finally, we apply this method to the S. cerevisiae cell-cycle measurements of Spellman et al. [35] to uncover biological features",
"title": ""
}
] |
scidocsrr
|
78e8c84ac1fa523d6a3688fc58456aea
|
Overview of the CL-SciSumm 2016 Shared Task
|
[
{
"docid": "eb6da64fe7dffde7fbc0a2520b435c87",
"text": "In this paper, we present our system addressing Task 1 of CL-SciSumm Shared Task at BIRNDL 2016. Our system makes use of lexical and syntactic dependency cues, and applies rule-based approach to extract text spans in the Reference Paper that accurately reflect the citances. Further, we make use of lexical cues to identify discourse facets of the paper to which cited text belongs. The lexical and syntactic cues are obtained on pre-processed text of the citances, and the reference paper. We report our results obtained for development set using our system for identifying reference scope of citances in this paper.",
"title": ""
}
] |
[
{
"docid": "366cd2f9b48715c0a987a0f77093a780",
"text": "Work processes involving dozens or hundreds of collaborators are complex and difficult to manage. Problems within the process may have severe organizational and financial consequences. Visualization helps monitor and analyze those processes. In this paper, we study the development of large software systems as an example of a complex work process. We introduce Developer Rivers, a timeline-based visualization technique that shows how developers work on software modules. The flow of developers' activity is visualized by a river metaphor: activities are transferred between modules represented as rivers. Interactively switching between hierarchically organized modules and workload metrics allows for exploring multiple facets of the work process. We study typical development patterns by applying our visualization to Python and the Linux kernel.",
"title": ""
},
{
"docid": "6ae739344034410a570b12a57db426e3",
"text": "In recent times we tend to use a number of surveillance systems for monitoring the targeted area. This requires an enormous amount of storage space along with a lot of human power in order to implement and monitor the area under surveillance. This is supposed to be costly and not a reliable process. In this paper we propose an intelligent surveillance system that continuously monitors the targeted area and detects motion in each and every frame. If the system detects motion in the targeted area then a notification is automatically sent to the user by sms and the video starts getting recorded till the motion is stopped. Using this method the required memory space for storing the video is reduced since it doesn't store the entire video but stores the video only when a motion is detected. This is achieved by using real time video processing using open CV (computer vision / machine vision) technology and raspberry pi system.",
"title": ""
},
{
"docid": "e1103ac7367206c5fb74d227c114e848",
"text": "Recently, subjectivity and sentiment analysis of Arabic has received much attention from the research community. In the past two years, an enormous number of references in the field have emerged compared to what has been published in previous years. In this paper, we present an updated survey of the emerging research on subjectivity and sentiment analysis of Arabic. We also highlight the challenges and future research directions in this field.",
"title": ""
},
{
"docid": "27b2f82780c4113bb8a234cac0cf38f9",
"text": "Conventional robot manipulators have singularities in their workspaces and constrained spatial movements. Flexible and soft robots provide a unique solution to overcome this limitation. Flexible robot arms have biologically inspired characteristics as flexible limbs and redundant degrees of freedom. From these special characteristics, flexible manipulators are able to develop abilities such as bend, stretch and adjusting stiffness to traverse a complex maze. Many researchers are working to improve capabilities of flexible arms by improving the number of degrees of freedoms and their methodologies. The proposed flexible robot arm is composed of multiple sections and each section contains three similar segments and a base segment. These segments act as the backbone of the basic structure and each section can be controlled by changing the length of three control wires. These control wires pass through each segment and are held in place by springs. This design provides each segment with 2 DOF. The proposed system single section can be bent 90° with respective to its centre axis. Kinematics of the flexible robot is derived with respect to the base segment.",
"title": ""
},
{
"docid": "9b74a6c2e165a75b202e2aa4df439f17",
"text": "State-of-the-art object recognition Convolutional Neural Networks (CNNs) are shown to be fooled by image agnostic perturbations, called universal adversarial perturbations. It is also observed that these perturbations generalize across multiple networks trained on the same target data. However, these algorithms require training data on which the CNNs were trained and compute adversarial perturbations via complex optimization. The fooling performance of these approaches is directly proportional to the amount of available training data. This makes them unsuitable for practical attacks since its unreasonable for an attacker to have access to the training data. In this paper, for the first time, we propose a novel data independent approach to generate image agnostic perturbations for a range of CNNs trained for object recognition. We further show that these perturbations are transferable across multiple network architectures trained either on same or different data. In the absence of data, our method generates universal perturbations efficiently via fooling the features learned at multiple layers thereby causing CNNs to misclassify. Experiments demonstrate impressive fooling rates and surprising transferability for the proposed universal perturbations generated without any training data.",
"title": ""
},
{
"docid": "5ed0b80a7b9da6e9d7c87bf8d12e5373",
"text": "Light-field cameras are now used in consumer and industrial applications. Recent papers and products have demonstrated practical depth recovery algorithms from a passive single-shot capture. However, current light-field capture devices have narrow baselines and constrained spatial resolution; therefore, the accuracy of depth recovery is limited, requiring heavy regularization and producing planar depths that do not resemble the actual geometry. Using shading information is essential to improve the shape estimation. We develop an improved technique for local shape estimation from defocus and correspondence cues, and show how shading can be used to further refine the depth. Light-field cameras are able to capture both spatial and angular data, suitable for refocusing. By locally refocusing each spatial pixel to its respective estimated depth, we produce an all-in-focus image where all viewpoints converge onto a point in the scene. Therefore, the angular pixels have angular coherence, which exhibits three properties: photo consistency, depth consistency, and shading consistency. We propose a new framework that uses angular coherence to optimize depth and shading. The optimization framework estimates both general lighting in natural scenes and shading to improve depth regularization. Our method outperforms current state-of-the-art light-field depth estimation algorithms in multiple scenarios, including real images.",
"title": ""
},
{
"docid": "c0890c01e51ddedf881cd3d110efa6e2",
"text": "A residual networks family with hundreds or even thousands of layers dominates major image recognition tasks, but building a network by simply stacking residual blocks inevitably limits its optimization ability. This paper proposes a novel residual network architecture, residual networks of residual networks (RoR), to dig the optimization ability of residual networks. RoR substitutes optimizing residual mapping of residual mapping for optimizing original residual mapping. In particular, RoR adds levelwise shortcut connections upon original residual networks to promote the learning capability of residual networks. More importantly, RoR can be applied to various kinds of residual networks (ResNets, Pre-ResNets, and WRN) and significantly boost their performance. Our experiments demonstrate the effectiveness and versatility of RoR, where it achieves the best performance in all residual-network-like structures. Our RoR-3-WRN58-4 + SD models achieve new state-of-the-art results on CIFAR-10, CIFAR-100, and SVHN, with the test errors of 3.77%, 19.73%, and 1.59%, respectively. RoR-3 models also achieve state-of-the-art results compared with ResNets on the ImageNet data set.",
"title": ""
},
{
"docid": "1fc10d626c7a06112a613f223391de26",
"text": "The question of what makes a face attractive, and whether our preferences come from culture or biology, has fascinated scholars for centuries. Variation in the ideals of beauty across societies and historical periods suggests that standards of beauty are set by cultural convention. Recent evidence challenges this view, however, with infants as young as 2 months of age preferring to look at faces that adults find attractive (Langlois et al., 1987), and people from different cultures showing considerable agreement about which faces are attractive (Cun-for a review). These findings raise the possibility that some standards of beauty may be set by nature rather than culture. Consistent with this view, specific preferences have been identified that appear to be part of our biological rather than Such a preference would be adaptive if stabilizing selection operates on facial traits (Symons, 1979), or if averageness is associated with resistance to pathogens , as some have suggested Evolutionary biologists have proposed that a preference for symmetry would also be adaptive because symmetry is a signal of health and genetic quality Only high-quality individuals can maintain symmetric development in the face of environmental and genetic stresses. Symmetric bodies are certainly attractive to humans and many other animals but what about symmetric faces? Biologists suggest that facial symmetry should be attractive because it may signal mate quality High levels of facial asymmetry in individuals with chro-mosomal abnormalities (e.g., Down's syndrome and Tri-somy 14; for a review, see Thornhill & Møller, 1997) are consistent with this view, as is recent evidence that facial symmetry levels correlate with emotional and psychological health (Shackelford & Larsen, 1997). In this paper, we investigate whether people can detect subtle differences in facial symmetry and whether these differences are associated with differences in perceived attractiveness. Recently, Kowner (1996) has reported that faces with normal levels of asymmetry are more attractive than perfectly symmetric versions of the same faces. 3 Similar results have been reported by Langlois et al. and an anonymous reviewer for helpful comments on an earlier version of the manuscript. We also thank Graham Byatt for assistance with stimulus construction, Linda Jeffery for assistance with the figures, and Alison Clark and Catherine Hickford for assistance with data collection and statistical analysis in Experiment 1A. Evolutionary, as well as cultural, pressures may contribute to our perceptions of facial attractiveness. Biologists predict that facial symmetry should be attractive, because it may signal …",
"title": ""
},
{
"docid": "4349d7307567efa5297a4bdd91723336",
"text": "Smartphones act as mobile entertainment units where a user can: watch videos, listen to music, update blogs, as well as audio and video blogging. The aim of this study was to review the impact of smartphones on academic performance of students in higher learning institutions. Intensive literature review was done finding out the disadvantages and advantages brought by smartphones in academic arena. In the future, research will be conducted at Ruaha Catholic University to find out whether students are benefiting from using smartphones in their daily studies and whether do they affect their GPA at the end of the year. Keywords— Smartphones, Academic performance, higher learning students, Addictions, GPA, RUCU.",
"title": ""
},
{
"docid": "39ce21cf294147475b9bfe48851dcebe",
"text": "In this paper, we introduce the Action Schema Network (ASNet): a neural network architecture for learning generalised policies for probabilistic planning problems. By mimicking the relational structure of planning problems, ASNets are able to adopt a weight sharing scheme which allows the network to be applied to any problem from a given planning domain. This allows the cost of training the network to be amortised over all problems in that domain. Further, we propose a training method which balances exploration and supervised training on small problems to produce a policy which remains robust when evaluated on larger problems. In experiments, we show that ASNet’s learning capability allows it to significantly outperform traditional non-learning planners in several challenging domains.",
"title": ""
},
{
"docid": "22a8e467c97ffa7896d7fbbe700debbb",
"text": "Automated detection and 3D modelling of objects in laser range data is of great importance in many app lications. Existing approaches to object detection in range data are li mited to either 2.5D data (e.g. range images) or si mple objects with a parametric form (e.g. spheres). This paper describes a new app ro ch to the detection of 3D objects with arbitrary shapes in a point cloud. We present an extension of the generalized Hough trans form to 3D data, which can be used to detect instan ces of an object model in laser range data, independent of the scale and orientatio of the object. We also discuss the computational complexity of the method and provide cost-reduction strategies that can be emplo yed to improve the efficiency of the method.",
"title": ""
},
{
"docid": "44a5ea6fee136e66e1d89fb681f84805",
"text": "The content of images users post to their social media is driven in part by personality. In this study, we analyze how Twitter profile images vary with the personality of the users posting them. In our main analysis, we use profile images from over 66,000 users whose personality we estimate based on their tweets. To facilitate interpretability, we focus our analysis on aesthetic and facial features and control for demographic variation in image features and personality. Our results show significant differences in profile picture choice between personality traits, and that these can be harnessed to predict personality traits with robust accuracy. For example, agreeable and conscientious users display more positive emotions in their profile pictures, while users high in openness prefer more aesthetic photos.",
"title": ""
},
{
"docid": "4ef75fc674260d18682c23b665a99cbe",
"text": "FtsH proteins have dual chaperone-protease activities and are involved in protein quality control under stress conditions. Although the functional role of FtsH proteins has been clearly established, the regulatory mechanisms controlling ftsH expression in gram-positive bacteria remain largely unknown. Here we show that ftsH of Lactobacillus plantarum WCFS1 is transiently induced at the transcriptional level upon a temperature upshift. In addition, disruption of ftsH negatively affected the growth of L. plantarum at high temperatures. Sequence analysis and mapping of the ftsH transcriptional start site revealed a potential operator sequence for the CtsR repressor, partially overlapping the -35 sequence of the ftsH promoter. In order to verify whether CtsR is able to recognize and bind the ftsH promoter, CtsR proteins of Bacillus subtilis and L. plantarum were overproduced, purified, and used in DNA binding assays. CtsR from both species bound specifically to the ftsH promoter, generating a single protein-DNA complex, suggesting that CtsR may control the expression of L. plantarum ftsH. In order to confirm this hypothesis, a DeltactsR mutant strain of L. plantarum was generated. Expression of ftsH in the DeltactsR mutant strain was strongly upregulated, indicating that ftsH of L. plantarum is negatively controlled by CtsR. This is the first example of an ftsH gene controlled by the CtsR repressor, and the first of the low-G+C gram-positive bacteria where the regulatory mechanism has been identified.",
"title": ""
},
{
"docid": "132ea515f14987f28ed1ace699260d85",
"text": "In this paper we propose, describe and evaluate the novel motion capture (MoCap) data averaging framework. It incorporates hierarchical kinematic model, angle coordinates’ preprocessing methods, that recalculate the original MoCap recording making it applicable for further averaging algorithms, and finally signals averaging processing. We have tested two signal averaging methods namely Kalman Filter (KF) and Dynamic Time Warping barycenter averaging (DBA). The propose methods have been tested on MoCap recordings of elite Karate athlete, multiple champion of Oyama karate knockdown kumite who performed 28 different karate techniques repeated 10 times each. The proposed methods proved to have not only high effectiveness measured with root-mean-square deviation (4.04 ± 5.03 degrees for KF and 5.57 ± 6.27 for DBA) and normalized Dynamic Time Warping distance (0.90 ± 1.58 degrees for KF and 0.93 ± 1.23 for DBA), but also the reconstruction and visualization of those recordings persists all crucial aspects of those complicated actions. The proposed methodology has many important applications in classification, clustering, kinematic analysis and coaching. Our approach generates an averaged full body motion template that can be practically used for example for human actions recognition. In order to prove it we have evaluated templates generated by our method in human action classification tasks using DTW classifier. We have made two experiments. In first leave - one - out cross - validation we have obtained 100% correct recognitions. In second experiment when we classified recordings of one person using templates of another recognition rate 94.2% was obtained.",
"title": ""
},
{
"docid": "9507febd41296b63e8a6434eb27400f9",
"text": "This paper presents a new approach for automatic concept extraction, using grammatical parsers and Latent Semantic Analysis. The methodology is described, also the tool used to build the benchmarkingcorpus. The results obtained on student essays shows good inter-rater agreement and promising machine extraction performance. Concept extraction is the first step to automatically extract concept maps fromstudent’s essays or Concept Map Mining.",
"title": ""
},
{
"docid": "a89fe7e741003b873ecab38bf7c7c3fb",
"text": "Commercially available glucose measurement device for diabetes monitoring require extracting of blood and this means there will be a physical contact with human body. Demand on non-invasive measurement has invites research and development of new detection methods to measure blood glucose level. In this work, a very sensitive optical polarimetry measurement technique using ratio-metric photon counting detection has been introduced and tested for a range of known glucose concentrations that mimic the level of glucose in human blood. The setup utilizes 785nm diode laser that emits weak coherent optical signal onto glucose concentration samples in aqueous. The result shows a linear proportional of different glucose concentration and successfully detected 10260 mg/dl to 260 mg/dl glucose samples. This indicates a potential improvement method for non-invasive glucose measurement by a sensitive polarimetry based optical sensor in single photon level for biomedical applications.",
"title": ""
},
{
"docid": "7aaa535e1294e9bcce7d0d40caff626e",
"text": "Event extraction is the task of detecting certain specified types of events that are mentioned in the source language data. The state-of-the-art research on the task is transductive inference (e.g. cross-event inference). In this paper, we propose a new method of event extraction by well using cross-entity inference. In contrast to previous inference methods, we regard entitytype consistency as key feature to predict event mentions. We adopt this inference method to improve the traditional sentence-level event extraction system. Experiments show that we can get 8.6% gain in trigger (event) identification, and more than 11.8% gain for argument (role) classification in ACE event extraction.",
"title": ""
},
{
"docid": "1eda9ea5678debcc886c996162fa475c",
"text": "The main purpose of the study is to examine the impact of parent’s occupation and family income on children performance. For this study a survey was conducted in Southern Punjab. The sample of 15oo parents were collected through a questionnaire using probability sampling technique that is Simple Random Sampling. All the analysis has been carried out on SPSS (Statistical Package for the Social Sciences). Chisquare test is applied to test the effect of parent’s occupation and family income on children’s performance. The results of the study specify that parent’soccupation and family incomehave significant impact on children’s performance.Parents play an important role in child development. Parents with good economic status provide better facilities to their children, results in better performance of the children.",
"title": ""
},
{
"docid": "887e16278cfac025c15655375a72b65c",
"text": "The classification of tweets into polarity classes is a popular task in sentiment analysis. State-of-the-art solutions to this problem are based on supervised machine learning models trained from manually annotated examples. A drawback of these approaches is the high cost involved in data annotation. Two freely available resources that can be exploited to solve the problem are: 1) large amounts of unlabelled tweets obtained from the Twitter API and 2) prior lexical knowledge in the form of opinion lexicons. In this paper, we propose Annotate-Sample-Average (ASA), a distant supervision method that uses these two resources to generate synthetic training data for Twitter polarity classification. Positive and negative training instances are generated by sampling and averaging unlabelled tweets containing words with the corresponding polarity. Polarity of words is determined from a given polarity lexicon. Our experimental results show that the training data generated by ASA (after tuning its parameters) produces a classifier that performs significantly better than a classifier trained from tweets annotated with emoticons and a classifier trained, without any sampling and averaging, from tweets annotated according to the polarity of their words.",
"title": ""
}
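The Annotate-Sample-Average passage above is concrete enough to sketch: positive and negative synthetic training instances are built by sampling unlabelled tweets that contain lexicon words of the target polarity and averaging their feature vectors. The sketch below is a minimal illustration of that idea, not the authors' implementation; the lexicon, tweets, sample size and bag-of-words pipeline are all stand-ins.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

# Toy opinion lexicon; a real run would use a published polarity lexicon.
LEXICON = {"good": "pos", "love": "pos", "bad": "neg", "hate": "neg"}

def asa_instances(tweets, vectorizer, polarity, n_instances, sample_size, rng):
    """ASA-style synthetic instances: sample tweets containing words of the
    requested polarity and average their bag-of-words vectors (illustrative)."""
    pool = [t for t in tweets
            if any(LEXICON.get(w) == polarity for w in t.lower().split())]
    X = vectorizer.transform(pool).toarray()
    out = []
    for _ in range(n_instances):
        idx = rng.choice(len(pool), size=sample_size, replace=True)
        out.append(X[idx].mean(axis=0))
    return np.vstack(out)

tweets = ["I love this phone", "such a bad day", "good vibes only",
          "I hate waiting", "really love the new update", "bad service again"]
vec = CountVectorizer().fit(tweets)
rng = np.random.default_rng(0)
X_pos = asa_instances(tweets, vec, "pos", n_instances=50, sample_size=3, rng=rng)
X_neg = asa_instances(tweets, vec, "neg", n_instances=50, sample_size=3, rng=rng)
X_train = np.vstack([X_pos, X_neg])
y_train = np.array([1] * len(X_pos) + [0] * len(X_neg))  # feed to any classifier
```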
] |
scidocsrr
|
3675b67fd4e37f788dd02f44e921939e
|
Overview of the NLPCC-ICCPOL 2016 Shared Task: Chinese Word Similarity Measurement
|
[
{
"docid": "502abb9980735a090a2f2a8b7510af9b",
"text": "This paper presents and compares WordNetbased and distributional similarity approaches. The strengths and weaknesses of each approach regarding similarity and relatedness tasks are discussed, and a combination is presented. Each of our methods independently provide the best results in their class on the RG and WordSim353 datasets, and a supervised combination of them yields the best published results on all datasets. Finally, we pioneer cross-lingual similarity, showing that our methods are easily adapted for a cross-lingual task with minor losses.",
"title": ""
}
] |
[
{
"docid": "97a7ebf3cffa55f97e28ca42d1239131",
"text": "The eeect of selecting varying numbers and kinds of features for use in predicting category membership was investigated on the Reuters and MUC-3 text categorization data sets. Good categorization performance was achieved using a statistical classiier and a proportional assignment strategy. The optimal feature set size for word-based indexing was found to be surprisingly low (10 to 15 features) despite the large training sets. The extraction of new text features by syntactic analysis and feature clustering was investigated on the Reuters data set. Syntactic indexing phrases, clusters of these phrases, and clusters of words were all found to provide less eeective representations than individual words.",
"title": ""
},
{
"docid": "8f4f687aff724496efcc37ff7f6bbbeb",
"text": "Sentiment Analysis is new way of machine learning to extract opinion orientation (positive, negative, neutral) from a text segment written for any product, organization, person or any other entity. Sentiment Analysis can be used to predict the mood of people that have impact on stock prices, therefore it can help in prediction of actual stock movement. In order to exploit the benefits of sentiment analysis in stock market industry we have performed sentiment analysis on tweets related to Apple products, which are extracted from StockTwits (a social networking site) from 2010 to 2017. Along with tweets, we have also used market index data which is extracted from Yahoo Finance for the same period. The sentiment score of a tweet is calculated by sentiment analysis of tweets through SVM. As a result each tweet is categorized as bullish or bearish. Then sentiment score and market data is used to build a SVM model to predict next day's stock movement. Results show that there is positive relation between people opinion and market data and proposed work has an accuracy of 76.65% in stock prediction.",
"title": ""
},
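The stock-prediction passage above describes a two-stage pipeline: an SVM classifies tweets as bullish or bearish, and the aggregated daily sentiment is combined with market data to predict the next day's movement. A hedged sketch of that pipeline with toy data follows; the tweets, labels and daily figures are invented for illustration, whereas the study itself used StockTwits and Yahoo Finance data.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Stage 1: bullish/bearish tweet classifier (toy labelled tweets for illustration).
tweets = ["$AAPL to the moon", "selling everything, looks weak",
          "great earnings, buying more", "this dip will get worse"]
labels = [1, 0, 1, 0]  # 1 = bullish, 0 = bearish
tweet_clf = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(tweets, labels)

# Stage 2: combine each day's aggregate sentiment with a market feature
# (here a single synthetic same-day return) to predict next-day movement.
day_sentiment = np.array([0.8, 0.2, 0.6, 0.4, 0.9])       # share of bullish tweets per day
day_return = np.array([0.01, -0.02, 0.005, -0.01, 0.02])  # same-day index return
next_day_up = np.array([1, 0, 1, 0, 1])
X = np.column_stack([day_sentiment, day_return])
movement_clf = LinearSVC().fit(X, next_day_up)
print(movement_clf.predict([[0.7, 0.0]]))
```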
{
"docid": "21eddfd81b640fc1810723e93f94ae5d",
"text": "R. B. Gnanajothi, Topics in graph theory, Ph. D. thesis, Madurai Kamaraj University, India, 1991. E. M. Badr, On the Odd Gracefulness of Cyclic Snakes With Pendant Edges, International journal on applications of graph theory in wireless ad hoc networks and sensor networks (GRAPH-HOC) Vol. 4, No. 4, December 2012. E. M. Badr, M. I. Moussa & K. Kathiresan (2011): Crown graphs and subdivision of ladders are odd graceful, International Journal of Computer Mathematics, 88:17, 3570-3576. A. Rosa, On certain valuation of the vertices of a graph, Theory of Graphs (International Symposium, Rome, July 1966), Gordon and Breach, New York and Dunod Paris (1967) 349-355. A. Solairaju & P. Muruganantham, Even Vertex Gracefulness of Fan Graph,",
"title": ""
},
{
"docid": "c294a7817e456736135357484f9141ed",
"text": "Obesity continues to be one of the major public health problems due to its high prevalence and co-morbidities. Common co-morbidities not only include cardiometabolic disorders but also mood and cognitive disorders. Obese subjects often show deficits in memory, learning and executive functions compared to normal weight subjects. Epidemiological studies also indicate that obesity is associated with a higher risk of developing depression and anxiety, and vice versa. These associations between pathologies that presumably have different etiologies suggest shared pathological mechanisms. Gut microbiota is a mediating factor between the environmental pressures (e.g., diet, lifestyle) and host physiology, and its alteration could partly explain the cross-link between those pathologies. Westernized dietary patterns are known to be a major cause of the obesity epidemic, which also promotes a dysbiotic drift in the gut microbiota; this, in turn, seems to contribute to obesity-related complications. Experimental studies in animal models and, to a lesser extent, in humans suggest that the obesity-associated microbiota may contribute to the endocrine, neurochemical and inflammatory alterations underlying obesity and its comorbidities. These include dysregulation of the HPA-axis with overproduction of glucocorticoids, alterations in levels of neuroactive metabolites (e.g., neurotransmitters, short-chain fatty acids) and activation of a pro-inflammatory milieu that can cause neuro-inflammation. This review updates current knowledge about the role and mode of action of the gut microbiota in the cross-link between energy metabolism, mood and cognitive function.",
"title": ""
},
{
"docid": "e84b6bbb2eaee0edb6ac65d585056448",
"text": "As memory accesses become slower with respect to the processor and consume more power with increasing memory size, the focus of memory performance and power consumption has become increasingly important. With the trend to develop multi-threaded, multi-core processors, the demands on the memory system will continue to scale. However, determining the optimal memory system configuration is non-trivial. The memory system performance is sensitive to a large number of parameters. Each of these parameters take on a number of values and interact in fashions that make overall trends difficult to discern. A comparison of the memory system architectures becomes even harder when we add the dimensions of power consumption and manufacturing cost. Unfortunately, there is a lack of tools in the public-domain that support such studies. Therefore, we introduce DRAMsim, a detailed and highly-configurable C-based memory system simulator to fill this gap. DRAMsim implements detailed timing models for a variety of existing memories, including SDRAM, DDR, DDR2, DRDRAM and FB-DIMM, with the capability to easily vary their parameters. It also models the power consumption of SDRAM and its derivatives. It can be used as a standalone simulator or as part of a more comprehensive system-level model. We have successfully integrated DRAMsim into a variety of simulators including MASE [15], Sim-alpha [14], BOCHS[2] and GEMS[13]. The simulator can be downloaded from www.ece.umd.edu/dramsim.",
"title": ""
},
{
"docid": "aca5ad6b3bbd9b52058cde1a71777202",
"text": "Despite its high incidence and the great development of literature, there is still controversy about the optimal management of Achilles tendon rupture. The several techniques proposed to treat acute ruptures can essentially be classifi ed into: conservative management (cast immobilization or functional bracing), open repair, minimally invasive technique and percutaneous repair with or without augmentation. Although chronic ruptures represent a different chapter, the ideal treatment seems to be surgical too (debridement, local tissue transfer, augmentation and synthetic grafts). In this paper we reviewed the literature on acute injuries. Review Article Achilles Tendon Injuries: Comparison of Different Conservative and Surgical Treatment and Rehabilitation Alessandro Bistolfi , Jessica Zanovello, Elisa Lioce, Lorenzo Morino, Raul Cerlon, Alessandro Aprato* and Giuseppe Massazza Medical school, University of Turin, Turin, Italy *Address for Correspondence: Alessandro Aprato, Medical School, University of Turin, Viale 25 Aprile 137 int 6 10131 Torino, Italy, Tel: +39 338 6880640; Email: ale_aprato@hotmail.com Submitted: 03 January 2017 Approved: 13 February 2017 Published: 21 February 2017 Copyright: 2017 Bistolfi A, et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. How to cite this article: Bistolfi A, Zanovello J, Lioce E, Morino L, Cerlon R, et al. Achilles Tendon Injuries: Comparison of Different Conservative and Surgical Treatment and Rehabilitation. J Nov Physiother Rehabil. 2017; 1: 039-053. https://doi.org/10.29328/journal.jnpr.1001006 INTRODUCTION The Achilles is the strongest and the largest tendon in the body and it can normally withstand several times a subject’s body weight. Achilles tendon rupture is frequent and it has been shown to cause signi icant morbidity and, regardless of treatment, major functional de icits persist 1 year after acute Achilles tendon rupture [1] and only 50-60% of elite athletes return to pre-injury levels following the rupture [2]. Most Achilles tendon rupture is promptly diagnosed, but at irst exam physicians may miss up to 20% of these lesions [3]. The de inition of an old, chronic or neglected rupture is variable: the most used timeframe is 4 to 10 weeks [4]. The diagnosis of chronic rupture can be more dif icult because the gap palpable in acute ruptures is no longer present and it has been replaced by ibrous scar tissue. Typically chronic rupture occur 2 to 6 cm above the calcaneal insertion with extensive scar tissue deposition between the retracted tendon stumps [5], and the blood supply to this area is poor. In this lesion the tendon end usually has been retracted so the management must be different from the acute lesion’s one. Despite its high incidence and the great development of literature about this topic, there is still controversy about the optimal management of Achilles tendon rupture [6]. The several techniques proposed to treat acute ruptures can essentially be classi ied into: conservative management (cast immobilization or functional bracing), open repair, minimally invasive technique and percutaneous repair [7] with or without augmentation. Chronic ruptures represent a different chapter and the ideal treatment seems to be surgical [3]: the techniques frequently used are debridement, local tissue transfer, augmentation and synthetic grafts [8]. 
Achilles Tendon Injuries: Comparison of Different Conservative and Surgical Treatment and Rehabilitation Published: February 21, 2017 040 Conservative treatment using a short leg resting cast in an equinus position is probably justi ied for elderly patients who have lower functional requirements or increased risk of surgical healing, such as individuals with diabetes mellitus or in treatment with immunosuppressive drugs. In the conservative treatment, traditionally the ankle is immobilized in maximal plantar lexion, so as to re-approximate the two stumps, and a cast is worn to enable the tendon tissue to undergo biological repair. Advantages include the avoidance of surgical complications [9-11] and hospitalization, and the cost minimization. However, conservative treatment is associated with high rate of tendon re-rupture (up to 20%) [12]. Operative treatment can ensure tendon approximation and improve healing, and thus leads to a lower re-rupture rate (about 2-5%). However, complications such as wound infections, skin tethering, sural nerve damage and hypertrophic scar have been reported to range up to 34% [13]. The clinically most commonly used suture techniques for ruptured Achilles tendon are the Bunnell [14,15] and Kessler techniques [16-18]. Minimally invasive surgical techniques (using limited incisions or percutaneous techniques) are considered to reduce the risk of operative complications and appear successful in preventing re-rupture in cohort studies [19,20]. Ma and Grif ith originally described the percutaneous repair, which is a closed procedure performed under local anesthesia using various surgical techniques and instruments. The advantages in this technique are reduced rate of complications such as infections, nerve lesions or re-ruptures [21]. The surgical repair of a rupture of the Achilles tendon with the AchillonTM device and immediate weight-bearing has shown fewer complications and faster rehabilitation [22]. A thoughtful, comprehensive and responsive rehabilitation program is necessary after the operative treatment of acute Achilles lesions. First of all, the purposes of the rehabilitation program are to obtain a reduction of pain and swelling; secondly, progress toward the gradual recovery of ankle motion and power; lastly, the restoration of coordinated activity and safe return to daily life and athletic activity [23]. An important point to considerer is the immediate postoperative management, which includes immobilization of the ankle and limited or prohibited weight-bearing [24].",
"title": ""
},
{
"docid": "5a5b30b63944b92b168de7c17d5cdc5e",
"text": "We introduce the Densely Segmented Supermarket (D2S) dataset, a novel benchmark for instance-aware semantic segmentation in an industrial domain. It contains 21 000 high-resolution images with pixel-wise labels of all object instances. The objects comprise groceries and everyday products from 60 categories. The benchmark is designed such that it resembles the real-world setting of an automatic checkout, inventory, or warehouse system. The training images only contain objects of a single class on a homogeneous background, while the validation and test sets are much more complex and diverse. To further benchmark the robustness of instance segmentation methods, the scenes are acquired with different lightings, rotations, and backgrounds. We ensure that there are no ambiguities in the labels and that every instance is labeled comprehensively. The annotations are pixel-precise and allow using crops of single instances for articial data augmentation. The dataset covers several challenges highly relevant in the field, such as a limited amount of training data and a high diversity in the test and validation sets. The evaluation of state-of-the-art object detection and instance segmentation methods on D2S reveals significant room for improvement.",
"title": ""
},
{
"docid": "c9e5a1b9c18718cc20344837e10b08f7",
"text": "Reconnaissance is the initial and essential phase of a successful advanced persistent threat (APT). In many cases, attackers collect information from social media, such as professional social networks. This information is used to select members that can be exploited to penetrate the organization. Detecting such reconnaissance activity is extremely hard because it is performed outside the organization premises. In this paper, we propose a framework for management of social network honeypots to aid in detection of APTs at the reconnaissance phase. We discuss the challenges that such a framework faces, describe its main components, and present a case study based on the results of a field trial conducted with the cooperation of a large European organization. In the case study, we analyze the deployment process of the social network honeypots and their maintenance in real social networks. The honeypot profiles were successfully assimilated into the organizational social network and received suspicious friend requests and mail messages that revealed basic indications of a potential forthcoming attack. In addition, we explore the behavior of employees in professional social networks, and their resilience and vulnerability toward social network infiltration.",
"title": ""
},
{
"docid": "94229bd589a99a6a6b4691e4778b28fc",
"text": "Commercially available software components come with the built-in functionality often offering end-user more than they need. A fact that end-user has no or very little influence on component’s functionality promoted nonfunctional requirements which are getting more attention than ever before. In this paper, we identify some of the problems encountered when non-functional requirements for COTS software components need to be defined.",
"title": ""
},
{
"docid": "2da84ca7d7db508a6f9a443f2dbae7c1",
"text": "This paper proposes a computationally efficient approach to detecting objects natively in 3D point clouds using convolutional neural networks (CNNs). In particular, this is achieved by leveraging a feature-centric voting scheme to implement novel convolutional layers which explicitly exploit the sparsity encountered in the input. To this end, we examine the trade-off between accuracy and speed for different architectures and additionally propose to use an L1 penalty on the filter activations to further encourage sparsity in the intermediate representations. To the best of our knowledge, this is the first work to propose sparse convolutional layers and L1 regularisation for efficient large-scale processing of 3D data. We demonstrate the efficacy of our approach on the KITTI object detection benchmark and show that VoteSDeep models with as few as three layers outperform the previous state of the art in both laser and laser-vision based approaches by margins of up to 40% while remaining highly competitive in terms of processing time.",
"title": ""
},
{
"docid": "35dbef4cc4b8588d451008b8156f326f",
"text": "Raman spectroscopy is a powerful tool for studying the biochemical composition of tissues and cells in the human body. We describe the initial results of a feasibility study to design and build a miniature, fiber optic probe incorporated into a standard hypodermic needle. This probe is intended for use in optical biopsies of solid tissues to provide valuable information of disease type, such as in the lymphatic system, breast, or prostate, or of such tissue types as muscle, fat, or spinal, when identifying a critical injection site. The optical design and fabrication of this probe is described, and example spectra of various ex vivo samples are shown.",
"title": ""
},
{
"docid": "a78caf89bb51dca3a8a95f7736ae1b2b",
"text": "The understanding of sentences involves not only the retrieval of the meaning of single words, but the identification of the relation between a verb and its arguments. The way the brain manages to process word meaning and syntactic relations during language comprehension on-line still is a matter of debate. Here we review the different views discussed in the literature and report data from crucial experiments investigating the temporal and neurotopological parameters of different information types encoded in verbs, i.e. word category information, the verb's argument structure information, the verb's selectional restriction and the morphosyntactic information encoded in the verb's inflection. The neurophysiological indices of the processes dealing with these different information types suggest an initial independence of the processing of word category information from other information types as the basis of local phrase structure building, and a later processing stage during which different information types interact. The relative ordering of the subprocesses appears to be universal, whereas the absolute timing of when during later phrases interaction takes places varies as a function of when the relevant information becomes available. Moreover, the neurophysiological indices for non-local dependency relations vary as a function of the morphological richness of the language.",
"title": ""
},
{
"docid": "70374d2cbf730fab13c3e126359b59e8",
"text": "We define a new distance measure the resistor-average distance between two probability distributions that is closely related to the Kullback-Leibler distance. While the KullbackLeibler distance is asymmetric in the two distributions, the resistor-average distance is not. It arises from geometric considerations similar to those used to derive the Chernoff distance. Determining its relation to well-known distance measures reveals a new way to depict how commonly used distance measures relate to each other.",
"title": ""
},
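The resistor-average distance mentioned above is usually defined by combining the two directed Kullback-Leibler divergences the way resistors combine in parallel, 1/R(p,q) = 1/D(p||q) + 1/D(q||p), which makes it symmetric. The sketch below assumes that standard definition (it is not spelled out in the abstract) for discrete distributions.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions, in nats."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def resistor_average(p, q):
    """Harmonic ('parallel resistor') combination of the two directed KL divergences."""
    d_pq, d_qp = kl(p, q), kl(q, p)
    if d_pq == 0.0 and d_qp == 0.0:
        return 0.0
    return (d_pq * d_qp) / (d_pq + d_qp)  # equals 1 / (1/d_pq + 1/d_qp)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(resistor_average(p, q), resistor_average(q, p))  # same value: symmetric in p and q
```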
{
"docid": "38012834c3e533adad68fb0d8377f7db",
"text": "Undersampling the k -space data is widely adopted for acceleration of Magnetic Resonance Imaging (MRI). Current deep learning based approaches for supervised learning of MRI image reconstruction employ real-valued operations and representations by treating complex valued k-space/spatial-space as real values. In this paper, we propose complex dense fully convolutional neural network (CDFNet) for learning to de-alias the reconstruction artifacts within undersampled MRI images. We fashioned a densely-connected fully convolutional block tailored for complex-valued inputs by introducing dedicated layers such as complex convolution, batch normalization, non-linearities etc. CDFNet leverages the inherently complex-valued nature of input k -space and learns richer representations. We demonstrate improved perceptual quality and recovery of anatomical structures through CDFNet in contrast to its realvalued counterparts.",
"title": ""
},
{
"docid": "0fb9b4577da65280e664eee48a76fd3a",
"text": "We describe a set of rendering techniques for an autostereoscopic light field display able to present interactive 3D graphics to multiple simultaneous viewers 360 degrees around the display. The display consists of a high-speed video projector, a spinning mirror covered by a holographic diffuser, and FPGA circuitry to decode specially rendered DVI video signals. The display uses a standard programmable graphics card to render over 5,000 images per second of interactive 3D graphics, projecting 360-degree views with 1.25 degree separation up to 20 updates per second. We describe the system's projection geometry and its calibration process, and we present a multiple-center-of-projection rendering technique for creating perspective-correct images from arbitrary viewpoints around the display. Our projection technique allows correct vertical perspective and parallax to be rendered for any height and distance when these parameters are known, and we demonstrate this effect with interactive raster graphics using a tracking system to measure the viewer's height and distance. We further apply our projection technique to the display of photographed light fields with accurate horizontal and vertical parallax. We conclude with a discussion of the display's visual accommodation performance and discuss techniques for displaying color imagery.",
"title": ""
},
{
"docid": "97b212bb8fde4859e368941a4e84ba90",
"text": "What appears to be a simple pattern of results—distributed-study opportunities usually produce bettermemory thanmassed-study opportunities—turns out to be quite complicated.Many ‘‘impostor’’ effects such as rehearsal borrowing, strategy changes during study, recency effects, and item skipping complicate the interpretation of spacing experiments. We suggest some best practices for future experiments that diverge from the typical spacing experiments in the literature. Next, we outline themajor theories that have been advanced to account for spacing studies while highlighting the critical experimental evidence that a theory of spacingmust explain. We then propose a tentative verbal theory based on the SAM/REMmodel that utilizes contextual variability and study-phase retrieval to explain the major findings, as well as predict some novel results. Next, we outline the major phenomena supporting testing as superior to restudy on long-term retention tests, and review theories of the testing phenomenon, along with some possible boundary conditions. Finally, we suggest some ways that spacing and testing can be integrated into the classroom, and ask to what extent educators already capitalize on these phenomena. Along the way, we present several new experiments that shed light on various facets of the spacing and testing effects.",
"title": ""
},
{
"docid": "9b2cd501685570f1d27394372cce0103",
"text": "We present a transceiver chipset consisting of a four channel receiver (Rx) and a single-channel transmitter (Tx) designed in a 200-GHz SiGe BiCMOS technology. Each Rx channel has a conversion gain of 19 dB with a typical single sideband noise figure of 10 dB at 1-MHz offset. The Tx includes two exclusively-enabled voltage-controlled oscillators on the same die to switch between two bands at 76-77 and 77-81 GHz. The phase noise is -97 dBc/Hz at 1-MHz offset. On-wafer, the output power is 2 × 13 dBm. At 3.3-V supply, the Rx chip draws 240 mA, while the Tx draws 530 mA. The power dissipation for the complete chipset is 2.5 W. The two chips are used as vehicles for a 77-GHz package test. The chips are packaged using the redistribution chip package technology. We compare on-wafer measurements with on-board results. The loss at the RF port due to the transition in the package results to be less than 1 dB at 77 GHz. The results demonstrate an excellent potential of the presented millimeter-wave package concept for millimeter-wave applications.",
"title": ""
},
{
"docid": "c1af668bdeeda5871e3bc6a602f022e6",
"text": "Within the parallel computing domain, field programmable gate arrays (FPGA) are no longer restricted to their traditional role as substitutes for application-specific integrated circuits-as hardware \"hidden\" from the end user. Several high performance computing vendors offer parallel re configurable computers employing user-programmable FPGAs. These exciting new architectures allow end-users to, in effect, create reconfigurable coprocessors targeting the computationally intensive parts of each problem. The increased capability of contemporary FPGAs coupled with the embarrassingly parallel nature of the Jacobi iterative method make the Jacobi method an ideal candidate for hardware acceleration. This paper introduces a parameterized design for a deeply pipelined, highly parallelized IEEE 64-bit floating-point version of the Jacobi method. A Jacobi circuit is implemented using a Xilinx Virtex-II Pro as the target FPGA device. Implementation statistics and performance estimates are presented.",
"title": ""
},
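The Jacobi iteration that the FPGA design above pipelines is easy to state in software: each component of the new iterate depends only on the previous iterate, which is what makes the method embarrassingly parallel. Below is a plain NumPy sketch of the reference computation; the hardware version would pipeline the same update in IEEE 64-bit floating point.

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=10_000):
    """Solve A x = b with the Jacobi iteration (A should be diagonally dominant)."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    D = np.diag(A)
    R = A - np.diagflat(D)        # off-diagonal part of A
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D   # every component can be updated in parallel
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

A = [[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]]
b = [1.0, 2.0, 3.0]
print(jacobi(A, b))
```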
{
"docid": "2fe5a40499012640b3b4d18b134b3b7e",
"text": "Hollywood has often been called the land of hunches and wild guesses. The uncertainty associated with the predictability of product demand makes the movie business a risky endeavor. Therefore, predicting the box-office receipts of a particular motion picture has intrigued many scholars and industry leaders as a difficult and challenging problem. In this study, with a rather large and feature rich dataset, we explored the use of data mining methods (e.g., artificial neural networks, decision trees and support vector machines along with information fusion based ensembles) to predict the financial performance of a movie at the box-office before its theatrical release. In our prediction models, we have converted the forecasting problem into a classification problem—rather than forecasting the point estimate of box-office receipts; we classified a movie (based on its box-office receipts) into nine categories, ranging from a “flop” to a “blockbuster.” Herein we present our exciting prediction results where we compared individual models to those of the ensamples.",
"title": ""
},
{
"docid": "ada79ede490e8427f542d85a2ea5266b",
"text": "We present QUINT, a live system for question answering over knowledge bases. QUINT automatically learns role-aligned utterance-query templates from user questions paired with their answers. When QUINT answers a question, it visualizes the complete derivation sequence from the natural language utterance to the final answer. The derivation provides an explanation of how the syntactic structure of the question was used to derive the structure of a SPARQL query, and how the phrases in the question were used to instantiate different parts of the query. When an answer seems unsatisfactory, the derivation provides valuable insights towards reformulating the question.",
"title": ""
}
] |
scidocsrr
|
c67c4c835030ccf135395648b6091073
|
An Empirical Comparison of Four Text Mining Methods
|
[
{
"docid": "d319a17ad2fa46e0278e0b0f51832f4b",
"text": "Automatic Essay Assessor (AEA) is a system that utilizes information retrieval techniques such as Latent Semantic Analysis (LSA), Probabilistic Latent Semantic Analysis (PLSA), and Latent Dirichlet Allocation (LDA) for automatic essay grading. The system uses learning materials and relatively few teacher-graded essays for calibrating the scoring mechanism before grading. We performed a series of experiments using LSA, PLSA and LDA for document comparisons in AEA. In addition to comparing the methods on a theoretical level, we compared the applicability of LSA, PLSA, and LDA to essay grading with empirical data. The results show that the use of learning materials as training data for the grading model outperforms the k-NN-based grading methods. In addition to this, we found that using LSA yielded slightly more accurate grading than PLSA and LDA. We also found that the division of the learning materials in the training data is crucial. It is better to divide learning materials into sentences than paragraphs.",
"title": ""
},
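The AEA passage above grades essays by comparing them to course materials in a latent semantic space calibrated with a few teacher-graded essays. The sketch below illustrates the LSA variant of that idea with invented material, essays and grades; the similarity-to-grade mapping here is a trivial linear fit, not the system's actual calibration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical course material split into sentences, plus a few graded essays.
material = ["Photosynthesis converts light energy into chemical energy.",
            "Chlorophyll absorbs light mainly in the blue and red wavelengths.",
            "The light reactions produce ATP and NADPH.",
            "The Calvin cycle fixes carbon dioxide into sugars."]
graded_essays = ["Plants turn light into chemical energy using chlorophyll.",
                 "Photosynthesis is about plants and sunlight."]
grades = np.array([5.0, 2.0])
new_essay = ["Light reactions make ATP which the Calvin cycle uses to fix CO2."]

vec = TfidfVectorizer().fit(material)
svd = TruncatedSVD(n_components=3, random_state=0).fit(vec.transform(material))

def lsa_score(texts):
    """Mean cosine similarity of each text to the course material in LSA space."""
    M = svd.transform(vec.transform(material))
    T = svd.transform(vec.transform(texts))
    return cosine_similarity(T, M).mean(axis=1)

# Calibrate a simple linear map from LSA similarity to grade, then apply it.
s_train, s_new = lsa_score(graded_essays), lsa_score(new_essay)
coeffs = np.polyfit(s_train, grades, deg=1)
print(np.polyval(coeffs, s_new))
```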
{
"docid": "0f58d491e74620f43df12ba0ec19cda8",
"text": "Latent Dirichlet allocation (LDA) (Blei, Ng, Jordan 2003) is a fully generative statistical language model on the content and topics of a corpus of documents. In this paper we apply a modification of LDA, the novel multi-corpus LDA technique for web spam classification. We create a bag-of-words document for every Web site and run LDA both on the corpus of sites labeled as spam and as non-spam. In this way collections of spam and non-spam topics are created in the training phase. In the test phase we take the union of these collections, and an unseen site is deemed spam if its total spam topic probability is above a threshold. As far as we know, this is the first web retrieval application of LDA. We test this method on the UK2007-WEBSPAM corpus, and reach a relative improvement of 11% in F-measure by a logistic regression based combination with strong link and content baseline classifiers.",
"title": ""
}
] |
[
{
"docid": "6d570aabfbf4f692fc36a0ef5151a469",
"text": "Background: Balance is a component of basic needs for daily activities and it plays an important role in static and dynamic activities. Core stabilization training is thought to improve balance, postural control, and reduce the risk of lower extremity injuries. The purpose of this study was to study the effect of core stabilizing program on balance in spastic diplegic cerebral palsy children. Subjects and Methods: Thirty diplegic cerebral palsy children from both sexes ranged in age from six to eight years participated in this study. They were assigned randomly into two groups of equal numbers, control group (A) children were received selective therapeutic exercises and study group (B) children were received selective therapeutic exercises plus core stabilizing program for eight weeks. Each patient of the two groups was evaluated before and after treatment by Biodex Balance System in laboratory of balance in faculty of physical therapy (antero posterior, medio lateral and overall stability). Patients in both groups received traditional physical therapy program for one hour per day and three sessions per week and group (B) were received core stabilizing program for eight weeks three times per week. Results: There was no significant difference between the two groups in all measured variables before wearing the orthosis (p>0.05), while there was significant difference when comparing pre and post mean values of all measured variables in each group (p<0.01). When comparing post mean values between both groups, the results revealed significant improvement in favor of group (B) (p<0.01). Conclusion: core stabilizing program is an effective therapeutic exercise to improve balance in diplegic cerebral palsy children.",
"title": ""
},
{
"docid": "ae8fde6c520fb4d1e18c4ff19d59a8d8",
"text": "Visual-to-auditory Sensory Substitution Devices (SSDs) are non-invasive sensory aids that provide visual information to the blind via their functioning senses, such as audition. For years SSDs have been confined to laboratory settings, but we believe the time has come to use them also for their original purpose of real-world practical visual rehabilitation. Here we demonstrate this potential by presenting for the first time new features of the EyeMusic SSD, which gives the user whole-scene shape, location & color information. These features include higher resolution and attempts to overcome previous stumbling blocks by being freely available to download and run from a smartphone platform. We demonstrate with use the EyeMusic the potential of SSDs in noisy real-world scenarios for tasks such as identifying and manipulating objects. We then discuss the neural basis of using SSDs, and conclude by discussing other steps-in-progress on the path to making their practical use more widespread.",
"title": ""
},
{
"docid": "30f4dfd49f1ba53f3a4786ae60da3186",
"text": "In order to improve the speed limitation of serial scrambler, we propose a new parallel scrambler architecture and circuit to overcome the limitation of serial scrambler. A very systematic parallel scrambler design methodology is first proposed. The critical path delay is only one D-register and one xor gate of two inputs. Thus, it is superior to other proposed circuits in high-speed applications. A new DET D-register with embedded xor operation is used as a basic circuit block of the parallel scrambler. Measurement results show the proposed parallel scrambler can operate in 40 Gbps with 16 outputs in TSMC 0.18-/spl mu/m CMOS process.",
"title": ""
},
{
"docid": "928ed1aed332846176ad52ce7cc0754c",
"text": "What is the price of anarchy when unsplittable demands are ro uted selfishly in general networks with load-dependent edge dela ys? Motivated by this question we generalize the model of [14] to the case of weighted congestion games. We show that varying demands of users crucially affect the n ature of these games, which are no longer isomorphic to exact potential gam es, even for very simple instances. Indeed we construct examples where even a single-commodity (weighted) network congestion game may have no pure Nash equ ilibrium. On the other hand, we study a special family of networks (whic h we call the l-layered networks ) and we prove that any weighted congestion game on such a network with resource delays equal to the congestions, pos sesses a pure Nash Equilibrium. We also show how to construct one in pseudo-pol yn mial time. Finally, we give a surprising answer to the question above for s uch games: The price of anarchy of any weighted l-layered network congestion game with m edges and edge delays equal to the loads, is Θ (",
"title": ""
},
{
"docid": "dde4e45fd477808d40b3b06599d361ff",
"text": "In this paper, we present the basic features of the flight control of the SkySails towing kite system. After introducing the coordinate definitions and the basic system dynamics, we introduce a novel model used for controller design and justify its main dynamics with results from system identification based on numerous sea trials. We then present the controller design, which we successfully use for operational flights for several years. Finally, we explain the generation of dynamical flight patterns.",
"title": ""
},
{
"docid": "7a005d66591330d6fdea5ffa8cb9020a",
"text": "First impressions influence the behavior of people towards a newly encountered person or a human-like agent. Apart from the physical characteristics of the encountered face, the emotional expressions displayed on it, as well as ambient information affect these impressions. In this work, we propose an approach to predict the first impressions people will have for a given video depicting a face within a context. We employ pre-trained Deep Convolutional Neural Networks to extract facial expressions, as well as ambient information. After video modeling, visual features that represent facial expression and scene are combined and fed to Kernel Extreme Learning Machine regressor. The proposed system is evaluated on the ChaLearn Challenge Dataset on First Impression Recognition, where the classification target is the ”Big Five” personality trait labels for each video. Our system achieved an accuracy of 90.94% on the sequestered test set, 0.36% points below the top system in the competition.",
"title": ""
},
{
"docid": "309e14c07a3a340f7da15abeb527231d",
"text": "The random forest algorithm, proposed by L. Breiman in 2001, has been extremely successful as a general-purpose classification and regression method. The approach, which combines several randomized decision trees and aggregates their predictions by averaging, has shown excellent performance in settings where the number of variables is much larger than the number of observations. Moreover, it is versatile enough to be applied to large-scale problems, is easily adapted to various ad-hoc learning tasks, and returns measures of variable importance. The present article reviews the most recent theoretical and methodological developments for random forests. Emphasis is placed on the mathematical forces driving the algorithm, with special attention given to the selection of parameters, the resampling mechanism, and variable importance measures. This review is intended to provide non-experts easy access to the main ideas.",
"title": ""
},
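Two of the practical points stressed by the random forest review above, the choice of the number of trees and of the number of candidate variables per split, plus the built-in variable importance measures, can be seen directly in a few lines of scikit-learn. This is a generic illustration, not code from the review.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Bagged randomized trees with out-of-bag accuracy and variable importances.
data = load_breast_cancer()
rf = RandomForestClassifier(n_estimators=200,      # number of trees
                            max_features="sqrt",   # candidate variables per split
                            oob_score=True,
                            random_state=0).fit(data.data, data.target)
print("OOB accuracy:", rf.oob_score_)
top = sorted(zip(data.feature_names, rf.feature_importances_),
             key=lambda t: -t[1])[:5]
print(top)  # five most important variables
```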
{
"docid": "37860036f1b9926a8d46d6542a6688f2",
"text": "A three-dimensional extended finite element method (X-FEM) coupled with a narrow band fast marching method (FMM) is developed and implemented in the Abaqus finite element package for curvilinear fatigue crack growth and life prediction analysis of metallic structures. Given the level set representation of arbitrary crack geometry, the narrow band FMM provides an efficient way to update the level set values of its evolving crack front. In order to capture the plasticity induced crack closure effect, an element partition and state recovery algorithm for dynamically allocated Gauss points is adopted for efficient integration of historical state variables in the near-tip plastic zone. An element-based penalty approach is also developed to model crack closure and friction. The proposed technique allows arbitrary insertion of initial cracks, independent of a base 3D model, and allows non-self-similar crack growth pattern without conforming to the existing mesh or local remeshing. Several validation examples are presented to demonstrate the extraction of accurate stress intensity factors for both static and growing cracks. Fatigue life prediction of a flawed helicopter lift frame under the ASTERIX spectrum load is presented to demonstrate the analysis procedure and capabilities of the method.",
"title": ""
},
{
"docid": "d040683d793e79732fb6c471f098a022",
"text": "In this work we address the issue of sustainable cities by focusing on one of their very central components: daily mobility. Indeed, if cities can be interpreted as spatial organizations allowing social interactions, the number of daily movements needed to reach this goal is continuously increasing. Therefore, improving urban accessibility merely results in increasing traffic and its negative externalities (congestion, accidents, pollution, noise, etc.), while eventually reducing the quality of life of people in the city. This is why several urban-transport policies are implemented in order to reduce individual mobility impacts while maintaining equitable access to the city. This challenge is however non-trivial and therefore we propose to investigate this issue from the complex systems point of view. The real spatial-temporal urban accessibility of citizens cannot be approximated just by focusing on space and implies taking into account the space-time activity patterns of individuals, in a more dynamic way. Thus, given the importance of local interactions in such a perspective, an agent based approach seems to be a relevant solution. This kind of individual based and “interactionist” approach allows us to explore the possible impact of individual behaviors on the overall dynamics of the city but also the possible impact of global measures on individual behaviors. In this paper, we give an overview of the Miro Project and then focus on the GaMiroD model design from real data analysis to model exploration tuned by transportation-oriented scenarios. Among them, we start with the the impact of a LEZ (Low Emission Zone) in the city center.",
"title": ""
},
{
"docid": "bd1a13c94d0e12b4ba9f14fef47d2564",
"text": "Denoising is the problem of removing the inherent noise from an image. The standard noise model is additive white Gaussian noise, where the observed image f is related to the underlying true image u by the degradation model f = u+ η, and η is supposed to be at each pixel independently and identically distributed as a zero-mean Gaussian random variable. Since this is an ill-posed problem, Rudin, Osher and Fatemi introduced the total variation as a regularizing term. It has proved to be quite efficient for regularizing images without smoothing the boundaries of the objects. This paper focuses on the simple description of the theory and on the implementation of Chambolle’s projection algorithm for minimizing the total variation of a grayscale image. Furthermore, we adapt the algorithm to the vectorial total variation for color images. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation. Source Code ANSI C source code to produce the same results as the demo is accessible at the IPOL web page of this article1.",
"title": ""
},
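Chambolle's projection algorithm referenced above solves the Rudin-Osher-Fatemi problem through a fixed-point iteration on a dual field p, with the denoised image recovered as u = f − λ div p. The NumPy sketch below follows the way the iteration is commonly written (step size τ ≤ 1/8 for the standard discrete gradient); it is a simplified grayscale illustration, not the article's ANSI C implementation.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with zero padding at the last row/column."""
    gx, gy = np.zeros_like(u), np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad above."""
    d = np.zeros_like(px)
    d[0, :] += px[0, :]; d[1:-1, :] += px[1:-1, :] - px[:-2, :]; d[-1, :] -= px[-2, :]
    d[:, 0] += py[:, 0]; d[:, 1:-1] += py[:, 1:-1] - py[:, :-2]; d[:, -1] -= py[:, -2]
    return d

def chambolle_tv(f, lam=0.1, tau=0.125, n_iter=100):
    """ROF denoising of a grayscale image via Chambolle's dual projection iteration."""
    px, py = np.zeros_like(f), np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / (1.0 + tau * norm)
        py = (py + tau * gy) / (1.0 + tau * norm)
    return f - lam * div(px, py)

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = chambolle_tv(noisy, lam=0.1)
```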
{
"docid": "2dd9bb2536fdc5e040544d09fe3dd4fa",
"text": "Low 1/f noise, low-dropout (LDO) regulators are becoming critical for the supply regulation of deep-submicron analog baseband and RF system-on-chip designs. A low-noise, high accuracy LDO regulator (LN-LDO) utilizing a chopper stabilized error amplifier is presented. In order to achieve fast response during load transients, a current-mode feedback amplifier (CFA) is designed as a second stage driving the regulation FET. In order to reduce clock feed-through and 1/f noise accumulation at the chopping frequency, a first-order digital SigmaDelta noise-shaper is used for chopping clock spectral spreading. With up to 1 MHz noise-shaped modulation clock, the LN-LDO achieves a noise spectral density of 32 nV/radic(Hz) and a PSR of 38 dB at 100 kHz. The proposed LDO is shown to reduce the phase noise of an integrated 32 MHz temperature compensated crystal oscillator (TCXO) at 10 kHz offset by 15 dB. Due to reduced 1/f noise requirements, the error amplifier silicon area is reduced by 75%, and the overall regulator area is reduced by 50% with respect to an equivalent noise static regulator. The current-mode feedback second stage buffer reduces regulator settling time by 60% in comparison to an equivalent power consumption voltage mode buffer, achieving 0.6 mus settling time for a 25-mA load step. The LN-LDO is designed and fabricated on a 0.25 mum CMOS process with five layers of metal, occupying 0.88 mm2.",
"title": ""
},
{
"docid": "8e23dc265f4d48caae7a333db72d887e",
"text": "We introduce a new mechanism for rooting trust in a cloud computing environment called the Trusted Virtual Environment Module (TVEM). The TVEM helps solve the core security challenge of cloud computing by enabling parties to establish trust relationships where an information owner creates and runs a virtual environment on a platform owned by a separate service provider. The TVEM is a software appliance that provides enhanced features for cloud virtual environments over existing Trusted Platform Module virtualization techniques, which includes an improved application program interface, cryptographic algorithm flexibility, and a configurable modular architecture. We define a unique Trusted Environment Key that combines trust from the information owner and the service provider to create a dual root of trust for the TVEM that is distinct for every virtual environment and separate from the platform’s trust. This paper presents the requirements, design, and architecture of our approach.",
"title": ""
},
{
"docid": "5e7b7df188ab7983a7e364c50926c58c",
"text": "Dopamine-β-hydroxylase (DBH, EC 1.14.17.1) is an enzyme with implications in various neuropsychiatric and cardiovascular diseases and is a known drug target. There is a dearth of cost effective and fast method for estimation of activity of this enzyme. A sensitive UHPLC based method for the estimation of DBH activity in human sera samples based on separation of substrate tyramine from the product octopamine in 3 min is described here. In this newly developed protocol, a Solid Phase Extraction (SPE) sample purification step prior to LC separation, selectively removes interferences from the reaction cocktail with almost no additional burden on analyte recovery. The response was found to be linear with an r2 = 0.999. The coefficient of variation for assay precision was < 10% and recovery > 90%. As a proof of concept, DBH activity in sera from healthy human volunteers (n = 60) and schizophrenia subjects (n = 60) were successfully determined using this method. There was a significant decrease in sera DBH activity in subjects affected by schizophrenia (p < 0.05) as compared to healthy volunteers. This novel assay employing SPE to separate octopamine and tyramine from the cocktail matrix may have implications for categorising subjects into various risk groups for Schizophrenia, Parkinson’s disease as well as in high throughput screening of inhibitors.",
"title": ""
},
{
"docid": "58d2f5d181095fc59eaf9c7aa58405b0",
"text": "Principle objective of Image enhancement is to process an image so that result is more suitable than original image for specific application. Digital image enhancement techniques provide a multitude of choices for improving the visual quality of images. A frequency domain smoothingsharpening technique is proposed and its impact is assessed to beneficially enhance mammogram images. This technique aims to gain the advantages of enhance and sharpening process that aims to highlight sudden changes in the image intensity, it is usually applied to remove random noise from digital images. The already developed technique also eliminates the drawbacks of each of the two sharpening and smoothing techniques resulting from their individual application in image processing field. The selection of parameters is almost invariant of the type of background tissues and severity of the abnormality, giving significantly improved results even for denser mammographic images. The proposed technique is tested breast X-ray mammograms. The simulated results show that the high potential to advantageously enhance the image contrast hence giving extra aid to radiologists to detect and classify mammograms of breast cancer. Keywords— Fourier transform, Gabor filter, Image, enhancement, Mammograms, Segmentation",
"title": ""
},
{
"docid": "d7a5eedd87637a266293595a6f2b924f",
"text": "Regular Expression (RE) matching has important applications in the areas of XML content distribution and network security. In this paper, we present the end-to-end design of a high performance RE matching system. Our system combines the processing efficiency of Deterministic Finite Automata (DFA) with the space efficiency of Non-deterministic Finite Automata (NFA) to scale to hundreds of REs. In experiments with real-life RE data on data streams, we found that a bulk of the DFA transitions are concentrated around a few DFA states. We exploit this fact to cache only the frequent core of each DFA in memory as opposed to the entire DFA (which may be exponential in size). Further, we cluster REs such that REs whose interactions cause an exponential increase in the number of states are assigned to separate groups -- this helps to improve cache hits by controlling the overall DFA size.\n To the best of our knowledge, ours is the first end-to-end system capable of matching REs at high speeds and in their full generality. Through a clever combination of RE grouping, and static and dynamic caching, it is able to perform RE matching at high speeds, even in the presence of limited memory. Through experiments with real-life data sets, we show that our RE matching system convincingly outperforms a state-of-the-art Network Intrusion Detection tool with support for efficient RE matching.",
"title": ""
},
{
"docid": "6b78a4b493e67dc367710a0cbd9e313b",
"text": "The identification of glandular tissue in breast X-rays (mammograms) is important both in assessing asymmetry between left and right breasts, and in estimating the radiation risk associated with mammographic screening. The appearance of glandular tissue in mammograms is highly variable, ranging from sparse streaks to dense blobs. Fatty regions are generally smooth and dark. Texture analysis provides a flexible approach to discriminating between glandular and fatty regions. We have performed a series of experiments investigating the use of granulometry and texture energy to classify breast tissue. Results of automatic classifications have been compared with a consensus annotation provided by two expert breast radiologists. On a set of 40 mammograms, a correct classification rate of 80% has been achieved using texture energy analysis.",
"title": ""
},
{
"docid": "3131a4b458e88b64271b05f5a4be1654",
"text": "They help identify and predict individual, as well as aggregate, behavior, as illustrated by four application domains: direct mail, retail, automobile insurance, and health care.",
"title": ""
},
{
"docid": "d53b8e8ad3365498e0036044c0b9d51e",
"text": "With the rise in global energy demand and environmental concerns about the use of fossil fuels, the need for rapid development of alternative fuels from sustainable, non-food sources is now well acknowledged. The effective utilization of low-cost high-volume agricultural and forest biomass for the production of transportation fuels and bio-based materials will play a vital role in addressing this concern [1]. The processing of lignocellulosic biomass, especially from mixed agricultural and forest sources with varying composition, is currently significantly more challenging than the bioconversion of corn starch or cane sugar to ethanol [1,2]. This is due to the inherent recalcitrance of lignocellulosic biomass to enzymatic and microbial deconstruction, imparted by the partly crystalline nature of cellulose and its close association with hemicellulose and lignin in the plant cell wall [2,3]. Pretreatments that convert raw lignocellulosic biomass to a form amenable to enzymatic degradation are therefore an integral step in the production of bioethanol from this material [4]. Chemical or thermochemical pretreatments act to reduce biomass recalcitrance in various ways. These include hemicellulose removal or degradation, lignin modification and/or delignification, reduction in crystallinity and degree of polymerization of cellulose, and increasing pore volume. Biomass pretreatments are an active focus of industrial and academic research efforts, and various strategies have been developed. Among commonly studied pretreatments, organosolv pretreatment, in which an aqueous organic solvent mixture is used as the pretreatment medium, results in the fractionation of the major biomass components, cellulose, lignin, and hemicellulose into three process streams [5,6]. Cellulose and lignin are recovered as separate solid streams, while hemicelluloses and sugar degradation products such as furfural and hydroxymethylfurfural (HMF) are released as a water-soluble fraction. The combination of ethanol as the solvent and",
"title": ""
},
{
"docid": "28fe178710bfa6487a7919312a854f7e",
"text": "This paper investigates the maximal channel coding rate achievable at a given blocklength and error probability. For general classes of channels new achievability and converse bounds are given, which are tighter than existing bounds for wide ranges of parameters of interest, and lead to tight approximations of the maximal achievable rate for blocklengths n as short as 100. It is also shown analytically that the maximal rate achievable with error probability ¿ isclosely approximated by C - ¿(V/n) Q-1(¿) where C is the capacity, V is a characteristic of the channel referred to as channel dispersion , and Q is the complementary Gaussian cumulative distribution function.",
"title": ""
},
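The approximation quoted above, R ≈ C − √(V/n)·Q^{−1}(ε), is straightforward to evaluate once a channel's capacity and dispersion are known. The sketch below does this for a binary symmetric channel, using the closed-form expressions for its capacity and dispersion that are standard in the finite-blocklength literature; those formulas are quoted as assumptions, not taken from the abstract itself.

```python
import numpy as np
from scipy.stats import norm

def bsc_normal_approx(p, n, eps):
    """Normal approximation C - sqrt(V/n) * Qinv(eps) for a BSC(p), in bits per use.

    Assumed closed forms: capacity C = 1 - h(p) and dispersion
    V = p(1-p) * log2((1-p)/p)**2, with h the binary entropy function.
    """
    h = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    C = 1.0 - h
    V = p * (1 - p) * np.log2((1 - p) / p) ** 2
    return C - np.sqrt(V / n) * norm.isf(eps)   # norm.isf is Q^{-1}

print(bsc_normal_approx(p=0.11, n=1000, eps=1e-3))  # noticeably below C = 1 - h(0.11)
```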
{
"docid": "ffadf882ac55d9cb06b77b3ce9a6ad8c",
"text": "Three experimental techniques based on automatic swept-frequency network and impedance analysers were used to measure the dielectric properties of tissue in the frequency range 10 Hz to 20 GHz. The technique used in conjunction with the impedance analyser is described. Results are given for a number of human and animal tissues, at body temperature, across the frequency range, demonstrating that good agreement was achieved between measurements using the three pieces of equipment. Moreover, the measured values fall well within the body of corresponding literature data.",
"title": ""
}
] |
scidocsrr
|
480a1efe0c46e913adda4d4a711420a0
|
Novel Polarization-Reconfigurable Converter Based on Multilayer Frequency-Selective Surfaces
|
[
{
"docid": "ce32b34898427802abd4cc9c99eac0bc",
"text": "A circular polarizer is a single layer or multi-layer structure that converts linearly polarized waves into circularly polarized ones and vice versa. In this communication, a simple method based on transmission line circuit theory is proposed to model and design circular polarizers. This technique is more flexible than those previously presented in the way that it permits to design polarizers with the desired spacing between layers, while obtaining surfaces that may be easier to fabricate and less sensitive to fabrication errors. As an illustrating example, a modified version of the meander-line polarizer being twice as thin as its conventional counterpart is designed. Then, both polarizers are fabricated and measured. Results are shown and compared for normal and oblique incidence angles in the planes φ = 0° and φ = 90°.",
"title": ""
}
] |
[
{
"docid": "8910a81438e6487da3856ea6b43dcc0e",
"text": "This paper describes a computer architecture, Spatial Computation (SC), which is based on the translation of high-level language programs directly into hardware structures. SC program implementations are completely distributed, with no centralized control. SC circuits are optimized for wires at the expense of computation units.In this paper we investigate a particular implementation of SC: ASH (Application-Specific Hardware). Under the assumption that computation is cheaper than communication, ASH replicates computation units to simplify interconnect, building a system which uses very simple, completely dedicated communication channels. As a consequence, communication on the datapath never requires arbitration; the only arbitration required is for accessing memory. ASH relies on very simple hardware primitives, using no associative structures, no multiported register files, no scheduling logic, no broadcast, and no clocks. As a consequence, ASH hardware is fast and extremely power efficient.In this work we demonstrate three features of ASH: (1) that such architectures can be built by automatic compilation of C programs; (2) that distributed computation is in some respects fundamentally different from monolithic superscalar processors; and (3) that ASIC implementations of ASH use three orders of magnitude less energy compared to high-end superscalar processors, while being on average only 33% slower in performance (3.5x worst-case).",
"title": ""
},
{
"docid": "120452d49d476366abcb52b86d8110b5",
"text": "Many companies like credit card, insurance, bank, retail industry require direct marketing. Data mining can help those institutes to set marketing goal. Data mining techniques have good prospects in their target audiences and improve the likelihood of response. In this work we have investigated two data mining techniques: the Naïve Bayes and the C4.5 decision tree algorithms. The goal of this work is to predict whether a client will subscribe a term deposit. We also made comparative study of performance of those two algorithms. Publicly available UCI data is used to train and test the performance of the algorithms. Besides, we extract actionable knowledge from decision tree that focuses to take interesting and important decision in business area.",
"title": ""
},
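The comparison described above (Naïve Bayes vs. a C4.5 decision tree for predicting term-deposit subscription) can be prototyped in a few lines. The sketch below uses a synthetic, class-imbalanced table as a stand-in for the UCI bank-marketing data and scikit-learn's CART tree with an entropy criterion as a rough proxy for C4.5, so it mirrors the methodology rather than the paper's exact setup.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the bank-marketing table; the real study used UCI data.
X, y = make_classification(n_samples=2000, n_features=15, n_informative=6,
                           weights=[0.88, 0.12], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("Naive Bayes", GaussianNB()),
                    ("Decision tree (CART with entropy, C4.5-like)",
                     DecisionTreeClassifier(criterion="entropy", max_depth=6))]:
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```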
{
"docid": "aa80366addac8af9cc5285f98663b9b6",
"text": "Automatic detection of sentence errors is an important NLP task and is valuable to assist foreign language learners. In this paper, we investigate the problem of word ordering errors in Chinese sentences and propose classifiers to detect this type of errors. Word n-gram features in Google Chinese Web 5-gram corpus and ClueWeb09 corpus, and POS features in the Chinese POStagged ClueWeb09 corpus are adopted in the classifiers. The experimental results show that integrating syntactic features, web corpus features and perturbation features are useful for word ordering error detection, and the proposed classifier achieves 71.64% accuracy in the experimental datasets. 協助非中文母語學習者偵測中文句子語序錯誤 自動偵測句子錯誤是自然語言處理研究一項重要議題,對於協助外語學習者很有價值。在 這篇論文中,我們研究中文句子語序錯誤的問題,並提出分類器來偵測這種類型的錯誤。 在分類器中我們使用的特徵包括:Google 中文網路 5-gram 語料庫、與 ClueWeb09 語料庫 的中文詞彙 n-grams及中文詞性標注特徵。實驗結果顯示,整合語法特徵、網路語料庫特 徵、及擾動特徵對偵測中文語序錯誤有幫助。在實驗所用的資料集中,合併使用這些特徵 所得的分類器效能可達 71.64%。",
"title": ""
},
{
"docid": "90033efd960bf121e7041c9b3cd91cbd",
"text": "In this paper, we propose a novel framework for integrating geometrical measurements of monocular visual simultaneous localization and mapping (SLAM) and depth prediction using a convolutional neural network (CNN). In our framework, SLAM-measured sparse features and CNN-predicted dense depth maps are fused to obtain a more accurate dense 3D reconstruction including scale. We continuously update an initial 3D mesh by integrating accurately tracked sparse features points. Compared to prior work on integrating SLAM and CNN estimates [26], there are two main differences: Using a 3D mesh representation allows as-rigid-as-possible update transformations. We further propose a system architecture suitable for mobile devices, where feature tracking and CNN-based depth prediction modules are separated, and only the former is run on the device. We evaluate the framework by comparing the 3D reconstruction result with 3D measurements obtained using an RGBD sensor, showing a reduction in the mean residual error of 38% compared to CNN-based depth map prediction alone.",
"title": ""
},
{
"docid": "54c66f2021f055d3fb09f733ab1c2c39",
"text": "In December 2013, sixteen teams from around the world gathered at Homestead Speedway near Miami, FL to participate in the DARPA Robotics Challenge (DRC) Trials, an aggressive robotics competition, partly inspired by the aftermath of the Fukushima Daiichi reactor incident. While the focus of the DRC Trials is to advance robotics for use in austere and inhospitable environments, the objectives of the DRC are to progress the areas of supervised autonomy and mobile manipulation for everyday robotics. NASA’s Johnson Space Center led a team comprised of numerous partners to develop Valkyrie, NASA’s first bipedal humanoid robot. Valkyrie is a 44 degree-of-freedom, series elastic actuator-based robot that draws upon over 18 years of humanoid robotics design heritage. Valkyrie’s application intent is aimed at not only responding to events like Fukushima, but also advancing human spaceflight endeavors in extraterrestrial planetary settings. This paper presents a brief system overview, detailing Valkyrie’s mechatronic subsystems, followed by a summarization of the inverse kinematics-based walking algorithm employed at the Trials. Next, the software and control architectures are highlighted along with a description of the operator interface tools. Finally, some closing remarks are given about the competition and a vision of future work is provided.",
"title": ""
},
{
"docid": "411ac34baae4a8f5358dfdad6df8e800",
"text": "Bluetooth plays a major role in expanding global spread of wireless technology. This predominantly happens through Bluetooth enabled mobile phones, which cover almost 60% of the Bluetooth market. Although Bluetooth mobile phones are equipped with built-in security modes and policies, intruders compromise mobile phones through existing security vulnerabilities and limitations. Information stored in mobile phones, whether it is personal or corporate, is significant to mobile phone users. Hence, the need to protect information, as well as alert mobile phone users of their incoming connections, is vital. An additional security mechanism was therefore conceptualized, at the mobile phone's user level, which is essential in improving the security. Bluetooth Logging Agent (BLA) is a mechanism that has been developed for this purpose. It alleviates the current security issues by making the users aware of their incoming Bluetooth connections and gives them an option to either accept or reject these connections. Besides this, the intrusion detection and verification module uses databases and rules to authenticate and verify all connections. BLA when compared to the existing security solutions is novel and unique in that it is equipped with a Bluetooth message logging module. This logging module reduces the security risks by monitoring the Bluetooth communication between the mobile phone and the remote device.",
"title": ""
},
{
"docid": "a3e3ccb4dad5777196dcd3749295161e",
"text": "There are increasing volumes of spatio-temporal data from various sources such as sensors, social networks and urban environments. Analysis of such data requires flexible exploration and visualizations, but queries that span multiple geographical regions over multiple time slices are expensive to compute, making it challenging to attain interactive speeds for large data sets. In this paper, we propose a new indexing scheme that makes use of modern GPUs to efficiently support spatio-temporal queries over point data. The index covers multiple dimensions, thus allowing simultaneous filtering of spatial and temporal attributes. It uses a block-based storage structure to speed up OLAP-type queries over historical data, and supports query processing over in-memory and disk-resident data. We present different query execution algorithms that we designed to allow the index to be used in different hardware configurations, including CPU-only, GPU-only, and a combination of CPU and GPU. To demonstrate the effectiveness of our techniques, we implemented them on top of MongoDB and performed an experimental evaluation using two real-world data sets: New York City's (NYC) taxi data - consisting of over 868 million taxi trips spanning a period of five years, and Twitter posts - over 1.1 billion tweets collected over a period of 14 months. Our results show that our GPU-based index obtains interactive, sub-second response times for queries over large data sets and leads to at least two orders of magnitude speedup over spatial indexes implemented in existing open-source and commercial database systems.",
"title": ""
},
{
"docid": "b5a5c48f998f77a56821d03c7f8ad64e",
"text": "A microwave sensor having features useful for the noninvasive determination of blood glucose levels is described. The sensor output is an amplitude only measurement of the standing wave versus frequency sampled at a fixed point on an open-terminated spiral-shaped microstrip line. Test subjects press their thumb against the line and apply contact pressure sufficient to fall within a narrow pressure range. Data are reported for test subjects whose blood glucose is independently measured using a commercial glucometer.",
"title": ""
},
{
"docid": "f44bfa0a366fb50a571e6df9f4c3f91d",
"text": "BACKGROUND\nIn silico predictive models have proved to be valuable for the optimisation of compound potency, selectivity and safety profiles in the drug discovery process.\n\n\nRESULTS\ncamb is an R package that provides an environment for the rapid generation of quantitative Structure-Property and Structure-Activity models for small molecules (including QSAR, QSPR, QSAM, PCM) and is aimed at both advanced and beginner R users. camb's capabilities include the standardisation of chemical structure representation, computation of 905 one-dimensional and 14 fingerprint type descriptors for small molecules, 8 types of amino acid descriptors, 13 whole protein sequence descriptors, filtering methods for feature selection, generation of predictive models (using an interface to the R package caret), as well as techniques to create model ensembles using techniques from the R package caretEnsemble). Results can be visualised through high-quality, customisable plots (R package ggplot2).\n\n\nCONCLUSIONS\nOverall, camb constitutes an open-source framework to perform the following steps: (1) compound standardisation, (2) molecular and protein descriptor calculation, (3) descriptor pre-processing and model training, visualisation and validation, and (4) bioactivity/property prediction for new molecules. camb aims to speed model generation, in order to provide reproducibility and tests of robustness. QSPR and proteochemometric case studies are included which demonstrate camb's application.Graphical abstractFrom compounds and data to models: a complete model building workflow in one package.",
"title": ""
},
{
"docid": "24e73ff615bb27e3f8f16746f496b689",
"text": "A physically-based computational technique was investigated which is intended to estimate an initial guess for complex values of the wavenumber of a disturbance leading to the solution of the fourth-order Orr–Sommerfeld (O–S) equation. The complex wavenumbers, or eigenvalues, were associated with the stability characteristics of a semi-infinite shear flow represented by a hyperbolic-tangent function. This study was devoted to the examination of unstable flow assuming a spatially growing disturbance and is predicated on the fact that flow instability is correlated with elevated levels of perturbation kinetic energy per unit mass. A MATLAB computer program was developed such that the computational domain was selected to be in quadrant IV, where the real part of the wavenumber is positive and the imaginary part is negative to establish the conditions for unstable flow. For a given Reynolds number and disturbance wave speed, the perturbation kinetic energy per unit mass was computed at various node points in the selected subdomain of the complex plane. The initial guess for the complex wavenumber to start the solution process was assumed to be associated with the highest calculated perturbation kinetic energy per unit mass. Once the initial guess had been approximated, it was used to obtain the solution to the O–S equation by performing a Runge–Kutta integration scheme that computationally marched from the far field region in the shear layer down to the lower solid boundary. Results compared favorably with the stability characteristics obtained from an earlier study for semi-infinite Blasius flow over a flat boundary. © 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b2120881f15885cdb610d231f514bc9f",
"text": "In this work we do an analysis of Bitcoin’s price and volatility. Particularly, we look at Granger-causation relationships among the pairs of time series: Bitcoin price and the S&P 500, Bitcoin price and the VIX, Bitcoin realized volatility and the S&P 500, and Bitcoin realized volatility and the VIX. Additionally, we explored the relationship between Bitcoin weekly price and public enthusiasm for Blockchain, the technology behind Bitcoin, as measured by Google Trends data. we explore the Granger-causality relationships between Bitcoin weekly price and Blockchain Google Trend time series. We conclude that there exists a bidirectional Granger-causality relationship between Bitcoin realized volatility and the VIX at the 5% significance level, that we cannot reject the hypothesis that Bitcoin weekly price do not Granger-causes Blockchain trends and that we cannot reject the hypothesis that Bitcoin realized volatility do not Granger-causes S&P 500.",
"title": ""
},
{
"docid": "c349eccb9a6d5b13289e2b24b1003cce",
"text": "A new hybrid model which combines wavelets and Artificial Neural Network (ANN) called wavelet neural network (WNN) model was proposed in the current study and applied for time series modeling of river flow. The time series of daily river flow of the Malaprabha River basin (Karnataka state, India) were analyzed by the WNN model. The observed time series are decomposed into sub-series using discrete wavelet transform and then appropriate sub-series is used as inputs to the neural network for forecasting hydrological variables. The hybrid model (WNN) was compared with the standard ANN and AR models. The WNN model was able to provide a good fit with the observed data, especially the peak values during the testing period. The benchmark results from WNN model applications showed that the hybrid model produced better results in estimating the hydrograph properties than the latter models (ANN and AR).",
"title": ""
},
{
"docid": "8bcbb5d7ae6c57d60ff34abc1259349c",
"text": "Habitat remnants in urbanized areas typically conserve biodiversity and serve the recreation and urban open-space needs of human populations. Nevertheless, these goals can be in conflict if human activity negatively affects wildlife. Hence, when considering habitat remnants as conservation refuges it is crucial to understand how human activities and land uses affect wildlife use of those and adjacent areas. We used tracking data (animal tracks and den or bed sites) on 10 animal species and information on human activity and environmental factors associated with anthropogenic disturbance in 12 habitat fragments across San Diego County, California, to examine the relationships among habitat fragment characteristics, human activity, and wildlife presence. There were no significant correlations of species presence and abundance with percent plant cover for all species or with different land-use intensities for all species, except the opossum (Didelphis virginiana), which preferred areas with intensive development. Woodrats (Neotoma spp.) and cougars (Puma concolor) were associated significantly and positively and significantly and negatively, respectively, with the presence and prominence of utilities. Woodrats were also negatively associated with the presence of horses. Raccoons (Procyon lotor) and coyotes (Canis latrans) were associated significantly and negatively and significantly and positively, respectively, with plant bulk and permanence. Cougars and gray foxes (Urocyon cinereoargenteus) were negatively associated with the presence of roads. Roadrunners (Geococcyx californianus) were positively associated with litter. The only species that had no significant correlations with any of the environmental variables were black-tailed jackrabbits (Lepus californicus) and mule deer (Odocoileus hemionus). Bobcat tracks were observed more often than gray foxes in the study area and bobcats correlated significantly only with water availability, contrasting with results from other studies. Our results appear to indicate that maintenance of habitat fragments in urban areas is of conservation benefit to some animal species, despite human activity and disturbance, as long as the fragments are large.",
"title": ""
},
{
"docid": "14a3e0f52760802ae74a21cd0cb66507",
"text": "Credit scoring has been regarded as a core appraisal tool of different institutions during the last few decades, and has been widely investigated in different areas, such as finance and accounting. Different scoring techniques are being used in areas of classification and prediction, where statistical techniques have conventionally been used. Both sophisticated and traditional techniques, as well as performance evaluation criteria are investigated in the literature. The principal aim of this paper is to carry out a comprehensive review of 214 articles/books/theses that involve credit scoring applications in various areas, in general, but primarily in finance and banking, in particular. This paper also aims to investigate how credit scoring has developed in importance, and to identify the key determinants in the construction of a scoring model, by means of a widespread review of different statistical techniques and performance evaluation criteria. Our review of literature revealed that there is no overall best statistical technique used in building scoring models and the best technique for all circumstances does not yet exist. Also, the applications of the scoring methodologies have been widely extended to include different areas, and this subsequently can help decision makers, particularly in banking, to predict their clients‟ behaviour. Finally, this paper also suggests a number of directions for future research.",
"title": ""
},
{
"docid": "00cdaa724f262211919d4c7fc5bb0442",
"text": "With Tor being a popular anonymity network, many attacks have been proposed to break its anonymity or leak information of a private communication on Tor. However, guaranteeing complete privacy in the face of an adversary on Tor is especially difficult because Tor relays are under complete control of world-wide volunteers. Currently, one can gain private information, such as circuit identifiers and hidden service identifiers, by running Tor relays and can even modify their behaviors with malicious intent. This paper presents a practical approach to effectively enhancing the security and privacy of Tor by utilizing Intel SGX, a commodity trusted execution environment. We present a design and implementation of Tor, called SGX-Tor, that prevents code modification and limits the information exposed to untrusted parties. We demonstrate that our approach is practical and effectively reduces the power of an adversary to a traditional network-level adversary. Finally, SGX-Tor incurs moderate performance overhead; the end-to-end latency and throughput overheads for HTTP connections are 3.9% and 11.9%, respectively.",
"title": ""
},
{
"docid": "8a293b95b931f4f72fe644fdfe30564a",
"text": "Today, the concept of brain connectivity plays a central role in the neuroscience. While functional connectivity is defined as the temporal coherence between the activities of different brain areas, the effective connectivity is defined as the simplest brain circuit that would produce the same temporal relationship as observed experimentally between cortical sites. The most used method to estimate effective connectivity in neuroscience is the structural equation modeling (SEM), typically used on data related to the brain hemodynamic behavior. However, the use of hemodynamic measures limits the temporal resolution on which the brain process can be followed. The present research proposes the use of the SEM approach on the cortical waveforms estimated from the high-resolution EEG data, which exhibits a good spatial resolution and a higher temporal resolution than hemodynamic measures. We performed a simulation study, in which different main factors were systematically manipulated in the generation of test signals, and the errors in the estimated connectivity were evaluated by the analysis of variance (ANOVA). Such factors were the signal-to-noise ratio and the duration of the simulated cortical activity. Since SEM technique is based on the use of a model formulated on the basis of anatomical and physiological constraints, different experimental conditions were analyzed, in order to evaluate the effect of errors made in the a priori model formulation on its performances. The feasibility of the proposed approach has been shown in a human study using high-resolution EEG recordings related to finger tapping movements.",
"title": ""
},
{
"docid": "06b43bbf61791a76c3455cb4d591d71e",
"text": "We present a feature-based framework that combines spatial feature clustering, guided sampling for pose generation, and model updating for 3D object recognition and pose estimation. Existing methods fails in case of repeated patterns or multiple instances of the same object, as they rely only on feature discriminability for matching and on the estimator capabilities for outlier rejection. We propose to spatially separate the features before matching to create smaller clusters containing the object. Then, hypothesis generation is guided by exploiting cues collected offand on-line, such as feature repeatability, 3D geometric constraints, and feature occurrence frequency. Finally, while previous methods overload the model with synthetic features for wide baseline matching, we claim that continuously updating the model representation is a lighter yet reliable strategy. The evaluation of our algorithm on challenging video sequences shows the improvement provided by our contribution.",
"title": ""
},
{
"docid": "63409b826dbe54bb3562fc7e313df2f0",
"text": "After more than ten years of experience with applications of fieldbus in automation technology, the industry has started to develop and adopt Real-Time Ethernet (RTE) solutions. There already exists now more than ten proposed solutions. International Electrotechnical Commission standards are trying to give a guideline and selection criteria based on recognized indicators for the user.",
"title": ""
},
{
"docid": "6f68ed77668f21696051947a8ccc4f56",
"text": "Most discussions of computer security focus on control of disclosure. In Particular, the U.S. Department of Defense has developed a set of criteria for computer mechanisms to provide control of classified information. However, for that core of data processing concerned with business operation and control of assets, the primary security concern is data integrity. This paper presents a policy for data integrity based on commercial data processing practices, and compares the mechanisms needed for this policy with the mechanisms needed to enforce the lattice model for information security. We argue that a lattice model is not sufficient to characterize integrity policies, and that distinct mechanisms are needed to Control disclosure and to provide integrity.",
"title": ""
},
{
"docid": "03bb2e60564a45f8a18213393759901f",
"text": "Surgical refinement of the wide nasal tip is challenging. Achieving an attractive, slender, and functional tip complex without destabilizing the lower nasal sidewall or deforming the contracture-prone alar rim is a formidable task. Excisional refinement techniques that rely upon incremental weakening of wide lower lateral cartilages (LLC) often destabilize the tip complex and distort tip contour. Initial destabilization of the LLC is usually further exacerbated by \"shrink-wrap\" contracture, which often leads to progressive cephalic retraction of the alar margin. The result is a misshapen tip complex accentuated by a conspicuous and highly objectionable nostril deformity that is often very difficult to treat. The \"articulated\" alar rim graft (AARG) is a modification of the conventional rim graft that improves treatment of secondary alar rim deformities, including postsurgical alar retraction (PSAR). Unlike the conventional alar rim graft, the AARG is sutured to the underlying tip complex to provide direct stationary support to the alar margin, thereby enhancing graft efficacy. When used in conjunction with a well-designed septal extension graft (SEG) to stabilize the central tip complex, lateral crural tensioning (LCT) to tighten the lower nasal sidewalls and minimize soft-tissue laxity, and lysis of scar adhesions to unfurl the retracted and scarred nasal lining, the AARG can eliminate PSAR in a majority of patients. The AARG is also highly effective for prophylaxis against alar retraction and in the treatment of most other contour abnormalities involving the alar margin. Moreover, the AARG requires comparatively little graft material, and complications are rare. We present a retrospective series of 47 consecutive patients treated with the triad of AARG, SEG, and LCT for prophylaxis and/or treatment of alar rim deformities. Outcomes were favorable in nearly all patients, and no complications were observed. We conclude the AARG is a simple and effective method for avoiding and correcting most alar rim deformities.",
"title": ""
}
] |
scidocsrr
|
63a564be39066e2720afc8e509dd6ed5
|
Compressed Sensing Image Reconstruction Via Recursive Spatially Adaptive Filtering
|
[
{
"docid": "7db9cf29dd676fa3df5a2e0e95842b6e",
"text": "We present a novel approach to still image denoising based on e ective filtering in 3D transform domain by combining sliding-window transform processing with block-matching. We process blocks within the image in a sliding manner and utilize the block-matching concept by searching for blocks which are similar to the currently processed one. The matched blocks are stacked together to form a 3D array and due to the similarity between them, the data in the array exhibit high level of correlation. We exploit this correlation by applying a 3D decorrelating unitary transform and e ectively attenuate the noise by shrinkage of the transform coe cients. The subsequent inverse 3D transform yields estimates of all matched blocks. After repeating this procedure for all image blocks in sliding manner, the final estimate is computed as weighed average of all overlapping blockestimates. A fast and e cient algorithm implementing the proposed approach is developed. The experimental results show that the proposed method delivers state-of-art denoising performance, both in terms of objective criteria and visual quality.",
"title": ""
},
{
"docid": "0771cd99e6ad19deb30b5c70b5c98183",
"text": "We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2D image fragments (e.g., blocks) into 3D data arrays which we call \"groups.\" Collaborative Altering is a special procedure developed to deal with these 3D groups. We realize it using the three successive steps: 3D transformation of a group, shrinkage of the transform spectrum, and inverse 3D transformation. The result is a 3D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.",
"title": ""
}
] |
[
{
"docid": "4abceedb1f6c735a8bc91bc811ce4438",
"text": "The study of school bullying has recently assumed an international dimension, but is faced with difficulties in finding terms in different languages to correspond to the English word bullying. To investigate the meanings given to various terms, a set of 25 stick-figure cartoons was devised, covering a range of social situations between peers. These cartoons were shown to samples of 8- and 14-year-old pupils (N = 1,245; n = 604 at 8 years, n = 641 at 14 years) in schools in 14 different countries, who judged whether various native terms cognate to bullying, applied to them. Terms from 10 Indo-European languages and three Asian languages were sampled. Multidimensional scaling showed that 8-year-olds primarily discriminated nonaggressive and aggressive cartoon situations; however, 14-year-olds discriminated fighting from physical bullying, and also discriminated verbal bullying and social exclusion. Gender differences were less appreciable than age differences. Based on the 14-year-old data, profiles of 67 words were then constructed across the five major cartoon clusters. The main types of terms used fell into six groups: bullying (of all kinds), verbal plus physical bullying, solely verbal bullying, social exclusion, solely physical aggression, and mainly physical aggression. The findings are discussed in relation to developmental trends in how children understand bullying, the inferences that can be made from cross-national studies, and the design of such studies.",
"title": ""
},
{
"docid": "e179fb33cddc0af76fdca117e20f6e30",
"text": "Face Recognition is widely used in security systems, such as surveillance, gate control systems, and guard robots, due to their user friendliness and convenience compared to other biometric approaches. Secure face recognition systems require advanced technology for face liveness detection, which can identify whether a face belongs to a real client or a portrait. However, with the development of display devices and technology, the tools and skills for carrying out spoofing attacks with images and videos have gradually evolved. In this paper, we compare real faces with high-definition facial videos from LED display devices, and present the changes in face recognition performance according to lighting direction.",
"title": ""
},
{
"docid": "138440edff015a10280ab87e9268ed48",
"text": "In this paper, we describe a new optical tracker algorithm for the tracking of interaction devices in virtual and augmented reality. The tracker uses invariant properties of marker patterns to efficiently identify and reconstruct the pose of these interaction devices. Since invariant properties are sensitive to noise in the 2D marker positions, an off-line training session is used to determine deviations in these properties. These deviations are taken into account when searching for the patterns once the tracker is used.",
"title": ""
},
{
"docid": "749800c4dae57eb13b5c3df9e0c302a0",
"text": "In a contemporary clinical laboratory it is very common to have to assess the agreement between two quantitative methods of measurement. The correct statistical approach to assess this degree of agreement is not obvious. Correlation and regression studies are frequently proposed. However, correlation studies the relationship between one variable and another, not the differences, and it is not recommended as a method for assessing the comparability between methods.
In 1983 Altman and Bland (B&A) proposed an alternative analysis, based on the quantification of the agreement between two quantitative measurements by studying the mean difference and constructing limits of agreement.
The B&A plot analysis is a simple way to evaluate a bias between the mean differences, and to estimate an agreement interval, within which 95% of the differences of the second method, compared to the first one, fall. Data can be analyzed both as unit differences plot and as percentage differences plot.
The B&A plot method only defines the intervals of agreements, it does not say whether those limits are acceptable or not. Acceptable limits must be defined a priori, based on clinical necessity, biological considerations or other goals.
The aim of this article is to provide guidance on the use and interpretation of Bland Altman analysis in method comparison studies.",
"title": ""
},
{
"docid": "27461d678b02fff9a1aaf5621f5b347a",
"text": "Despite the promise of technology in education, many practicing teachers face several challenges when trying to effectively integrate technology into their classroom instruction. Additionally, while national statistics cite a remarkable improvement in access to computer technology tools in schools, teacher surveys show consistent declines in the use and integration of computer technology to enhance student learning. This article reports on primary technology integration barriers that mathematics teachers identified when using technology in their classrooms. Suggestions to overcome some of these barriers are also provided.",
"title": ""
},
{
"docid": "6669f61c302d79553a3e49a4f738c933",
"text": "Imagining urban space as being comfortable or fearful is studied as an effect of people’s connections to their residential area communication infrastructure. Geographic Information System (GIS) modeling and spatial-statistical methods are used to process 215 mental maps obtained from respondents to a multilingual survey of seven ethnically marked residential communities of Los Angeles. Spatial-statistical analyses reveal that fear perceptions of Los Angeles urban space are not associated with commonly expected causes of fear, such as high crime victimization likelihood. The main source of discomfort seems to be presence of non-White and non-Asian populations. Respondents more strongly connected to television and interpersonal communication channels are relatively more fearful of these populations than those less strongly connected. Theoretical, methodological, and community-building policy implications are discussed.",
"title": ""
},
{
"docid": "ed49841947208a654f621cdeb16930fd",
"text": "The advent of more sophisticated mobile eye-tracking equipment has lead in recent years to a growth of research into the deployment of visual attention and eye movements in natural environments. Car driving (Shinoda, Hayhoe & Shrivastava, 2001), playing cricket (Land & McLeod, 2000), walking (Jovanevic-Misic & Hayhoe, 2009), and the everyday task of making a cup of tea (Land, Mennie & Rusted, 1999), all require the tight coordination of attentional, cognitive, and motor abilities. Despite the research amassed from laboratory settings (e.g. Findlay, Brown & Gilchrist, 2001; McSorley & Findlay, 2003; Mulckhuyse, Van Zoest & Theeuwes, 2008; Born, Kerzel & Theeuwes, 2011), it is evident that the mechanisms of attentional deployment differ considerably in the lab compared to the real-world (e.g. Hayhoe & Ballard, 2005; Smilek, Eastwood, Reynolds & Kingstone, 2007; Kingstone, Smilek, Eastwood, 2008; Foulsham, Walker & Kingstone, 2011). Although we know from the seminal studies of Yarbus (1967), and more recent laboratory based work (e.g. Castelhano, Mack, & Henderson, 2009), that eye movements are highly task dependent and are linked to our cognitive goals, research is yet to uncover the eye movement repertoires associated with higher level tasks we encounter on a day-to-day basis. One such avenue is decision making. Almost all decisions we make involve acquisition of visual information but decision-making is a special kind of task where the information is valued very differently in each case. For each case, the kind of information needed to complete the task might differ largely due to different preferences or goals. One piece of information might be crucial for one person but not at all interesting to another. This calls for a new set of eye tracking measures that can be used to compare one cognitive process to another without relying on exactly what is being visually attended to. A prime example of the many choices we make in everyday life is the supermarket, and this setting provides the ideal scenario to investigate the eye-movement repertoires of decision-making in the real world. This is the focus of this paper. Using Eye Tracking to Trace a Cognitive Process: Gaze Behaviour During Decision Making in a Natural Environment",
"title": ""
},
{
"docid": "0ca445eed910eacccbb9f2cc9569181b",
"text": "Nanotechnology promises new solutions for many applications in the biomedical, industrial and military fields as well as in consumer and industrial goods. The interconnection of nanoscale devices with existing communication networks and ultimately the Internet defines a new networking paradigm that is further referred to as the Internet of Nano-Things. Within this context, this paper discusses the state of the art in electromagnetic communication among nanoscale devices. An in-depth view is provided from the communication and information theoretic perspective, by highlighting the major research challenges in terms of channel modeling, information encoding and protocols for nanonetworks and the Internet of Nano-Things.",
"title": ""
},
{
"docid": "dd412b31bc6f7f18ca18a54dc5267cc3",
"text": "We propose a partial information state-based framework for collaborative dialogue and argument between agents. We employ a three-valued based nonmonotonic logic, NML3, for representing and reasoning about Partial Information States (PIS). NML3 formalizes some aspects of revisable reasoning and it is sound and complete. Within the framework of NML3, we present a formalization of some basic dialogue moves and the rules of protocols of some types of dialogue. The rules of a protocol are nonmonotonic in the sense that the set of propositions to which an agent is committed and the validity of moves vary from one move to another. The use of PIS allows an agent to expand consistently its viewpoint with some of the propositions to which another agent, involved in a dialogue, is overtly committed. A proof method for the logic NML3 has been successfully implemented as an automatic theorem prover. We show, via some examples, that the tableau method employed to implement the theorem prover allows an agent, absolute access to every stage of a proof process. This access is useful for constructive argumentation and for finding cooperative and/or informative answers.",
"title": ""
},
{
"docid": "22ad829acba8d8a0909f2b8e31c1f0c3",
"text": "Covariance matrices capture correlations that are invaluable in modeling real-life datasets. Using all d elements of the covariance (in d dimensions) is costly and could result in over-fitting; and the simple diagonal approximation can be over-restrictive. In this work, we present a new model, the Low-Rank Gaussian Mixture Model (LRGMM), for modeling data which can be extended to identifying partitions or overlapping clusters. The curse of dimensionality that arises in calculating the covariance matrices of the GMM is countered by using low-rank perturbed diagonal matrices. The efficiency is comparable to the diagonal approximation, yet one can capture correlations among the dimensions. Our experiments reveal the LRGMM to be an efficient and highly applicable tool for working with large high-dimensional datasets.",
"title": ""
},
{
"docid": "1de34a2d824c485fc32ddcab6f408de5",
"text": "In recent years, what has become known as collaborative consumption has undergone rapid expansion through peer-to-peer (P2P) platforms. In the field of tourism, a particularly notable example is that of Airbnb, a service that puts travellers in contact with hosts for the purposes of renting accommodation, either rooms or entire homes/apartments. Although Airbnb may bring benefits to cities in that it increases tourist numbers, its concentration in certain areas of heritage cities can lead to serious conflict with the local population, as a result of rising rents and processes of gentrification. This article analyses the patterns of spatial distribution of Airbnb accommodation in Barcelona, one of Europe’s major tourist cities, and compares them with the accommodation offered by hotels and the places most visited by tourists. The study makes use of new sources of geolocated Big Data, such as Airbnb listings and geolocated photographs on Panoramio. Analysis of bivariate spatial autocorrelation reveals a close spatial relationship between the accommodation offered by Airbnb and the one offered by hotels, with a marked centre-periphery pattern, although Airbnb predominates over hotels around the city’s main hotel axis and hotels predominate over Airbnb in some peripheral areas of the city. Another interesting finding is that Airbnb capitalises more on the advantages of proximity to the city’s main tourist attractions than does the hotel sector. Finally, it was possible to detect those parts of the city that have seen the greatest increase in pressure from tourism related to Airbnb’s recent expansion.",
"title": ""
},
{
"docid": "76f6a4f44af78fae2960375f8c750878",
"text": "Recently in tandem with the spread of portable devices for reading electronic books, devices for digitizing paper books, called book scanners, are developed to meet the increased demand for digitizing privately owned books. However, conventional book scanners still have complex components to mechanically turn pages and to rectify the acquired images that are inevitably distorted by the curvy book surface. Here, we present the multi-scale mechanism that turns pages electronically using electroadhesive force generated by a micro-scale structure. Its another advantage is that perspective correction of image processing is applicable to readily reconstruct the distorted images of pages. Specifically, to turn one page at a time not two pages, we employ a micro-scale structure to generate near-field electroadhesive force that decays rapidly and accordingly attracts objects within tens of micrometers. We analyze geometrical parameters of the micro-scale structure to improve the decay characteristics. We find that the decay characteristics of electroadhesive force definitely depends upon the geometrical period of the micro-scale structure, while its magnitude depends on a variety of parameters. Based on this observation, we propose a novel electrode configuration with improved decay characteristics. Dynamical stability and kinematic requirements are also examined to successfully introduce near-field electroadhesive force into our digitizing process.",
"title": ""
},
{
"docid": "e93517eb28df17dddfc63eb7141368f9",
"text": "Domain transfer learning generalizes a learning model across training data and testing data with different distributions. A general principle to tackle this problem is reducing the distribution difference between training data and testing data such that the generalization error can be bounded. Current methods typically model the sample distributions in input feature space, which depends on nonlinear feature mapping to embody the distribution discrepancy. However, this nonlinear feature space may not be optimal for the kernel-based learning machines. To this end, we propose a transfer kernel learning (TKL) approach to learn a domain-invariant kernel by directly matching source and target distributions in the reproducing kernel Hilbert space (RKHS). Specifically, we design a family of spectral kernels by extrapolating target eigensystem on source samples with Mercer's theorem. The spectral kernel minimizing the approximation error to the ground truth kernel is selected to construct domain-invariant kernel machines. Comprehensive experimental evidence on a large number of text categorization, image classification, and video event recognition datasets verifies the effectiveness and efficiency of the proposed TKL approach over several state-of-the-art methods.",
"title": ""
},
{
"docid": "aa234355d0b0493e1d8c7a04e7020781",
"text": "Cancer is associated with mutated genes, and analysis of tumour-linked genetic alterations is increasingly used for diagnostic, prognostic and treatment purposes. The genetic profile of solid tumours is currently obtained from surgical or biopsy specimens; however, the latter procedure cannot always be performed routinely owing to its invasive nature. Information acquired from a single biopsy provides a spatially and temporally limited snap-shot of a tumour and might fail to reflect its heterogeneity. Tumour cells release circulating free DNA (cfDNA) into the blood, but the majority of circulating DNA is often not of cancerous origin, and detection of cancer-associated alleles in the blood has long been impossible to achieve. Technological advances have overcome these restrictions, making it possible to identify both genetic and epigenetic aberrations. A liquid biopsy, or blood sample, can provide the genetic landscape of all cancerous lesions (primary and metastases) as well as offering the opportunity to systematically track genomic evolution. This Review will explore how tumour-associated mutations detectable in the blood can be used in the clinic after diagnosis, including the assessment of prognosis, early detection of disease recurrence, and as surrogates for traditional biopsies with the purpose of predicting response to treatments and the development of acquired resistance.",
"title": ""
},
{
"docid": "06f861c4ec87f00d9947f06f42c39d2d",
"text": "The relational database is provided from the traditional DBMS, which ensures the integrity of data and consistency of transactions. For many software applications, these are the principles of a proper DBMS. But in the last few years, witnessing the velocity of data growth and the lack of support from traditional databases for this issue, as a solution to it, the NoSQL (Not Only SQL) databases have appeared. These two kinds while being used for the same purposes (create, retrieve, update and manage data) they both have their own advantages and disadvantages over each other. The purpose of this study is to try and compare the research question of what are the pros and cons for each of these database' features and characteristics? This paper is a qualitative research, based on detailed and intensive analysis of the two database types, through use and comparison of some published materials during last few years.",
"title": ""
},
{
"docid": "9be50791156572e6e1a579952073d810",
"text": "A synthetic aperture radar (SAR) raw data simulator is an important tool for testing the system parameters and the imaging algorithms. In this paper, a scene raw data simulator based on an inverse ω-k algorithm for bistatic SAR of a translational invariant case is proposed. The differences between simulations of monostatic and bistatic SAR are also described. The algorithm proposed has high precision and can be used in long-baseline configuration and for single-pass interferometry. Implementation details are described, and plenty of simulation results are provided to validate the algorithm.",
"title": ""
},
{
"docid": "c0d2a2b5d9251bdd4fc65532abe3a152",
"text": "BACKGROUND\nTo improve health outcomes and cut health care costs, we often need to conduct prediction/classification using large clinical datasets (aka, clinical big data), for example, to identify high-risk patients for preventive interventions. Machine learning has been proposed as a key technology for doing this. Machine learning has won most data science competitions and could support many clinical activities, yet only 15% of hospitals use it for even limited purposes. Despite familiarity with data, health care researchers often lack machine learning expertise to directly use clinical big data, creating a hurdle in realizing value from their data. Health care researchers can work with data scientists with deep machine learning knowledge, but it takes time and effort for both parties to communicate effectively. Facing a shortage in the United States of data scientists and hiring competition from companies with deep pockets, health care systems have difficulty recruiting data scientists. Building and generalizing a machine learning model often requires hundreds to thousands of manual iterations by data scientists to select the following: (1) hyper-parameter values and complex algorithms that greatly affect model accuracy and (2) operators and periods for temporally aggregating clinical attributes (eg, whether a patient's weight kept rising in the past year). This process becomes infeasible with limited budgets.\n\n\nOBJECTIVE\nThis study's goal is to enable health care researchers to directly use clinical big data, make machine learning feasible with limited budgets and data scientist resources, and realize value from data.\n\n\nMETHODS\nThis study will allow us to achieve the following: (1) finish developing the new software, Automated Machine Learning (Auto-ML), to automate model selection for machine learning with clinical big data and validate Auto-ML on seven benchmark modeling problems of clinical importance; (2) apply Auto-ML and novel methodology to two new modeling problems crucial for care management allocation and pilot one model with care managers; and (3) perform simulations to estimate the impact of adopting Auto-ML on US patient outcomes.\n\n\nRESULTS\nWe are currently writing Auto-ML's design document. We intend to finish our study by around the year 2022.\n\n\nCONCLUSIONS\nAuto-ML will generalize to various clinical prediction/classification problems. With minimal help from data scientists, health care researchers can use Auto-ML to quickly build high-quality models. This will boost wider use of machine learning in health care and improve patient outcomes.",
"title": ""
},
{
"docid": "57c780448d8771a0d22c8ed147032a71",
"text": "“Social TV” is a term that broadly describes the online social interactions occurring between viewers while watching television. In this paper, we show that TV networks can derive value from social media content placed in shows because it leads to increased word of mouth via online posts, and it highly correlates with TV show related sales. In short, we show that TV event triggers change the online behavior of viewers. In this paper, we first show that using social media content on the televised American reality singing competition, The Voice, led to increased social media engagement during the TV broadcast. We then illustrate that social media buzz about a contestant after a performance is highly correlated with song sales from that contestant’s performance. We believe this to be the first study linking TV content to buzz and sales in real time.",
"title": ""
},
{
"docid": "1e493440a61578c8c6ca8fbe63f475d6",
"text": "3D object detection is an essential task in autonomous driving. Recent techniques excel with highly accurate detection rates, provided the 3D input data is obtained from precise but expensive LiDAR technology. Approaches based on cheaper monocular or stereo imagery data have, until now, resulted in drastically lower accuracies — a gap that is commonly attributed to poor image-based depth estimation. However, in this paper we argue that data representation (rather than its quality) accounts for the majority of the difference. Taking the inner workings of convolutional neural networks into consideration, we propose to convert imagebased depth maps to pseudo-LiDAR representations — essentially mimicking LiDAR signal. With this representation we can apply different existing LiDAR-based detection algorithms. On the popular KITTI benchmark, our approach achieves impressive improvements over the existing stateof-the-art in image-based performance — raising the detection accuracy of objects within 30m range from the previous state-of-the-art of 22% to an unprecedented 74%. At the time of submission our algorithm holds the highest entry on the KITTI 3D object detection leaderboard for stereo image based approaches.",
"title": ""
},
{
"docid": "db35a26248d43d5fbf5a0bad0fdd1463",
"text": "Place is an essential concept in human discourse. It is people's interaction and experience with their surroundings that identify place from non-place in space. This paper explores the use of spatial footprints as a record of human interaction with the environment. Specifically, we use geotagged photos collected in Flickr to provide a collective view of sense of place, in terms of significance and location. Spatial footprints associated with photographs can not only describe individual place locations and spatial extents but also the relationship between places, such as hierarchy. This type of information about place may be utilized to study the way people understand their landscape, or can be incorporated into existing gazetteers for geographic information retrieval and location-based services. Other sources of user-generated geographic information, such as Foursquare and Twitter, may also be harvested and aggregated to study place in a similar way.",
"title": ""
}
] |
scidocsrr
|
86ecb9e9a707d7aec99232f2d9d3aba7
|
Investigating factors influencing local government decision makers while adopting integration technologies (IntTech)
|
[
{
"docid": "82fa51c143159f2b85f9d2e5b610e30d",
"text": "Strategies are systematic and long-term approaches to problems. Federal, state, and local governments are investing in the development of strategies to further their e-government goals. These strategies are based on their knowledge of the field and the relevant resources available to them. Governments are communicating these strategies to practitioners through the use of practical guides. The guides provide direction to practitioners as they consider, make a case for, and implement IT initiatives. This article presents an analysis of a selected set of resources government practitioners use to guide their e-government efforts. A selected review of current literature on the challenges to information technology initiatives is used to create a framework for the analysis. A gap analysis examines the extent to which IT-related research is reflected in the practical guides. The resulting analysis is used to identify a set of commonalities across the practical guides and a set of recommendations for future development of practitioner guides and future research into e-government initiatives. D 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "d62c4280bbef1039a393e6949a164946",
"text": "Purpose – Achieving goals of better integrated and responsive government services requires moving away from stand alone applications toward more comprehensive, integrated architectures. As a result there is a mounting pressure to integrate disparate systems to support information exchange and cross-agency business processes. There are substantial barriers that governments must overcome to achieve these goals and to profit from enterprise application integration (EAI). Design/methodology/approach – In the research presented here we develop and test a methodology aimed at overcoming the barriers blocking adoption of EAI. This methodology is based on a discrete-event simulation of public sector structure, business processes and applications in combination with an EAI perspective. Findings – The testing suggests that our methodology helps to provide insight into the myriad of existing applications, and the implications of EAI. Moreover, it helps to identify novel options, gain stakeholder commitment, let them agree on the sharing of EAI costs, and finally it supports collaborative decision-making between public agencies. Practical implications – The approach is found to be useful for making the business case for EAI projects, and gaining stakeholder commitment prior to implementation. Originality/value – The joint addressing of the barriers of public sector reform including the transformation of the public sector structure, gaining of stakeholders’ commitment, understanding EAI technology and dependencies between cross-agency business processes, and a fair division of costs and benefits over stakeholders.",
"title": ""
}
] |
[
{
"docid": "77045e77d653bfa37dfbd1a80bb152da",
"text": "We propose a new technique for training deep neural networks (DNNs) as data-driven feature front-ends for large vocabulary continuous speech recognition (LVCSR) in low resource settings. To circumvent the lack of sufficient training data for acoustic modeling in these scenarios, we use transcribed multilingual data and semi-supervised training to build the proposed feature front-ends. In our experiments, the proposed features provide an absolute improvement of 16% in a low-resource LVCSR setting with only one hour of in-domain training data. While close to three-fourths of these gains come from DNN-based features, the remaining are from semi-supervised training.",
"title": ""
},
{
"docid": "b13a03598044db36ecf4634317071b34",
"text": "Space Religion Encryption Sport Science space god encryption player science satellite atheism device hall theory april exist technology defensive scientific sequence atheist protect team universe launch moral americans average experiment president existence chip career observation station marriage use league evidence radar system privacy play exist training parent industry bob god committee murder enforcement year mistake",
"title": ""
},
{
"docid": "8433df9d46df33f1389c270a8f48195d",
"text": "BACKGROUND\nFingertip injuries involve varying degree of fractures of the distal phalanx and nail bed or nail plate disruptions. The treatment modalities recommended for these injuries include fracture fixation with K-wire and meticulous repair of nail bed after nail removal and later repositioning of nail or stent substitute into the nail fold by various methods. This study was undertaken to evaluate the functional outcome of vertical figure-of-eight tension band suture for finger nail disruptions with fractures of distal phalanx.\n\n\nMATERIALS AND METHODS\nA series of 40 patients aged between 4 and 58 years, with 43 fingernail disruptions and fracture of distal phalanges, were treated with vertical figure-of-eight tension band sutures without formal fixation of fracture fragments and the results were reviewed. In this method, the injuries were treated by thoroughly cleaning the wound, reducing the fracture fragments, anatomical replacement of nail plate, and securing it by vertical figure-of-eight tension band suture.\n\n\nRESULTS\nAll patients were followed up for a minimum of 3 months. The clinical evaluation of the patients was based on radiological fracture union and painless pinch to determine fingertip stability. Every single fracture united and every fingertip was clinically stable at the time of final followup. We also evaluated our results based on visual analogue scale for pain and range of motion of distal interphalangeal joint. Two sutures had to be revised due to over tensioning and subsequent vascular compromise within minutes of repair; however, this did not affect the final outcome.\n\n\nCONCLUSION\nThis technique is simple, secure, and easily reproducible. It neither requires formal repair of injured nail bed structures nor fixation of distal phalangeal fracture and results in uncomplicated reformation of nail plate and uneventful healing of distal phalangeal fractures.",
"title": ""
},
{
"docid": "03c78195651c965219394117cfafcabc",
"text": "Cognitive radio technology, a revolutionary communication paradigm that can utilize the existing wireless spectrum resources more efficiently, has been receiving a growing attention in recent years. As network users need to adapt their operating parameters to the dynamic environment, who may pursue different goals, traditional spectrum sharing approaches based on a fully cooperative, static, and centralized network environment are no longer applicable. Instead, game theory has been recognized as an important tool in studying, modeling, and analyzing the cognitive interaction process. In this tutorial survey, we introduce the most fundamental concepts of game theory, and explain in detail how these concepts can be leveraged in designing spectrum sharing protocols, with an emphasis on state-of-the-art research contributions in cognitive radio networking. Research challenges and future directions in game theoretic modeling approaches are also outlined. This tutorial survey provides a comprehensive treatment of game theory with important applications in cognitive radio networks, and will aid the design of efficient, self-enforcing, and distributed spectrum sharing schemes in future wireless networks. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "94b061285a0ca52aa0e82adcca392416",
"text": "Stochastic convex optimization is a basic and well studied primitive in machine learning. It is well known that convex and Lipschitz functions can be minimized efficiently using Stochastic Gradient Descent (SGD). The Normalized Gradient Descent (NGD) algorithm, is an adaptation of Gradient Descent, which updates according to the direction of the gradients, rather than the gradients themselves. In this paper we analyze a stochastic version of NGD and prove its convergence to a global minimum for a wider class of functions: we require the functions to be quasi-convex and locally-Lipschitz. Quasi-convexity broadens the concept of unimodality to multidimensions and allows for certain types of saddle points, which are a known hurdle for first-order optimization methods such as gradient descent. Locally-Lipschitz functions are only required to be Lipschitz in a small region around the optimum. This assumption circumvents gradient explosion, which is another known hurdle for gradient descent variants. Interestingly, unlike the vanilla SGD algorithm, the stochastic normalized gradient descent algorithm provably requires a minimal minibatch size.",
"title": ""
},
{
"docid": "3b9adb452f628a3cf5153b80f1977bc4",
"text": "Small signal stability analysis is conducted considering grid connected doubly-fed induction generator (DFIG) type. The modeling of a grid connected DFIG system is first set up and the whole model is formulated by a set of differential algebraic equations (DAE). Then, the mathematical model of rotor-side converter is built with decoupled P-Q control techniques to implement stator active and reactive powers control. Based on the abovementioned researches, the small signal stability analysis is carried out to explore and compared the differences between the whole system with the decoupled P-Q controller or not by eigenvalues and participation factors. Finally, numerical results demonstrate the system are stable, especially some conclusions and comments of interest are made. DFIG model; decoupled P-Q control; DAE; small signal analysis;",
"title": ""
},
{
"docid": "ed4050c6934a5a26fc377fea3eefa3bc",
"text": "This paper presents the design of the permanent magnetic system for the wall climbing robot with permanent magnetic tracks. A proposed wall climbing robot with permanent magnetic adhesion mechanism for inspecting the oil tanks is briefly put forward, including the mechanical system architecture. The permanent magnetic adhesion mechanism and the tracked locomotion mechanism are employed in the robot system. By static and dynamic force analysis of the robot, design parameters about adhesion mechanism are derived. Two types of the structures of the permanent magnetic units are given in the paper. The analysis of those two types of structure is also detailed. Finally, two wall climbing robots equipped with those two different magnetic systems are discussed and the experiments are included in the paper.",
"title": ""
},
{
"docid": "97a458ead2bd94775c7d27a6a47ce8e6",
"text": "This article presents an approach to using cognitive models of narrative discourse comprehension to define an explicit computational model of a reader’s comprehension process during reading, predicting aspects of narrative focus and inferencing with precision. This computational model is employed in a narrative discourse generation system to select and sequence content from a partial plan representing story world facts, objects, and events, creating discourses that satisfy comprehension criteria. Cognitive theories of narrative discourse comprehension define explicit models of a reader’s mental state during reading. These cognitive models are created to test hypotheses and explain empirical results about reader comprehension, but do not often contain sufficient precision for implementation on a computer. Therefore, they have not previously been suitable for computational narrative generation. The results of three experiments are presented and discussed, exhibiting empirical support for the approach presented. This work makes a number of contributions that advance the state-of-the-art in narrative discourse generation: a formal model of narrative focus, a formal model of online inferencing in narrative, a method of selecting narrative discourse content to satisfy comprehension criteria, and both implementation and evaluation of these models. .................................................................................................................................................................................",
"title": ""
},
{
"docid": "45bd038dd94d388f945c041e7c04b725",
"text": "Entomophagy is widespread among nonhuman primates and is common among many human communities. However, the extent and patterns of entomophagy vary substantially both in humans and nonhuman primates. Here we synthesize the literature to examine why humans and other primates eat insects and what accounts for the variation in the extent to which they do so. Variation in the availability of insects is clearly important, but less understood is the role of nutrients in entomophagy. We apply a multidimensional analytical approach, the right-angled mixture triangle, to published data on the macronutrient compositions of insects to address this. Results showed that insects eaten by humans spanned a wide range of protein-to-fat ratios but were generally nutrient dense, whereas insects with high protein-to-fat ratios were eaten by nonhuman primates. Although suggestive, our survey exposes a need for additional, standardized, data.",
"title": ""
},
{
"docid": "ac7156831175817cc9c0e81d2f0bb980",
"text": "Social networking sites (SNS) have become a significant component of people’s daily lives and have revolutionized the ways that business is conducted, from product development and marketing to operation and human resource management. However, there have been few systematic studies that ask why people use such systems. To try to determine why, we proposed a model based on uses and gratifications theory. Hypotheses were tested using PLS on data collected from 148 SNS users. We found that user utilitarian (rational and goal-oriented) gratifications of immediate access and coordination, hedonic (pleasure-oriented) gratifications of affection and leisure, and website social presence were positive predictors of SNS usage. While prior research focused on the hedonic use of SNS, we explored the predictive value of utilitarian factors in SNS. Based on these findings, we suggest a need to focus on the SNS functionalities to provide users with both utilitarian and hedonic gratifications, and suggest incorporating appropriate website features to help users evoke a sense of human contact in the SNS context.",
"title": ""
},
{
"docid": "beca7993e709b58788a4513893b14413",
"text": "We present a micro-traffic simulation (named “DeepTraffic”) where the perception, control, and planning systems for one of the cars are all handled by a single neural network as part of a model-free, off-policy reinforcement learning process. The primary goal of DeepTraffic is to make the hands-on study of deep reinforcement learning accessible to thousands of students, educators, and researchers in order to inspire and fuel the exploration and evaluation of DQN variants and hyperparameter configurations through large-scale, open competition. This paper investigates the crowd-sourced hyperparameter tuning of the policy network that resulted from the first iteration of the DeepTraffic competition where thousands of participants actively searched through the hyperparameter space with the objective of their neural network submission to make it onto the top-10 leaderboard.",
"title": ""
},
{
"docid": "0bbc77e1269a49be659a777c390d408a",
"text": "With the evolution of Telco systems towards 5G, new requirements emerge for delivering services. Network services are expected to be designed to allow greater flexibility. In order to cope with the new users' requirements, Telcos should rethink their complex and monolithic network architectures into more agile architectures. Adoption of NFV as well as micro-services patterns are opportunities promising such an evolution. However, to gain in flexibility, it is crucial to satisfy structural requirements for the design of VNFs as services. We present in this paper an approach for designing VNF-asa-Service. With this approach, we define design requirements for the service architecture and the service logic of VNFs. As Telcos have adopted IMS as the de facto platform for service delivery in 3G and even 4G systems, it is interesting to study its evolution for 5G towards a microservices-based architecture with an optimal design. Therefore, we consider IMS as a case of study to illustrate the proposed approach. We present new functional entities for IMS-as-a-Service through a functional decomposition of legacy network functions. We have developed and implemented IMS-as-a-Service with respect to the proposed requirements. We consider a service scenario where we focus on authentication and authorization procedures. We evaluate the involved microservices comparing to the state-of-the-art. Finally, we discuss our results and highlight the advantages of our approach.",
"title": ""
},
{
"docid": "f0ae0c563ce34478dae8a2315624d6d2",
"text": "Nanocrystalline cellulose (NCC) is an emerging renewable nanomaterial that holds promise in many different applications, such as in personal care, chemicals, foods, pharmaceuticals, etc. By appropriate modification of NCC, various functional nanomaterials with outstanding properties, or significantly improved physical, chemical, biological, as well as electronic properties can be developed. The nanoparticles are stabilised in aqueous suspension by negative charges on the surface, which are produced during the acid hydrolysis process. NCC suspensions can form a chiral nematic ordered phase beyond a critical concentration, i.e. NCC suspensions transform from an isotropic to an anisotropic chiral nematic liquid crystalline phase. Due to its nanoscale dimension and intrinsic physicochemical properties, NCC is a promising renewable biomaterial that can be used as a reinforcing component in high performance nanocomposites. Many new nanocomposite materials with attractive properties were obtained by the physical incorporation of NCC into a natural or synthetic polymeric matrix. Simple chemical modification on NCC surface can improve its dispersability in different solvents and expand its utilisation in nano-related applications, such as drug delivery, protein immobilisation, and inorganic reaction template. This review paper provides an overview on this emerging nanomaterial, focusing on the surface modification, properties and applications of NCC.",
"title": ""
},
{
"docid": "6f1da2d00f63cae036db04fd272b8ef2",
"text": "Female genital cosmetic surgery is surgery performed on a woman within a normal range of variation of human anatomy. The issues are heightened by a lack of long-term and substantive evidence-based literature, conflict of interest from personal financial gain through performing these procedures, and confusion around macroethical and microethical domains. It is a source of conflict and controversy globally because the benefit and harm of offering these procedures raise concerns about harmful cultural views, education, and social vulnerability of women with regard to both ethics and human rights. The rights issues of who is defining normal female anatomy and function, as well as the economic vulnerability of women globally, bequeath the profession a greater responsibility to ensure that there is adequate health and general education-not just among patients but broadly in society-that there is neither limitation nor interference in the decision being made, and that there are no psychological disorders that could be influencing such choices.",
"title": ""
},
{
"docid": "566412870c83e5e44fabc50487b9d994",
"text": "The influence of technology in the field of gambling innovation continues to grow at a rapid pace. After a brief overview of gambling technologies and deregulation issues, this review examines the impact of technology on gambling by highlighting salient factors in the rise of Internet gambling (i.e., accessibility, affordability, anonymity, convenience, escape immersion/dissociation, disinhibition, event frequency, asociability, interactivity, and simulation). The paper also examines other factors in relation to Internet gambling including the relationship between Internet addiction and Internet gambling addiction. The paper ends by overviewing some of the social issues surrounding Internet gambling (i.e., protection of the vulnerable, Internet gambling in the workplace, electronic cash, and unscrupulous operators). Recommendations for Internet gambling operators are also provided.",
"title": ""
},
{
"docid": "7457c09c1068ba1397f468879bc3b0d1",
"text": "Genome editing has potential for the targeted correction of germline mutations. Here we describe the correction of the heterozygous MYBPC3 mutation in human preimplantation embryos with precise CRISPR–Cas9-based targeting accuracy and high homology-directed repair efficiency by activating an endogenous, germline-specific DNA repair response. Induced double-strand breaks (DSBs) at the mutant paternal allele were predominantly repaired using the homologous wild-type maternal gene instead of a synthetic DNA template. By modulating the cell cycle stage at which the DSB was induced, we were able to avoid mosaicism in cleaving embryos and achieve a high yield of homozygous embryos carrying the wild-type MYBPC3 gene without evidence of off-target mutations. The efficiency, accuracy and safety of the approach presented suggest that it has potential to be used for the correction of heritable mutations in human embryos by complementing preimplantation genetic diagnosis. However, much remains to be considered before clinical applications, including the reproducibility of the technique with other heterozygous mutations.",
"title": ""
},
{
"docid": "bc90b1e4d456ca75b38105cc90d7d51d",
"text": "Choosing a cloud storage system and specific operations for reading and writing data requires developers to make decisions that trade off consistency for availability and performance. Applications may be locked into a choice that is not ideal for all clients and changing conditions. Pileus is a replicated key-value store that allows applications to declare their consistency and latency priorities via consistency-based service level agreements (SLAs). It dynamically selects which servers to access in order to deliver the best service given the current configuration and system conditions. In application-specific SLAs, developers can request both strong and eventual consistency as well as intermediate guarantees such as read-my-writes. Evaluations running on a worldwide test bed with geo-replicated data show that the system adapts to varying client-server latencies to provide service that matches or exceeds the best static consistency choice and server selection scheme.",
"title": ""
},
{
"docid": "8be957572c846ddda107d8343094401b",
"text": "Corporate accounting statements provide financial markets, and tax services with valuable data on the economic health of companies, although financial indices are only focused on a very limited part of the activity within the company. Useful tools in the field of processing extended financial and accounting data are the methods of Artificial Intelligence, aiming the efficient delivery of financial information to tax services, investors, and financial markets where lucrative portfolios can be created. Key-words: Financial Indices, Artificial Intelligence, Data Mining, Neural Networks, Genetic Algorithms",
"title": ""
},
{
"docid": "282a6b06fb018fb7e2ec223f74345944",
"text": "The DIPPER architecture is a collection of software agents for prototyping spoken dialogue systems. Implemented on top of the Open Agent Architecture (OAA), it comprises agents for speech input and output, dialogue management, and further supporting agents. We define a formal syntax and semantics for the DIPPER information state update language. The language is independent of particular programming languages, and incorporates procedural attachments for access to external resources using OAA.",
"title": ""
},
{
"docid": "e18b4c013b36e198349185be70396ea0",
"text": "In 2004 and 2005, Coca-Cola Enterprises (CCE)—the world’s largest bottler and distributor of Coca-Cola products—implemented ORTEC’s vehicle-routing software. Today, over 300 CCE dispatchers use this software daily to plan the routes of approximately 10,000 trucks. In addition to handling nonstandard constraints, the implementation is notable for its progressive transition from the prior business practice. CCE has realized an annual cost saving of $45 million and major improvements in customer service. This approach has been so successful that Coca-Cola has extended it beyond CCE to other Coca-Cola bottling companies and beer distributors.",
"title": ""
}
] |
scidocsrr
|
ac7fca8e0bbb8011afb0cc597c60243a
|
From ratings to trust: an empirical study of implicit trust in recommender systems
|
[
{
"docid": "107aff0162fb0b6c1f90df1bdf7174b7",
"text": "Recommender Systems based on Collaborative Filtering suggest to users items they might like. However due to data sparsity of the input ratings matrix, the step of finding similar users often fails. We propose to replace this step with the use of a trust metric, an algorithm able to propagate trust over the trust network and to estimate a trust weight that can be used in place of the similarity weight. An empirical evaluation on Epinions.com dataset shows that Recommender Systems that make use of trust information are the most effective in term of accuracy while preserving a good coverage. This is especially evident on users who provided few ratings.",
"title": ""
},
{
"docid": "790dd7f42d3dc7d980e0c1a404274952",
"text": "Recommender systems have proven to be an important response to the information overload problem, by providing users with more proactive and personalized information services. And collaborative filtering techniques have proven to be an vital component of many such recommender systems as they facilitate the generation of high-quality recom-mendations by leveraging the preferences of communities of similar users. In this paper we suggest that the traditional emphasis on user similarity may be overstated. We argue that additional factors have an important role to play in guiding recommendation. Specifically we propose that the trustworthiness of users must be an important consideration. We present two computational models of trust and show how they can be readily incorporated into standard collaborative filtering frameworks in a variety of ways. We also show how these trust models can lead to improved predictive accuracy during recommendation.",
"title": ""
},
{
"docid": "745bbe075634f40e6c66716a6b877619",
"text": "Collaborative filtering, a widely-used user-centric recommendation technique, predicts an item’s rating by aggregating its ratings from similar users. User similarity is usually calculated by cosine similarity or Pearson correlation coefficient. However, both of them consider only the direction of rating vectors, and suffer from a range of drawbacks. To solve these issues, we propose a novel Bayesian similarity measure based on the Dirichlet distribution, taking into consideration both the direction and length of rating vectors. Further, our principled method reduces correlation due to chance. Experimental results on six real-world data sets show that our method achieves superior accuracy.",
"title": ""
}
] |
[
{
"docid": "4f31b16c53632e2d1ae874a692e5b64e",
"text": "Previously published algorithms for finding the longest common subsequence of two sequences of length n have had a best-case running time of O(n2). An algorithm for this problem is presented which has a running time of O((r + n) log n), where r is the total number of ordered pairs of positions at which the two sequences match. Thus in the worst case the algorithm has a running time of O(n2 log n). However, for those applications where most positions of one sequence match relatively few positions in the other sequence, a running time of O(n log n) can be expected.",
"title": ""
},
{
"docid": "1c02a92b4fbabddcefccd4c347186c60",
"text": "Meeting future goals for aircraft and air traffic system performance will require new airframes with more highly integrated propulsion. Previous studies have evaluated hybrid wing body (HWB) configurations with various numbers of engines and with increasing degrees of propulsion-airframe integration. A recently published configuration with 12 small engines partially embedded in a HWB aircraft, reviewed herein, serves as the airframe baseline for the new concept aircraft that is the subject of this paper. To achieve high cruise efficiency, a high lift-to-drag ratio HWB was adopted as the baseline airframe along with boundary layer ingestion inlets and distributed thrust nozzles to fill in the wakes generated by the vehicle. The distributed powered-lift propulsion concept for the baseline vehicle used a simple, high-lift-capable internally blown flap or jet flap system with a number of small high bypass ratio turbofan engines in the airframe. In that concept, the engine flow path from the inlet to the nozzle is direct and does not involve complicated internal ducts through the airframe to redistribute the engine flow. In addition, partially embedded engines, distributed along the upper surface of the HWB airframe, provide noise reduction through airframe shielding and promote jet flow mixing with the ambient airflow. To improve performance and to reduce noise and environmental impact even further, a drastic change in the propulsion system is proposed in this paper. The new concept adopts the previous baseline cruise-efficient short take-off and landing (CESTOL) airframe but employs a number of superconducting motors to drive the distributed fans rather than using many small conventional engines. The power to drive these electric fans is generated by two remotely located gas-turbine-driven superconducting generators. This arrangement allows many small partially embedded fans while retaining the superior efficiency of large core engines, which are physically separated but connected through electric power lines to the fans. This paper presents a brief description of the earlier CESTOL vehicle concept and the newly proposed electrically driven fan concept vehicle, using the previous CESTOL vehicle as a baseline.",
"title": ""
},
{
"docid": "df331d60ab6560808e28e3813766b67b",
"text": "Analyzing large graphs provides valuable insights for social networking and web companies in content ranking and recommendations. While numerous graph processing systems have been developed and evaluated on available benchmark graphs of up to 6.6B edges, they often face significant difficulties in scaling to much larger graphs. Industry graphs can be two orders of magnitude larger hundreds of billions or up to one trillion edges. In addition to scalability challenges, real world applications often require much more complex graph processing workflows than previously evaluated. In this paper, we describe the usability, performance, and scalability improvements we made to Apache Giraph, an open-source graph processing system, in order to use it on Facebook-scale graphs of up to one trillion edges. We also describe several key extensions to the original Pregel model that make it possible to develop a broader range of production graph applications and workflows as well as improve code reuse. Finally, we report on real-world operations as well as performance characteristics of several large-scale production applications.",
"title": ""
},
{
"docid": "4ace08e06cd27fdfb85708cc95791952",
"text": "In this research communication on commutative algebra it was proposed to deal with Grobner Bases and its applications in signals and systems domain.This is one of the pioneering communications in dealing with Cryo-EM Image Processing application using multi-disciplinary concepts involving thermodynamics and electromagnetics based on first principles approach. keywords: Commutative Algebra/HOL/Scala/JikesRVM/Cryo-EM Images/CoCoALib/JAS Introduction & Inspiration : Cryo-Electron Microscopy (Cryo-EM) is an expanding structural biology technique that has recently undergone a quantum leap progression in its applicability to the study of challenging nano-bio systems,because crystallization is not required,only small amounts of sample are needed, and because images can be classified using a computer, the technique has the promising potential to deal with compositional as well as conformational mixtures.Cryo-EM can be used to investigate the complete and fully functional macromolecular complexes in different functional states, providing a richness of nano-bio systems insight. In this short communication,pointing to some of the principles behind the Cryo-EM methodology of single particle analysis via references and discussing Grobner bases application to challenging systems of paramount nano-bio importance is interesting. Special emphasis is on new methodological developments that are leading to an explosion of new studies, many of which are reaching resolutions that could only be dreamed of just few years ago.[1-9][Figures I-IV] There are two main challenges facing researchers in Cryo-EM Image Processing : “(1) The first challenge is that the projection images are extremely noisy (due to the low electron dose that can interact with each molecule before it is destroyed). (2) The second is that the orientations of the molecules that produced every image is unknown (unlike crystallography where the molecules are packed in a form of a crystal and therefore share the same known orientation).Overcoming these two challenges are very much principal in the science of CryoEM. “ according to Prof. Hadani. In the context of above mentioned challenges we intend to investigate and suggest Grobner bases to process Cryo-EM Images using Thermodynamics and Electromagnetics principles.The inspiration to write this short communication was derived mainly from the works of Prof.Buchberger and Dr.Rolf Landauer. source : The physical nature of information Rolf Landauer IBM T.J. Watson Research Center, P.O. Box 218. Yorktown Heights, NY 10598, USA . source : Gröbner Bases:A Short Introduction for Systems Theorists -Bruno Buchberger Research Institute for Symbolic Computation University of Linz,A4232 Schloss,Hagenberg,Austria. Additional interesting facts are observed from an article by Jon Cohen : “Structural Biology – Is HighTech View of HIV Too Good To Be True ?”. (http://davidcrowe.ca/SciHealthEnv/papers/9599-IsHighTechViewOfHIVTooGoodToBeTrue.pdf) Researchers are only interested in finding better software tools to refine the cryo-em image processing tasks on hand using all the mathematical tools at their disposal.Commutative Algebra is one such promising tool.Hence the justification for using Grobner Bases. Informatics Framework Design,Implementation & Analysis : Figure I. 
Mathematical Algorithm Implementation and Software Architecture -Overall Idea presented in the paper.Self Explanatory Graphical Algorithm Please Note : “Understanding JikesRVM in the Context of Cryo-EM/TEM/SEM Imaging Algorithms and Applications – A General Informatics Introduction from a Software Architecture View Point” by Nirmal & Gagik 2016 could be useful. Figure II. Mathematical Algorithm with various Grobner Bases Mathematical Tools/Software.Self Explanatory Graphical Algorithm Figure III.Scala and Java based Software Architecture Flow Self Explanatory Graphical Algorithm Figure IV. Mathematical Algorithm involving EM Field Theory & Thermodynamics Self Explanatory Graphical Algorithm",
"title": ""
},
{
"docid": "44d468d53b66f719e569ea51bb94f6cb",
"text": "The paper gives an overview on the developments at the German Aerospace Center DLR towards anthropomorphic robots which not only tr y to approach the force and velocity performance of humans, but also have simi lar safety and robustness features based on a compliant behaviour. We achieve thi s compliance either by joint torque sensing and impedance control, or, in our newes t systems, by compliant mechanisms (so called VIA variable impedance actuators), whose intrinsic compliance can be adjusted by an additional actuator. Both appr o ches required highly integrated mechatronic design and advanced, nonlinear con trol a d planning strategies, which are presented in this paper.",
"title": ""
},
{
"docid": "a0b2219d315b9ee35af9e412a174875b",
"text": "VP-ellipsis generally requires a syntactically matching antecedent. However, many documented examples exist where the antecedent is not appropriate. Kehler (2000, 2002) proposed an elegant theory which predicts a syntactic antecedent for an elided VP is required only for a certain discourse coherence relation (resemblance) not for cause-effect relations. Most of the data Kehler used to motivate his theory come from corpus studies and thus do not consist of true minimal pairs. We report five experiments testing predictions of the coherence theory, using standard minimal pair materials. The results raise questions about the empirical basis for coherence theory because parallelism is preferred for all coherence relations, not just resemblance relations. Further, strict identity readings, which should not be available when a syntactic antecedent is required, are influenced by parallelism per se, holding the discourse coherence relation constant. This draws into question the causal role of coherence relations in processing VP ellipsis.",
"title": ""
},
{
"docid": "33bd561e2d8e1799d5d5156cbfe3f2e5",
"text": "OBJECTIVE\nTo assess the effects of Balint groups on empathy measured by the Consultation And Relational Empathy Measure (CARE) scale rated by standardized patients during objective structured clinical examination and self-rated Jefferson's School Empathy Scale - Medical Student (JSPE-MS©) among fourth-year medical students.\n\n\nMETHODS\nA two-site randomized controlled trial were planned, from October 2015 to December 2015 at Paris Diderot and Paris Descartes University, France. Eligible students were fourth-year students who gave their consent to participate. Participants were allocated in equal proportion to the intervention group or to the control group. Participants in the intervention group received a training of 7 sessions of 1.5-hour Balint groups, over 3months. The main outcomes were CARE and the JSPE-MS© scores at follow-up.\n\n\nRESULTS\nData from 299 out of 352 randomized participants were analyzed: 155 in the intervention group and 144 in the control group, with no differences in baseline measures. There was no significant difference in CARE score at follow-up between the two groups (P=0.49). The intervention group displayed significantly higher JSPE-MS© score at follow-up than the control group [Mean (SD): 111.9 (10.6) versus 107.7 (12.7), P=0.002]. The JSPE-MS© score increased from baseline to follow-up in the intervention group, whereas it decreased in the control group [1.5 (9.1) versus -1.8 (10.8), P=0.006].\n\n\nCONCLUSIONS\nBalint groups may contribute to promote clinical empathy among medical students.\n\n\nTRIAL REGISTRATION\nNCT02681380.",
"title": ""
},
{
"docid": "de6e139d0b5dc295769b5ddb9abcc4c6",
"text": "1 Abd El-Moniem M. Bayoumi is a graduate TA at the Department of Computer Engineering, Cairo University. He received his BS degree in from Cairo University in 2009. He is currently an RA, working for a research project on developing an innovative revenue management system for the hotel business. He was awarded the IEEE CIS Egypt Chapter’s special award for his graduation project in 2009. Bayoumi is interested to research in machine learning and business analytics; and he is currently working on his MS on stock market prediction.",
"title": ""
},
{
"docid": "b84258e07801a549ac276f76dd4de366",
"text": "The predictions of growing consumer power in the digital age that predated the turn of the century were fueled by the rise of the Internet, then reignited by social media. This article explores the intersection of consumer behavior and digital media by clearly defining consumer power and empowerment in Internet and social media contexts and by presenting a theoretical framework of four distinct consumer power sources: demand-, information-, network-, and crowd-based power. Furthermore, we highlight technology's evolutionary role in the development of these power sources and discuss the nature of shifts in power from marketers to consumers in terms of each source. The framework organizes prior marketing literature on Internet-enabled consumer empowerment and highlights gaps in current research. Specific research questions are elaborated for each source of power outlining the agenda for future research areas. © 2013 Direct Marketing Educational Foundation, Inc. Published by Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "dd36befad0dc53413940acbb866f324e",
"text": "Lignocellulosic materials can be explored as one of the sustainable substrates for bioethanol production through microbial intervention as they are abundant, cheap and renewable. But at the same time, their recalcitrant structure makes the conversion process more cumbersome owing to their chemical composition which adversely affects the efficiency of bioethanol production. Therefore, the technical approaches to overcome recalcitrance of biomass feedstock has been developed to remove the barriers with the help of pretreatment methods which make cellulose more accessible to the hydrolytic enzymes, secreted by the microorganisms, for its conversion to glucose. Pretreatment of lignocellulosic biomass in cost effective manner is a major challenge to bioethanol technology research and development. Hence, in this review, we have discussed various aspects of three commonly used pretreatment methods, viz., steam explosion, acid and alkaline, applied on various lignocellulosic biomasses to augment their digestibility alongwith the challenges associated with their processing.",
"title": ""
},
{
"docid": "ce167e13e5f129059f59c8e54b994fd4",
"text": "Critical research has emerged as a potentially important stream in information systems research, yet the nature and methods of critical research are still in need of clarification. While criteria or principles for evaluating positivist and interpretive research have been widely discussed, criteria or principles for evaluating critical social research are lacking. Therefore, the purpose of this paper is to propose a set of principles for the conduct of critical research. This paper has been accepted for publication in MIS Quarterly and follows on from an earlier piece that suggested a set of principles for interpretive research (Klein and Myers, 1999). The co-author of this paper is Heinz Klein.",
"title": ""
},
{
"docid": "bed344f60b3fbc91330e74100d4a50e0",
"text": "Abstract Text mining has become an exciting research field as it tries to discover valuable information from unstructured texts. The unstructured texts which contain vast amount of information cannot simply be used for further processing by computers. Therefore, exact processing methods, algorithms and techniques are vital in order to extract this valuable information which is completed by using text mining. In this paper, we have discussed general idea of text mining and comparison of its techniques. In addition, we briefly discuss a number of text mining applications which are used presently and in future.Text mining has become an exciting research field as it tries to discover valuable information from unstructured texts. The unstructured texts which contain vast amount of information cannot simply be used for further processing by computers. Therefore, exact processing methods, algorithms and techniques are vital in order to extract this valuable information which is completed by using text mining. In this paper, we have discussed general idea of text mining and comparison of its techniques. In addition, we briefly discuss a number of text mining applications which are used presently and in future.",
"title": ""
},
{
"docid": "8105026d2c04002f9df3fe74467b6636",
"text": "The Intel Software Guard Extensions (SGX) technology, recently introduced in the new generations of x86 processors, allows the execution of applications in a fully protected environment (i.e., within enclaves). Because it is a recent technology, machines that rely on this technology are still a minority. In order to evaluate the SGX, an emulator of this technology (called OpenSGX) implements and replicates the main functionalities and structures used in SGX. The focus is to evaluate the resulting overhead from running an application within an environment with emulated SGX. For the evaluation, benchmark applications from the MiBench platform were employed. As performance metrics, we gathered the total number of instructions and the total number of CPU cycles for the execution of each application with and without OpenSGX.",
"title": ""
},
{
"docid": "c582e3c1f3896e5f86b0d322184582fd",
"text": "The interest for data mining techniques has increased tremendously during the past decades, and numerous classification techniques have been applied in a wide range of business applications. Hence, the need for adequate performance measures has become more important than ever. In this paper, a cost-benefit analysis framework is formalized in order to define performance measures which are aligned with the main objectives of the end users, i.e., profit maximization. A new performance measure is defined, the expected maximum profit criterion. This general framework is then applied to the customer churn problem with its particular cost-benefit structure. The advantage of this approach is that it assists companies with selecting the classifier which maximizes the profit. Moreover, it aids with the practical implementation in the sense that it provides guidance about the fraction of the customer base to be included in the retention campaign.",
"title": ""
},
{
"docid": "d5a9d2a212deee5057a0289f72b51d9b",
"text": "Compared to supervised feature selection, unsupervised feature selection tends to be more challenging due to the lack of guidance from class labels. Along with the increasing variety of data sources, many datasets are also equipped with certain side information of heterogeneous structure. Such side information can be critical for feature selection when class labels are unavailable. In this paper, we propose a new feature selection method, SideFS, to exploit such rich side information. We model the complex side information as a heterogeneous network and derive instance correlations to guide subsequent feature selection. Representations are learned from the side information network and the feature selection is performed in a unified framework. Experimental results show that the proposed method can effectively enhance the quality of selected features by incorporating heterogeneous side information.",
"title": ""
},
{
"docid": "6defa20be9dfa250a4472910259497ed",
"text": "Lightweight cryptography has been one of the “hot topics” in symmetric cryptography in the recent years. A huge number of lightweight algorithms have been published, standardized and/or used in commercial products. In this paper, we discuss the different implementation constraints that a “lightweight” algorithm is usually designed to satisfy. We also present an extensive survey of all lightweight symmetric primitives we are aware of. It covers designs from the academic community, from government agencies and proprietary algorithms which were reverse-engineered or leaked. Relevant national (nist...) and international (iso/iec...) standards are listed. We then discuss some trends we identified in the design of lightweight algorithms, namely the designers’ preference for arx-based and bitsliced-S-Box-based designs and simple key schedules. Finally, we argue that lightweight cryptography is too large a field and that it should be split into two related but distinct areas: ultra-lightweight and IoT cryptography. The former deals only with the smallest of devices for which a lower security level may be justified by the very harsh design constraints. The latter corresponds to low-power embedded processors for which the Aes and modern hash function are costly but which have to provide a high level security due to their greater connectivity.",
"title": ""
},
{
"docid": "8ab92b0433199ab915b5cf4309660395",
"text": "Within the large body of research in complex network analysis, an important topic is the temporal evolution of networks. Existing approaches aim at analyzing the evolution on the global and the local scale, extracting properties of either the entire network or local patterns. In this paper, we focus instead on detecting clusters of temporal snapshots of a network, to be interpreted as eras of evolution. To this aim, we introduce a novel hierarchical clustering methodology, based on a dissimilarity measure (derived from the Jaccard coefficient) between two temporal snapshots of the network. We devise a framework to discover and browse the eras, either in top-down or a bottom-up fashion, supporting the exploration of the evolution at any level of temporal resolution. We show how our approach applies to real networks, by detecting eras in an evolving co-authorship graph extracted from a bibliographic dataset; we illustrate how the discovered temporal clustering highlights the crucial moments when the network had profound changes in its structure. Our approach is finally boosted by introducing a meaningful labeling of the obtained clusters, such as the characterizing topics of each discovered era, thus adding a semantic dimension to our analysis.",
"title": ""
},
{
"docid": "41f4b0c55392ed3a2b59e4bbaec7566f",
"text": "Lithium-ion (Li-ion) batteries are ubiquitous sources of energy for portable electronic devices. Compared to alternative battery technologies, Li-ion batteries provide one of the best energy-to-weight ratios, exhibit no memory effect, and have low self-discharge when not in use. These beneficial properties, as well as decreasing costs, have established Li-ion batteries as a leading candidate for the next generation of automotive and aerospace applications. In the automotive sector, increasing demand for hybrid electric vehicles (HEVs), plug-in HEVs (PHEVs), and EVs has pushed manufacturers to the limits of contemporary automotive battery technology. This limitation is gradually forcing consideration of alternative battery technologies, such as Li-ion batteries, as a replacement for existing leadacid and nickel-metal-hydride batteries. Unfortunately, this replacement is a challenging task since automotive applications demand large amounts of energy and power and must operate safely, reliably, and durably at these scales. The article presents a detailed description and model of a Li-ion battery. It begins the section \"Intercalation-Based Batteries\" by providing an intuitive explanation of the fundamentals behind storing energy in a Li-ion battery. In the sections \"Modeling Approach\" and \"Li-Ion Battery Model,\" it present equations that describe a Li-ion cell's dynamic behavior. This modeling is based on using electrochemical principles to develop a physics-based model in contrast to equivalent circuit models. A goal of this article is to present the electrochemical model from a controls perspective.",
"title": ""
},
{
"docid": "d411b5b732f9d7eec4fc065bc410ae1b",
"text": "What do you do to start reading robot hands and the mechanics of manipulation? Searching the book that you love to read first or find an interesting book that will make you want to read? Everybody has difference with their reason of reading a book. Actuary, reading habit must be from earlier. Many people may be love to read, but not a book. It's not fault. Someone will be bored to open the thick book with small words to read. In more, this is the real condition. So do happen probably with this robot hands and the mechanics of manipulation.",
"title": ""
},
{
"docid": "9a28137d2acc2030205b4324dc21977a",
"text": "Corona discharge generated by various electrode arrangements is commonly employed for several electrostatic applications, such as charging nonwoven fabrics for air filters and insulating granules in electrostatic separators. The aim of this paper is to analyze the effects of the presence of a grounded metallic shield in the proximity of a high-voltage corona electrode facing a grounded plate electrode. The metallic shield was found to increase the current intensity and decrease the inception voltage of the corona discharge generated by this electrode arrangement, both in the absence and in the presence of a layer of insulating particles at the surface of the plate electrode. With the shield, the current density measured at the surface of the collecting electrode is higher and distributed on a larger area. As a consequence, the charge acquired by millimeter-sized HDPE particles forming a monolayer at the surface of the grounded plate electrode is twice as high as in the absence of the shield. These experiments are discussed in relation with the results of the numerical analysis of the electric field generated by the wire-plate configuration with and without shield.",
"title": ""
}
] |
scidocsrr
|
c702cfa76f84a4fa5c9a3564cf856b72
|
The one comparing narrative social network extraction techniques
|
[
{
"docid": "bd3b9d9e8a1dc39f384b073765175de6",
"text": "We generalize the stochastic block model to the important case in which edges are annotated with weights drawn from an exponential family distribution. This generalization introduces several technical difficulties for model estimation, which we solve using a Bayesian approach. We introduce a variational algorithm that efficiently approximates the model’s posterior distribution for dense graphs. In specific numerical experiments on edge-weighted networks, this weighted stochastic block model outperforms the common approach of first applying a single threshold to all weights and then applying the classic stochastic block model, which can obscure latent block structure in networks. This model will enable the recovery of latent structure in a broader range of network data than was previously possible.",
"title": ""
},
{
"docid": "a29ee41e8f46d1feebeb67886b657f70",
"text": "Feeling emotion is a critical characteristic to distinguish people from machines. Among all the multi-modal resources for emotion detection, textual datasets are those containing the least additional information in addition to semantics, and hence are adopted widely for testing the developed systems. However, most of the textual emotional datasets consist of emotion labels of only individual words, sentences or documents, which makes it challenging to discuss the contextual flow of emotions. In this paper, we introduce EmotionLines, the first dataset with emotions labeling on all utterances in each dialogue only based on their textual content. Dialogues in EmotionLines are collected from Friends TV scripts and private Facebook messenger dialogues. Then one of seven emotions, six Ekman’s basic emotions plus the neutral emotion, is labeled on each utterance by 5 Amazon MTurkers. A total of 29,245 utterances from 2,000 dialogues are labeled in EmotionLines. We also provide several strong baselines for emotion detection models on EmotionLines in this paper.",
"title": ""
}
] |
[
{
"docid": "7456842efeebb480c21974f78aea2a9f",
"text": "Connectionist networks that have learned one task can be reused on related tasks in a process that is called \"transfer\". This paper surveys recent work on transfer. A number of distinctions between kinds of transfer are identified, and future directions for research are explored. The study of transfer has a long history in cognitive science. Discoveries about transfer in human cognition can inform applied efforts. Advances in applications can also inform cognitive studies.",
"title": ""
},
{
"docid": "85e227c86077c728ae7bdbd78f781186",
"text": "What is the state of the neuroscience of language – and cognitive neuroscience more broadly – in light of the linguistic research, the arguments, and the theories advanced in the context of the program developed over the past 60 years by Noam Chomsky? There are, presumably, three possible outcomes: neuroscience of language is better off, worse off, or untouched by this intellectual tradition. In some sense, all three outcomes are true. The field has made remarkable progress, in no small part because the questions were so carefully and provocatively defined by the generative research program. But insights into neuroscience and language have also been stymied because of many parochial battles that have led to little light beyond rhetorical fireworks. Finally, a disturbing amount of neuroscience research has progressed as if the significant advances beginning in the 1950s and 1960s had not been made. This work remains puzzling because it builds on ideas known to be dodgy or outright false. In sum, when it comes to the neurobiology of language, the past sixty years have been fabulous, terrible, and puzzling. Chomsky has not helped matters by being so relentlessly undidactic in his exposition of ideas germane to the neurobiological enterprise. The present moment is a good one to assess the current state, because there are energetic thrusts of research that pursue an overtly anti-Chomskyan stance. I have in mind here current research that focuses on big (brain) data, relying on no more than the principle of association, often with implicit anti-mentalist sentiments, typically skeptical of the tenets of the computational theory of mind, associated with relentless enthusiasm for embodied cognition, the ubiquitous role of context, and so on. A large proportion of current research on the neuroscience of language has embraced these ideas, and it is fair to ask why – and whether – this approach is more likely to yield substantive progress. It is also fair to say that the traditional four (and now five) leading questions that have always formed the basis for the generative research program as",
"title": ""
},
{
"docid": "2f83b2ef8f71c56069304b0962074edc",
"text": "Abstract: Printed antennas are becoming one of the most popular designs in personal wireless communications systems. In this paper, the design of a novel tapered meander line antenna is presented. The design analysis and characterization of the antenna is performed using the finite difference time domain technique and experimental verifications are performed to ensure the effectiveness of the numerical model. The new design features an operating frequency of 2.55 GHz with a 230 MHz bandwidth, which supports future generations of mobile communication systems.",
"title": ""
},
{
"docid": "1000855a500abc1f8ef93d286208b600",
"text": "Nowadays, the most widely used variable speed machine for wind turbine above 1MW is the doubly fed induction generator (DFIG). As the wind power penetration continues to increase, wind turbines are required to provide Low Voltage Ride-Through (LVRT) capability. Crowbars are commonly used to protect the power converters during voltage dips. Its main drawback is that the DFIG absorbs reactive power from the grid during grid faults. This paper proposes an improved control strategy for the crowbar protection to reduce its operation time. And a simple demagnetization method is adopted to decrease the oscillations of the transient current. Moreover, reactive power can be provided to assist the recovery of the grid voltage. Simulation results show the effectiveness of the proposed control schemes.",
"title": ""
},
{
"docid": "9423718cce01b45c688066f322b2c2aa",
"text": "Currently there are many techniques based on information technology and communication aimed at assessing the performance of students. Data mining applied in the educational field (educational data mining) is one of the most popular techniques that are used to provide feedback with regard to the teaching-learning process. In recent years there have been a large number of open source applications in the area of educational data mining. These tools have facilitated the implementation of complex algorithms for identifying hidden patterns of information in academic databases. The main objective of this paper is to compare the technical features of three open source tools (RapidMiner, Knime and Weka) as used in educational data mining. These features have been compared in a practical case study on the academic records of three engineering programs in an Ecuadorian university. This comparison has allowed us to determine which tool is most effective in terms of predicting student performance.",
"title": ""
},
{
"docid": "be3fa2fbaaa362aace36d112ff09f94d",
"text": "One of the key objectives in accident data analysis to identify the main factors associated with a road and traffic accident. However, heterogeneous nature of road accident data makes the analysis task difficult. Data segmentation has been used widely to overcome this heterogeneity of the accident data. In this paper, we proposed a framework that used K-modes clustering technique as a preliminary task for segmentation of 11,574 road accidents on road network of Dehradun (India) between 2009 and 2014 (both included). Next, association rule mining are used to identify the various circumstances that are associated with the occurrence of an accident for both the entire data set (EDS) and the clusters identified by K-modes clustering algorithm. The findings of cluster based analysis and entire data set analysis are then compared. The results reveal that the combination of k mode clustering and association rule mining is very inspiring as it produces important information that would remain hidden if no segmentation has been performed prior to generate association rules. Further a trend analysis have also been performed for each clusters and EDS accidents which finds different trends in different cluster whereas a positive trend is shown by EDS. Trend analysis also shows that prior segmentation of accident data is very important before analysis.",
"title": ""
},
{
"docid": "d639525be41a05f1aec5d0637eff79ac",
"text": "We analyze X-COM: UFO Defense and its successful remake XCOM: Enemy Unknown to understand how remakes can repropose a concept across decades, updating most mechanics, and yet retain the dynamic and aesthetic values that defined the original experience. We use gameplay design patterns along with the MDA framework to understand the changes, identifying an unchanged core among a multitude of differences. We argue that two forces polarize the context within which the new game was designed, simultaneously guaranteeing a sameness of experience across the two games and at the same time pushing for radical changes. The first force, which resists the push for an updated experience, can be described as experiential isomorphism, or “sameness of form” in terms of related Gestalt qualities. The second force is generated by the necessity to update the usability of the design, aligning it to a current usability paradigm. We employ game usability heuristics (PLAY) to evaluate aesthetic patterns present in both games, and to understand the implicit vector for change. Our finding is that while patterns on the mechanical and to a slight degree the dynamic levels change between the games, the same aesthetic patterns are present in both, but produced through different means. The method we use offers new understanding of how sequels and remakes of games can change significantly from their originals while still giving rise to similar experiences.",
"title": ""
},
{
"docid": "6998297aeba2e02133a6d62aa94508be",
"text": "License Plate Detection and Recognition System is an image processing technique used to identify a vehicle by its license plate. Here we propose an accurate and robust method of license plate detection and recognition from an image using contour analysis. The system is composed of two phases: the detection of the license plate, and the character recognition. The license plate detection is performed for obtaining the candidate region of the vehicle license plate and determined using the edge based text detection technique. In the recognition phase, the contour analysis is used to recognize the characters after segmenting each character. The performance of the proposed system has been tested on various images and provides better results.",
"title": ""
},
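The record above outlines a two-phase pipeline: edge-based localization of the plate region, then contour analysis to segment characters. The sketch below illustrates those two steps with OpenCV; the thresholds, aspect-ratio limits, and the largest-blob heuristic are illustrative assumptions rather than the paper's exact method, and the OpenCV 4.x return signature of `findContours` is assumed.

```python
# Sketch: edge-based plate localization followed by contour-based character segmentation.
import cv2

def find_plate_region(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                       # edge map for text-like regions
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    best = max(contours, key=cv2.contourArea)                # crude: take the largest edge blob
    x, y, w, h = cv2.boundingRect(best)
    return gray[y:y + h, x:x + w]

def segment_characters(plate_gray):
    _, binary = cv2.threshold(plate_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]
    # keep tall, narrow blobs that look like characters, ordered left to right
    chars = [b for b in boxes if 0.2 < b[2] / float(b[3]) < 1.0]
    return [plate_gray[y:y + h, x:x + w] for x, y, w, h in sorted(chars)]
```

Each cropped character image would then be passed to whatever recognizer (template matching, a small classifier, etc.) the recognition phase uses.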
{
"docid": "cded1e50c211f6912efa7f9a63ffd5a7",
"text": "With the proliferation of e-commerce, a large part of online shopping is attributed to impulse buying. Hence, there is a particular necessity to understand impulse buying in the online context. Impulse shoppers incline to feel unable to control their tendencies and behaviors from various stimuli. Specifically, online consumers are both the impulse shoppers and the system users of websites in the purchase process. Impulse shoppers concern individual traits and system users cover the attributes of online stores. Online impulse buying therefore entails two key drivers, technology use and trust belief, and the mediator of flow experience. Grounding on flow experience, technology-use features, and trust belief, this study",
"title": ""
},
{
"docid": "d4c4b66521b85d579f2886790558cb92",
"text": "Remote patient monitoring generates much more data than healthcare professionals are able to manually interpret. Automated detection of events of interest is therefore critical so that these points in the data can be marked for later review. However, for some important chronic health conditions, such as pain and depression, automated detection is only partially achievable. To assist with this problem we developed HealthSense, a framework for real-time tagging of health-related sensor data. HealthSense transmits sensor data from the patient to a server for analysis via machine learning techniques. The system uses patient input to assist with classification of interesting events (e.g., pain or itching). Due to variations between patients, sensors, and condition types, we presume that our initial classification is imperfect and accommodate this by incorporating user feedback into the machine learning process. This is done by occasionally asking the patient whether they are experiencing the condition being monitored. Their response is used to confirm or reject the classification made by the server and continually improve the accuracy of the classifier's decisions on what data is of interest to the health-care provider.",
"title": ""
},
{
"docid": "a42163e2a6625006d04a9b9f6dddf9ce",
"text": "This paper concludes the theme issue on structural health monitoring (SHM) by discussing the concept of damage prognosis (DP). DP attempts to forecast system performance by assessing the current damage state of the system (i.e. SHM), estimating the future loading environments for that system, and predicting through simulation and past experience the remaining useful life of the system. The successful development of a DP capability will require the further development and integration of many technology areas including both measurement/processing/telemetry hardware and a variety of deterministic and probabilistic predictive modelling capabilities, as well as the ability to quantify the uncertainty in these predictions. The multidisciplinary and challenging nature of the DP problem, its current embryonic state of development, and its tremendous potential for life-safety and economic benefits qualify DP as a 'grand challenge' problem for engineers in the twenty-first century.",
"title": ""
},
{
"docid": "326493520ccb5c8db07362f412f57e62",
"text": "This paper introduces Rank-based Interactive Evolution (RIE) which is an alternative to interactive evolution driven by computational models of user preferences to generate personalized content. In RIE, the computational models are adapted to the preferences of users which, in turn, are used as fitness functions for the optimization of the generated content. The preference models are built via ranking-based preference learning, while the content is generated via evolutionary search. The proposed method is evaluated on the creation of strategy game maps, and its performance is tested using artificial agents. Results suggest that RIE is both faster and more robust than standard interactive evolution and outperforms other state-of-the-art interactive evolution approaches.",
"title": ""
},
{
"docid": "fdf95905dd8d3d8dcb4388ac921b3eaa",
"text": "Relation classification is associated with many potential applications in the artificial intelligence area. Recent approaches usually leverage neural networks based on structure features such as syntactic or dependency features to solve this problem. However, high-cost structure features make such approaches inconvenient to be directly used. In addition, structure features are probably domaindependent. Therefore, this paper proposes a bidirectional long-short-term-memory recurrent-neuralnetwork (Bi-LSTM-RNN) model based on low-cost sequence features to address relation classification. This model divides a sentence or text segment into five parts, namely two target entities and their three contexts. It learns the representations of entities and their contexts, and uses them to classify relations. We evaluate our model on two standard benchmark datasets in different domains, namely SemEval-2010 Task 8 and BioNLP-ST 2016 Task BB3. In the former dataset, our model achieves comparable performance compared with other models using sequence features. In the latter dataset, our model obtains the third best results compared with other models in the official evaluation. Moreover, we find that the context between two target entities plays the most important role in relation classification. Furthermore, statistic experiments show that the context between two target entities can be used as an approximate replacement of the shortest dependency path when dependency parsing is not used.",
"title": ""
},
{
"docid": "119696bc950e1c36fa9d09ee8c1aa6fb",
"text": "A smart grid is an intelligent electricity grid that optimizes the generation, distribution and consumption of electricity through the introduction of Information and Communication Technologies on the electricity grid. In essence, smart grids bring profound changes in the information systems that drive them: new information flows coming from the electricity grid, new players such as decentralized producers of renewable energies, new uses such as electric vehicles and connected houses and new communicating equipments such as smart meters, sensors and remote control points. All this will cause a deluge of data that the energy companies will have to face. Big Data technologies offers suitable solutions for utilities, but the decision about which Big Data technology to use is critical. In this paper, we provide an overview of data management for smart grids, summarise the added value of Big Data technologies for this kind of data, and discuss the technical requirements, the tools and the main steps to implement Big Data solutions in the smart grid context.",
"title": ""
},
{
"docid": "96fe0792e4cf6c88ff7388618eda63ad",
"text": "BACKGROUND\nRecent clinical studies have shown that the dorsal motor nucleus of the vagus nerve is one of the brain areas that are the earliest affected by α-synuclein and Lewy body pathology in Parkinson's disease. This observation raises the question: how the vagus nerve dysfunction affects the dopamine system in the brain?\n\n\nMETHODS\nThe rats underwent surgical implantation of the microchip (MC) in the abdominal region of the vagus. In this study, we examined the effect of chronic, unilateral electrical stimulation of the left nerve vagus, of two different types: low-frequency (MCL) and physiological stimulation (MCPh) on the dopamine and serotonin metabolism determined by high-pressure chromatography with electrochemical detection in rat brain structures.\n\n\nRESULTS\nMCL electrical stimulation of the left nerve vagus in contrast to MCPh stimulation, produced a significant inhibition of dopamine system in rat brain structures. Ex vivo biochemical experiments clearly suggest that MCL opposite to MCPh impaired the function of dopamine system similarly to vagotomy.\n\n\nCONCLUSION\nWe suggest a close relationship between the peripheral vagus nerve impairment and the inhibition of dopamine system in the brain structures. This is the first report of such relationship which may suggest that mental changes (pro-depressive) could occur in the first stage of Parkinson's disease far ahead of motor impairment.",
"title": ""
},
{
"docid": "b07ae3888b52faa598893bbfbf04eae2",
"text": "This paper presents a compliant locomotion framework for torque-controlled humanoids using model-based whole-body control. In order to stabilize the centroidal dynamics during locomotion, we compute linear momentum rate of change objectives using a novel time-varying controller for the Divergent Component of Motion (DCM). Task-space objectives, including the desired momentum rate of change, are tracked using an efficient quadratic program formulation that computes optimal joint torque setpoints given frictional contact constraints and joint position / torque limits. In order to validate the effectiveness of the proposed approach, we demonstrate push recovery and compliant walking using THOR, a 34 DOF humanoid with series elastic actuation. We discuss details leading to the successful implementation of optimization-based whole-body control on our hardware platform, including the design of a “simple” joint impedance controller that introduces inner-loop velocity feedback into the actuator force controller.",
"title": ""
},
{
"docid": "76502e21fbb777a3442928897ef271f0",
"text": "Staphylococcus saprophyticus (S. saprophyticus) is a Gram-positive, coagulase-negative facultative bacterium belongs to Micrococcaceae family. It is a unique uropathogen associated with uncomplicated urinary tract infections (UTIs), especially cystitis in young women. Young women are very susceptible to colonize this organism in the urinary tracts and it is spread through sexual intercourse. S. saprophyticus is the second most common pathogen after Escherichia coli causing 10-20% of all UTIs in sexually active young women [13]. It contains the urease enzymes that hydrolyze the urea to produce ammonia. The urease activity is the main factor for UTIs infection. Apart from urease activity it has numerous transporter systems to adjust against change in pH, osmolarity, and concentration of urea in human urine [2]. After severe infections, it causes various complications such as native valve endocarditis [4], pyelonephritis, septicemia, [5], and nephrolithiasis [6]. About 150 million people are diagnosed with UTIs each year worldwide [7]. Several virulence factors includes due to the adherence to urothelial cells by release of lipoteichoic acid is a surface-associated adhesion amphiphile [8], a hemagglutinin that binds to fibronectin and hemagglutinates sheep erythrocytes [9], a hemolysin; and production of extracellular slime are responsible for resistance properties of S. saprophyticus [1]. Based on literature, S. saprophyticus strains are susceptible to vancomycin, rifampin, gentamicin and amoxicillin-clavulanic, while resistance to other antimicrobials such as erythromycin, clindamycin, fluoroquinolones, chloramphenicol, trimethoprim/sulfamethoxazole, oxacillin, and Abstract",
"title": ""
},
{
"docid": "cd81ad1c571f9e9a80e2d09582b00f9a",
"text": "OBJECTIVE\nThe biologic basis for gender identity is unknown. Research has shown that the ratio of the length of the second and fourth digits (2D:4D) in mammals is influenced by biologic sex in utero, but data on 2D:4D ratios in transgender individuals are scarce and contradictory. We investigated a possible association between 2D:4D ratio and gender identity in our transgender clinic population in Albany, New York.\n\n\nMETHODS\nWe prospectively recruited 118 transgender subjects undergoing hormonal therapy (50 female to male [FTM] and 68 male to female [MTF]) for finger length measurement. The control group consisted of 37 cisgender volunteers (18 females, 19 males). The length of the second and fourth digits were measured using digital calipers. The 2D:4D ratios were calculated and analyzed with unpaired t tests.\n\n\nRESULTS\nFTM subjects had a smaller dominant hand 2D:4D ratio (0.983 ± 0.027) compared to cisgender female controls (0.998 ± 0.021, P = .029), but a ratio similar to control males (0.972 ± 0.036, P =.19). There was no difference in the 2D:4D ratio of MTF subjects (0.978 ± 0.029) compared to cisgender male controls (0.972 ± 0.036, P = .434).\n\n\nCONCLUSION\nOur findings are consistent with a biologic basis for transgender identity and the possibilities that FTM gender identity is affected by prenatal androgen activity but that MTF transgender identity has a different basis.\n\n\nABBREVIATIONS\n2D:4D = 2nd digit to 4th digit; FTM = female to male; MTF = male to female.",
"title": ""
},
{
"docid": "c103e880e75931398787a8228f3f3e6c",
"text": "The hypothesis that dopamine is important for reward has been proposed in a number of forms, each of which has been challenged. Normally, rewarding stimuli such as food, water, lateral hypothalamic brain stimulation and several drugs of abuse become ineffective as rewards in animals given performance-sparing doses of dopamine antagonists. Dopamine release in the nucleus accumbens has been linked to the efficacy of these unconditioned rewards, but dopamine release in a broader range of structures is implicated in the 'stamping-in' of memory that attaches motivational importance to otherwise neutral environmental stimuli.",
"title": ""
},
{
"docid": "8e4bcddbb8b5de8efb3ab0a32c82ca98",
"text": "Cloud Computing is considered as one of the emerging arenas of computer science in recent times. It is providing excellent facilities to business entrepreneurs by flexible infrastructure. Although, cloud computing is facilitating the Information Technology industry, the research and development in this arena is yet to be satisfactory. Our contribution in this paper is an advanced survey focusing on cloud computing concept and most advanced research issues. This paper provides a better understanding of the cloud computing and identifies important research issues in this burgeoning area of computer science.",
"title": ""
}
] |
scidocsrr
|
9bb36937256e01235372572769288507
|
A Hybrid Model Combining Convolutional Neural Network with XGBoost for Predicting Social Media Popularity
|
[
{
"docid": "28c6fd64958a21c54f931f5eb802c814",
"text": "Time information plays a crucial role on social media popularity. Existing research on popularity prediction, effective though, ignores temporal information which is highly related to user-item associations and thus often results in limited success. An essential way is to consider all these factors (user, item, and time), which capture the dynamic nature of photo popularity. In this paper, we present a novel approach to factorize the popularity into user-item context and time-sensitive context for exploring the mechanism of dynamic popularity. The user-item context provides a holistic view of popularity, while the time-sensitive context captures the temporal dynamics nature of popularity. Accordingly, we develop two kinds of time-sensitive features, including user activeness variability and photo prevalence variability. To predict photo popularity, we propose a novel framework named Multi-scale Temporal Decomposition (MTD), which decomposes the popularity matrix in latent spaces based on contextual associations. Specifically, the proposed MTD models time-sensitive context on different time scales, which is beneficial to automatically learn temporal patterns. Based on the experiments conducted on a real-world dataset with 1.29M photos from Flickr, our proposed MTD can achieve the prediction accuracy of 79.8% and outperform the best three state-of-the-art methods with a relative improvement of 9.6% on average.",
"title": ""
}
] |
[
{
"docid": "0b56f9c9ec0ce1db8dcbfd2830b2536b",
"text": "In many statistical problems, a more coarse-grained model may be suitable for population-level behaviour, whereas a more detailed model is appropriate for accurate modelling of individual behaviour. This raises the question of how to integrate both types of models. Methods such as posterior regularization follow the idea of generalized moment matching, in that they allow matching expectations between two models, but sometimes both models are most conveniently expressed as latent variable models. We propose latent Bayesian melding, which is motivated by averaging the distributions over populations statistics of both the individual-level and the population-level models under a logarithmic opinion pool framework. In a case study on electricity disaggregation, which is a type of singlechannel blind source separation problem, we show that latent Bayesian melding leads to significantly more accurate predictions than an approach based solely on generalized moment matching.",
"title": ""
},
{
"docid": "f6d3157155868f5fafe2533dfd8768b8",
"text": "Over the past few years, the task of conceiving effective attacks to complex networks has arisen as an optimization problem. Attacks are modelled as the process of removing a number k of vertices, from the graph that represents the network, and the goal is to maximise or minimise the value of a predefined metric over the graph. In this work, we present an optimization problem that concerns the selection of nodes to be removed to minimise the maximum betweenness centrality value of the residual graph. This metric evaluates the participation of the nodes in the communications through the shortest paths of the network. To address the problem we propose an artificial bee colony algorithm, which is a swarm intelligence approach inspired in the foraging behaviour of honeybees. In this framework, bees produce new candidate solutions for the problem by exploring the vicinity of previous ones, called food sources. The proposed method exploits useful problem knowledge in this neighbourhood exploration by considering the partial destruction and heuristic reconstruction of selected solutions. The performance of the method, with respect to other models from the literature that can be adapted to face this problem, such as sequential centrality-based attacks, module-based attacks, a genetic algorithm, a simulated annealing approach, and a variable neighbourhood search, is empirically shown. E-mail addresses: lozano@decsai.ugr.es (M. Lozano), cgarcia@uco.es (C. GarćıaMart́ınez), fjrodriguez@unex.es (F.J. Rodŕıguez), humberto@ugr.es (H.M. Trujillo). Preprint submitted to Information Sciences August 17, 2016 *Manuscript (including abstract) Click here to view linked References",
"title": ""
},
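The objective in the passage above is to pick k nodes whose removal minimises the maximum betweenness centrality of the residual graph. The sketch below implements a greedy sequential centrality-based attack, one of the baseline comparison methods mentioned, not the paper's artificial bee colony algorithm; `networkx` and the toy random graph are assumptions for illustration.

```python
# Sketch: greedy node removal that minimises the residual graph's max betweenness.
import networkx as nx

def max_betweenness(graph):
    return max(nx.betweenness_centrality(graph).values(), default=0.0)

def greedy_attack(graph, k):
    g = graph.copy()
    removed = []
    for _ in range(k):
        # pick the node whose removal gives the smallest residual max betweenness
        best = min(g.nodes,
                   key=lambda v: max_betweenness(g.subgraph(set(g.nodes) - {v})))
        removed.append(best)
        g.remove_node(best)
    return removed, max_betweenness(g)

g = nx.erdos_renyi_graph(60, 0.08, seed=1)   # toy network for illustration
nodes, objective = greedy_attack(g, k=5)
print(nodes, objective)
```

This greedy baseline recomputes betweenness once per candidate per step, which is exactly the kind of cost that motivates metaheuristics such as the bee colony approach on larger networks.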
{
"docid": "e5d523d8a1f584421dab2eeb269cd303",
"text": "In this paper, we propose a novel appearance-based method for person re-identification, that condenses a set of frames of the same individual into a highly informative signature, called Histogram Plus Epitome, HPE. It incorporates complementary global and local statistical descriptions of the human appearance, focusing on the overall chromatic content, via histograms representation, and on the presence of recurrent local patches, via epitome estimation. The matching of HPEs provides optimal performances against low resolution, occlusions, pose and illumination variations, defining novel state-of-the-art results on all the datasets considered.",
"title": ""
},
{
"docid": "4776f37d50709362b6173de58f6badd4",
"text": "Current object recognition systems aim at recognizing numerous object classes under limited supervision conditions. This paper provides a benchmark for evaluating progress on this fundamental task. Several methods have recently proposed to utilize the commonalities between object classes in order to improve generalization accuracy. Such methods can be termed interclass transfer techniques. However, it is currently difficult to asses which of the proposed methods maximally utilizes the shared structure of related classes. In order to facilitate the development, as well as the assessment of methods for dealing with multiple related classes, a new dataset including images of several hundred mammal classes, is provided, together with preliminary results of its use. The images in this dataset are organized into five levels of variability, and their labels include information on the objects’ identity, location and pose. From this dataset, a classification benchmark has been derived, requiring fine distinctions between 72 mammal classes. It is then demonstrated that a recognition method which is highly successful on the Caltech101, attains limited accuracy on the current benchmark (36.5%). Since this method does not utilize the shared structure between classes, the question remains as to whether interclass transfer methods can increase the accuracy to the level of human performance (90%). We suggest that a labeled benchmark of the type provided, containing a large number of related classes is crucial for the development and evaluation of classification methods which make efficient use of interclass transfer.",
"title": ""
},
{
"docid": "efa566cdd4f5fa3cb12a775126377cb5",
"text": "This paper deals with the electromagnetic emissions of integrated circuits. In particular, four measurement techniques to evaluate integrated circuit conducted emissions are described in detail and they are employed for the measurement of the power supply conducted emission delivered by a simple integrated circuit composed of six synchronous switching drivers. Experimental results obtained by employing such measurement methods are presented and the influence of each test setup on the measured quantities is discussed.",
"title": ""
},
{
"docid": "e144521f4edf21916991590e173b4cf9",
"text": "We demonstrated a high-yield and easily reproducible synthesis of a highly active oxygen evolution reaction (OER) catalyst, \"the core-oxidized amorphous cobalt phosphide nanostructures\". The rational formation of such core-oxidized amorphous cobalt phosphide nanostructures was accomplished by homogenization, drying, and annealing of a cobalt(II) acetate and sodium hypophosphite mixture taken in the weight ratio of 1:10 in an open atmosphere. Electrocatalytic studies were carried out on the same mixture and in comparison with commercial catalysts, viz., Co3O4-Sigma, NiO-Sigma, and RuO2-Sigma, have shown that our catalyst is superior to all three commercial catalysts in terms of having very low overpotential (287 mV at 10 mA cm-2), lower Tafel slope (0.070 V dec-1), good stability upon constant potential electrolysis, and accelerated degradation tests along with a significantly higher mass activity of 300 A g-1 at an overpotential of 360 mV. The synergism between the amorphous CoxPy shell with the Co3O4 core is attributed to the observed enhancement in the OER performance of our catalyst. Moreover, detailed literature has revealed that our catalyst is superior to most of the earlier reports.",
"title": ""
},
{
"docid": "3380a9a220e553d9f7358739e3f28264",
"text": "We present a multi-instance object segmentation algorithm to tackle occlusions. As an object is split into two parts by an occluder, it is nearly impossible to group the two separate regions into an instance by purely bottomup schemes. To address this problem, we propose to incorporate top-down category specific reasoning and shape prediction through exemplars into an intuitive energy minimization framework. We perform extensive evaluations of our method on the challenging PASCAL VOC 2012 segmentation set. The proposed algorithm achieves favorable results on the joint detection and segmentation task against the state-of-the-art method both quantitatively and qualitatively.",
"title": ""
},
{
"docid": "8ec7edd2d963501b714be80cb2ea8535",
"text": "The problem of recognizing text in images taken in the wild ha s g ined significant attention from the computer vision community in recent years. The scene text recognition task is more challenging compare d to the traditional problem of recognizing text in printed documents. We focus on this problem, and recognize text extracted from natural sce ne images and the web. Significant attempts have been made to address this p roblem in the recent past, for example [1, 2]. However, many of these wo rks benefit from the availability of strong context, which naturall y imits their applicability. In this work, we present a framework to overc ome these restrictions. Our model introduces a higher order prior com puted from an English dictionary to recognize a word, which may or may not b e a part of the dictionary. We present experimental analysis on stan dard as well as new benchmark datasets. The main contributions of this work are: (1) We present a fram ework, which incorporates higher order statistical language mode ls to recognize words in an unconstrained manner, i.e. we overcome the need for restricted word lists. (2) We achieve significant improvement (more than 20%) in word recognition accuracies in a general setting. (3 ) We introduce a large word recognition dataset (atleast 5 times large r than other public datasets) with character level annotation and bench mark it.",
"title": ""
},
{
"docid": "c4fcd7db5f5ba480d7b3ecc46bef29f6",
"text": "In this paper, we propose an indoor action detection system which can automatically keep the log of users' activities of daily life since each activity generally consists of a number of actions. The hardware setting here adopts top-view depth cameras which makes our system less privacy sensitive and less annoying to the users, too. We regard the series of images of an action as a set of key-poses in images of the interested user which are arranged in a certain temporal order and use the latent SVM framework to jointly learn the appearance of the key-poses and the temporal locations of the key-poses. In this work, two kinds of features are proposed. The first is the histogram of depth difference value which can encode the shape of the human poses. The second is the location-signified feature which can capture the spatial relations among the person, floor, and other static objects. Moreover, we find that some incorrect detection results of certain type of action are usually associated with another certain type of action. Therefore, we design an algorithm that tries to automatically discover the action pairs which are the most difficult to be differentiable, and suppress the incorrect detection outcomes. To validate our system, experiments have been conducted, and the experimental results have shown effectiveness and robustness of our proposed method.",
"title": ""
},
{
"docid": "29c91c8d6f7faed5d23126482a2f553b",
"text": "In this article, we present an account of the state of the art in acoustic scene classification (ASC), the task of classifying environments from the sounds they produce. Starting from a historical review of previous research in this area, we define a general framework for ASC and present different implementations of its components. We then describe a range of different algorithms submitted for a data challenge that was held to provide a general and fair benchmark for ASC techniques. The data set recorded for this purpose is presented along with the performance metrics that are used to evaluate the algorithms and statistical significance tests to compare the submitted methods.",
"title": ""
},
{
"docid": "0c8947cbaa2226a024bf3c93541dcae1",
"text": "As storage systems grow in size and complexity, they are increasingly confronted with concurrent disk failures together with multiple unrecoverable sector errors. To ensure high data reliability and availability, erasure codes with high fault tolerance are required. In this article, we present a new family of erasure codes with high fault tolerance, named GRID codes. They are called such because they are a family of strip-based codes whose strips are arranged into multi-dimensional grids. In the construction of GRID codes, we first introduce a concept of matched codes and then discuss how to use matched codes to construct GRID codes. In addition, we propose an iterative reconstruction algorithm for GRID codes. We also discuss some important features of GRID codes. Finally, we compare GRID codes with several categories of existing codes. Our comparisons show that for large-scale storage systems, our GRID codes have attractive advantages over many existing erasure codes: (a) They are completely XOR-based and have very regular structures, ensuring easy implementation; (b) they can provide up to 15 and even higher fault tolerance; and (c) their storage efficiency can reach up to 80% and even higher. All the advantages make GRID codes more suitable for large-scale storage systems.",
"title": ""
},
{
"docid": "4806b28786af042c23897dbf23802789",
"text": "With the rapidly increasing popularity of deep neural networks for image recognition tasks, a parallel interest in generating adversarial examples to attack the trained models has arisen. To date, these approaches have involved either directly computing gradients with respect to the image pixels or directly solving an optimization on the image pixels. We generalize this pursuit in a novel direction: can a separate network be trained to efficiently attack another fully trained network? We demonstrate that it is possible, and that the generated attacks yield startling insights into the weaknesses of the target network. We call such a network an Adversarial Transformation Network (ATN). ATNs transform any input into an adversarial attack on the target network, while being minimally perturbing to the original inputs and the target network’s outputs. Further, we show that ATNs are capable of not only causing the target network to make an error, but can be constructed to explicitly control the type of misclassification made. We demonstrate ATNs on both simple MNISTdigit classifiers and state-of-the-art ImageNet classifiers deployed by Google, Inc.: Inception ResNet-v2. With the resurgence of deep neural networks for many real-world classification tasks, there is an increased interest in methods to assess the weaknesses in the trained models. Adversarial examples are small perturbations of the inputs that are carefully crafted to fool the network into producing incorrect outputs. Seminal work by (Szegedy et al. 2013) and (Goodfellow, Shlens, and Szegedy 2014), as well as much recent work, has shown that adversarial examples are abundant, and that there are many ways to discover them. Given a classifier f(x) : x ∈ X → y ∈ Y and original inputs x ∈ X , the problem of generating untargeted adversarial examples can be expressed as the optimization: argminx∗ L(x,x ∗) s.t. f(x∗) = f(x), where L(·) is a distance metric between examples from the input space (e.g., the L2 norm). Similarly, generating a targeted adversarial attack on a classifier can be expressed as argminx∗ L(x,x ∗) s.t. f(x∗) = yt, where yt ∈ Y is some target label chosen by the attacker. Until now, these optimization problems have been solved using three broad approaches: (1) By directly using optimizers like L-BFGS or Adam (Kingma and Ba 2015), as Copyright c © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. proposed in (Szegedy et al. 2013) and (Carlini and Wagner 2016). (2) By approximation with single-step gradient-based techniques like fast gradient sign (Goodfellow, Shlens, and Szegedy 2014) or fast least likely class (Kurakin, Goodfellow, and Bengio 2016). (3) By approximation with iterative variants of gradient-based techniques (Kurakin, Goodfellow, and Bengio 2016; Moosavi-Dezfooli et al. 2016; Moosavi-Dezfooli, Fawzi, and Frossard 2016). These approaches use multiple forward and backward passes through the target network to more carefully move an input towards an adversarial classification. Other approaches assume a black-box model and only having access to the target model’s output (Papernot et al. 2016; Baluja, Covell, and Sukthankar 2015; Tramèr et al. 2016). See (Papernot et al. 2015) for a discussion of threat models. Each of the above approaches solved an optimization problem such that a single set of inputs was perturbed enough to force the target network to make a mistake. 
We take a fundamentally different approach: given a welltrained target network, can we create a separate, attacknetwork that, with high probability, minimally transforms all inputs into ones that will be misclassified? No per-sample optimization problems should be solved. The attack-network should take as input a clean image and output a minimally modified image that will cause a misclassification in the target network. Further, can we do this while imposing strict constraints on the types and amount of perturbations allowed? We introduce a class of networks, called Adversarial Transformation Networks, to efficiently address this task. Adversarial Transformation Networks In this work, we propose Adversarial Transformation Networks (ATNs). An ATN is a neural network that transforms an input into an adversarial example against a target network or set of networks. ATNs may be untargeted or targeted, and trained in a black-box or white-box manner. In this work, we will focus on targeted, white-box ATNs. Formally, an ATN can be defined as a neural network: gf,θ(x) : x ∈ X → x′ (1) where θ is the parameter vector of g, f is the target network which outputs a probability distribution across class labels, and x′ ∼ x, but argmax f(x) = argmax f(x′). The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18)",
"title": ""
},
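The passage above defines a targeted ATN g_{f,θ} that perturbs an input only slightly while changing the frozen target classifier's prediction to a chosen label. The sketch below shows one plausible training step for such a network in PyTorch; the architecture, the β reconstruction weight, and the flattened-MNIST setup are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: training a targeted Adversarial Transformation Network (ATN).
import torch
import torch.nn as nn

class ATN(nn.Module):
    def __init__(self, dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 512), nn.ReLU(), nn.Linear(512, dim))

    def forward(self, x):
        # residual perturbation, squashed back into the valid pixel range
        return torch.clamp(x + 0.1 * torch.tanh(self.net(x)), 0.0, 1.0)

def train_step(atn, target_f, x, target_label, optimizer, beta=0.1):
    target_f.eval()                              # target classifier is never updated
    x_adv = atn(x)
    logits = target_f(x_adv)                     # gradients flow through f into the ATN
    y = torch.full((x.size(0),), target_label, dtype=torch.long)
    loss = nn.functional.cross_entropy(logits, y) + beta * ((x_adv - x) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                             # optimizer holds only the ATN's parameters
    return loss.item()

# usage sketch:
# atn = ATN(); opt = torch.optim.Adam(atn.parameters(), lr=1e-3)
# for x, _ in loader:
#     train_step(atn, pretrained_classifier, x.view(x.size(0), -1), target_label=7, optimizer=opt)
```

The cross-entropy term pushes every transformed input toward the chosen target class, while the L2 term keeps x′ close to x, mirroring the closeness-plus-misclassification trade-off in the definition above.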
{
"docid": "8dc50e5d77db50332c06684cac3e5c01",
"text": "BACKGROUND\nRhodiola rosea (R. rosea) is grown at high altitudes and northern latitudes. Due to its purported adaptogenic properties, it has been studied for its performance-enhancing capabilities in healthy populations and its therapeutic properties in a number of clinical populations. To systematically review evidence of efficacy and safety of R. rosea for physical and mental fatigue.\n\n\nMETHODS\nSix electronic databases were searched to identify randomized controlled trials (RCTs) and controlled clinical trials (CCTs), evaluating efficacy and safety of R. rosea for physical and mental fatigue. Two reviewers independently screened the identified literature, extracted data and assessed risk of bias for included studies.\n\n\nRESULTS\nOf 206 articles identified in the search, 11 met inclusion criteria for this review. Ten were described as RCTs and one as a CCT. Two of six trials examining physical fatigue in healthy populations report R. rosea to be effective as did three of five RCTs evaluating R. rosea for mental fatigue. All of the included studies exhibit either a high risk of bias or have reporting flaws that hinder assessment of their true validity (unclear risk of bias).\n\n\nCONCLUSION\nResearch regarding R. rosea efficacy is contradictory. While some evidence suggests that the herb may be helpful for enhancing physical performance and alleviating mental fatigue, methodological flaws limit accurate assessment of efficacy. A rigorously-designed well reported RCT that minimizes bias is needed to determine true efficacy of R. rosea for fatigue.",
"title": ""
},
{
"docid": "f8aeaf04486bdbc7254846d95e3cab24",
"text": "In this paper, we present a novel wearable RGBD camera based navigation system for the visually impaired. The system is composed of a smartphone user interface, a glass-mounted RGBD camera device, a real-time navigation algorithm, and haptic feedback system. A smartphone interface provides an effective way to communicate to the system using audio and haptic feedback. In order to extract orientational information of the blind users, the navigation algorithm performs real-time 6-DOF feature based visual odometry using a glass-mounted RGBD camera as an input device. The navigation algorithm also builds a 3D voxel map of the environment and analyzes 3D traversability. A path planner of the navigation algorithm integrates information from the egomotion estimation and mapping and generates a safe and an efficient path to a waypoint delivered to the haptic feedback system. The haptic feedback system consisting of four micro-vibration motors is designed to guide the visually impaired user along the computed path and to minimize cognitive loads. The proposed system achieves real-time performance faster than 30Hz in average on a laptop, and helps the visually impaired extends the range of their activities and improve the mobility performance in a cluttered environment. The experiment results show that navigation in indoor environments with the proposed system avoids collisions successfully and improves mobility performance of the user compared to conventional and state-of-the-art mobility aid devices.",
"title": ""
},
{
"docid": "b3b050c35a1517dc52351cd917d0665a",
"text": "The amount of information shared via social media is rapidly increasing amid growing concerns over online privacy. This study investigates the effect of controversiality and social endorsement of media content on sharing behavior when choosing between sharing publicly or anonymously. Anonymous sharing is found to be a popular choice (59% of shares), especially for controversial content which is 3.2x more likely to be shard anonymously. Social endorsement was not found to affect sharing behavior, except for sports-related content. Implications for social media interface design are dis-",
"title": ""
},
{
"docid": "dc207fb8426f468dde2cb1d804b33539",
"text": "This paper presents a webcam-based spherical coordinate conversion system using OpenCL massive parallel computing for panorama video image stitching. With multi-core architecture and its high-bandwidth data transmission rate of memory accesses, modern programmable GPU makes it possible to process multiple video images in parallel for real-time interaction. To get a panorama view of 360 degrees, we use OpenCL to stitch multiple webcam video images into a panorama image and texture mapped it to a spherical object to compose a virtual reality immersive environment. The experimental results show that when we use NVIDIA 9600GT to process eight 640×480 images, OpenCL can achieve ninety times speedups.",
"title": ""
},
{
"docid": "43741bb21c47889b7b0d8de372a4dacd",
"text": "Indoor localization or zonification in disaster affected settings is a challenging research problem. Existing studies encompass localization and tracking of first-responders or fire fighters using wireless sensor networks. In addition to that, fast evacuation, routing, and planning have also been proposed. However, the problem of locating survivors or victims is yet to be explored to the full potential. State-of-the-art literature often employ infrastructure dependent solutions, for example, WiFi localization using WiFi access points exploiting fingerprinting techniques, Pedestrian Dead Reckoning (PDR) starting from known locations, etc. Owing to unpredictable and dynamic nature of disaster affected environments, infrastructure dependent solutions are seldom useful. Therefore, in this study, we propose an ad hoc WiFi zonification technique (named as AWZone) that is independent of any infrastructural settings. AWZone attempts to perform localization through exploiting commodity smartphones as a beaconing device and successively searching and narrowing down the search space. We perform two testbed experiments. The results reveal that, for a single survivor or victim, AWZone can identify the search space and estimate a location with an approximate 1.5m localization error through eliminating incorrect zones from a set of possible results.",
"title": ""
},
{
"docid": "f2b1f83a02f7fa226bb7e515790d98d9",
"text": "Data analytics using machine learning (ML) has become ubiquitous in science, business intelligence, journalism and many other domains. While a lot of work focuses on reducing the training cost, inference runtime and storage cost of ML models, little work studies how to reduce the cost of data acquisition, which potentially leads to a loss of sellers’ revenue and buyers’ affordability and efficiency. In this paper, we propose a model-based pricing (MBP) framework, which instead of pricing the data, directly prices ML model instances. We first formally describe the desired properties of the MBP framework, with a focus on avoiding arbitrage. Next, we show a concrete realization of the MBP framework via a noise injection approach, which provably satisfies the desired formal properties. Based on the proposed framework, we then provide algorithmic solutions on how the seller can assign prices to models under different market scenarios (such as to maximize revenue). Finally, we conduct extensive experiments, which validate that the MBP framework can provide high revenue to the seller, high affordability to the buyer, and also operate on low runtime cost.",
"title": ""
},
{
"docid": "086886072f3ac6908bd47822ce7398d1",
"text": "This paper presents a methodology to accurately record human finger postures during grasping. The main contribution consists of a kinematic model of the human hand reconstructed via magnetic resonance imaging of one subject that (i) is fully parameterized and can be adapted to different subjects, and (ii) is amenable to in-vivo joint angle recordings via optical tracking of markers attached to the skin. The principal novelty here is the introduction of a soft-tissue artifact compensation mechanism that can be optimally calibrated in a systematic way. The high-quality data gathered are employed to study the properties of hand postural synergies in humans, for the sake of ongoing neuroscience investigations. These data are analyzed and some comparisons with similar studies are reported. After a meaningful mapping strategy has been devised, these data could be employed to define robotic hand postures suitable to attain effective grasps, or could be used as prior knowledge in lower-dimensional, real-time avatar hand animation.",
"title": ""
}
] |
scidocsrr
|
d69a327677e66fec2738ecfaea5802f8
|
Unsupervised RGBD Video Object Segmentation Using GANs
|
[
{
"docid": "899e3e436cdaed9efb66b7c9c296ea90",
"text": "Background estimation and foreground segmentation are important steps in many high-level vision tasks. Many existing methods estimate background as a low-rank component and foreground as a sparse matrix without incorporating the structural information. Therefore, these algorithms exhibit degraded performance in the presence of dynamic backgrounds, photometric variations, jitter, shadows, and large occlusions. We observe that these backgrounds often span multiple manifolds. Therefore, constraints that ensure continuity on those manifolds will result in better background estimation. Hence, we propose to incorporate the spatial and temporal sparse subspace clustering into the robust principal component analysis (RPCA) framework. To that end, we compute a spatial and temporal graph for a given sequence using motion-aware correlation coefficient. The information captured by both graphs is utilized by estimating the proximity matrices using both the normalized Euclidean and geodesic distances. The low-rank component must be able to efficiently partition the spatiotemporal graphs using these Laplacian matrices. Embedded with the RPCA objective function, these Laplacian matrices constrain the background model to be spatially and temporally consistent, both on linear and nonlinear manifolds. The solution of the proposed objective function is computed by using the linearized alternating direction method with adaptive penalty optimization scheme. Experiments are performed on challenging sequences from five publicly available datasets and are compared with the 23 existing state-of-the-art methods. The results demonstrate excellent performance of the proposed algorithm for both the background estimation and foreground segmentation.",
"title": ""
}
] |
[
{
"docid": "25b9d86bbeeae349da420edaef200424",
"text": "The plant Stevia rebaudiana is well-known due to the sweet-tasting ent-kaurene diterpenoid glycosides. Stevioside and rebaudioside A are the most abundant and best analyzed, but more than 30 additional steviol glycosides have been described in the scientific literature to date. Most of them were detected in the last two years. This paper reviews these new compounds and provides an overview about novel trends in their determination, separation, analysis, detection, and quantification. The detection and analysis of further constituents such as nonglycosidic diterpenes, flavonoids, chlorogenic acids, vitamins, nutrients, and miscellaneous minor compounds in the leaves of Stevia rebaudiana are reviewed as well. A critical review of the antioxidant capacity of Stevia leaves and its analysis is also included. These different aspects are discussed in consideration of the scientific literature of the last 10 years.",
"title": ""
},
{
"docid": "d82c1a529aa8e059834bc487fcfebd24",
"text": "Web attacks are nowadays one of the major threats on the Internet, and several studies have analyzed them, providing details on how they are performed and how they spread. However, no study seems to have sufficiently analyzed the typical behavior of an attacker after a website has been",
"title": ""
},
{
"docid": "e947cf1b4670c10f2453b9012078c3b5",
"text": "BACKGROUND\nDyadic suicide pacts are cases in which two individuals (and very rarely more) agree to die together. These account for fewer than 1% of all completed suicides.\n\n\nOBJECTIVE\nThe authors describe two men in a long-term domestic partnership who entered into a suicide pact and, despite utilizing a high-lethality method (simultaneous arm amputation with a power saw), survived.\n\n\nMETHOD\nThe authors investigated the psychiatric, psychological, and social causes of suicide pacts by delving into the history of these two participants, who displayed a very high degree of suicidal intent. Psychiatric interviews and a family conference call, along with the strong support of one patient's family, were elicited.\n\n\nRESULTS\nThe patients, both HIV-positive, showed high levels of depression and hopelessness, as well as social isolation and financial hardship. With the support of his family, one patient was discharged to their care, while the other partner was hospitalized pending reunion with his partner.\n\n\nDISCUSSION\nThis case illustrates many of the key, defining features of suicide pacts that are carried out and also highlights the nature of the dependency relationship.",
"title": ""
},
{
"docid": "bf08bc98eb9ef7a18163fc310b10bcf6",
"text": "An ultra-low voltage, low power, low line sensitivity MOSFET-only sub-threshold voltage reference with no amplifiers is presented. The low sensitivity is realized by the difference between two complementary currents and second-order compensation improves the temperature stability. The bulk-driven technique is used and most of the transistors work in the sub-threshold region, which allow a remarkable reduction in the minimum supply voltage and power consumption. Moreover, a trimming circuit is adopted to compensate the process-related reference voltage variation while the line sensitivity is not affected. The proposed voltage reference has been fabricated in the 0.18 μm 1.8 V CMOS process. The measurement results show that the reference could operate on a 0.45 V supply voltage. For supply voltages ranging from 0.45 to 1.8 V the power consumption is 15.6 nW, and the average temperature coefficient is 59.4 ppm/°C across a temperature range of -40 to 85 °C and a mean line sensitivity of 0.033%. The power supply rejection ratio measured at 100 Hz is -50.3 dB. In addition, the chip area is 0.013 mm2.",
"title": ""
},
{
"docid": "a6247333d00afb3b79cb93c2a036062b",
"text": "Privacy decision making can be surprising or even appear contradictory: we feel entitled to protection of information about ourselves that we do not control, yet willingly trade away the same information for small rewards; we worry about privacy invasions of little significance, yet overlook those that may cause significant damages. Dichotomies between attitudes and behaviors, inconsistencies in discounting future costs or rewards, and other systematic behavioral biases have long been studied in the psychology and behavioral economics literatures. In this paper we draw from those literatures to discuss the role of uncertainty, ambiguity, and behavioral biases in privacy decision making.",
"title": ""
},
{
"docid": "6f66eebbe5408c3f4d5118b639fcfec0",
"text": "Various types of incidents and disasters cause huge loss to people's lives and property every year and highlight the need to improve our capabilities to handle natural, health, and manmade emergencies. How to develop emergency management systems that can provide critical decision support to emergency management personnel is considered a crucial issue by researchers and practitioners. Governments, such as the USA, the European Commission, and China, have recognized the importance of emergency management and funded national level emergency management projects during the past decade. Multi-criteria decision making (MCDM) refers to the study of methods and procedures by which concerns about multiple and often competing criteria can be formally incorporated into the management planning process. Over the years, it has evolved as an important field of Operations Research, focusing on issues as: analyzing and evaluating of incompatible criteria and alternatives; modeling decision makers' preferences; developing MCDM-based decision support systems; designing MCDM research paradigm; identifying compromising solutions of multi-criteria decision making problems. İn emergency management, MCDM can be used to evaluate decision alternatives and assist decision makers in making immediate and effective responses under pressures and uncertainties. However, although various approaches and technologies have been developed in the MCDM field to handle decision problems with conflicting criteria in some domains, effective decision support in emergency management requires in depth analysis of current MCDM methods and techniques, and adaptation of these techniques specifically for emergency management. In terms of this basic fact, the guest editors determined that the purpose of this special issue should be “to assess the current state of knowledge about MCDM in emergency management and to generate and throw open for discussion, more ideas, hypotheses and theories, the specific objective being to determine directions for further research”. For this purpose, this special issue presents some new progress about MCDM in emergency management that is expected to trigger thought and deepen further research. For this purpose, 11 papers [1–11] were selected from 41 submissions related to MCDM in emergency management from different countries and regions. All the selected papers went through a standard review process of the journal and the authors of all the papers made necessary revision in terms of reviewing comments. In the selected 11 papers, they can be divided into three categories. The first category focuses on innovative MCDM methods for logistics management, which includes 3 papers. The first paper written by Liberatore et al. [1] is to propose a hierarchical compromise model called RecHADS method for the joint optimization of recovery operations and distribution of emergency goods in humanitarian logistics. In the second paper, Peng et al. [2] applies a system dynamics disruption analysis approach for inventory and logistics planning in the post-seismic supply chain risk management. In the third paper, Rath and Gutjahr [3] present an exact solution method and a mathheuristic method to solve the warehouse location routing problem in disaster relief and obtained the good performance. In the second category, 4 papers about the MCDM-based risk assessment and risk decision-making methods in emergency response and emergency management are selected. 
In terms of the previous order, the fourth paper [4] is to integrate TODIM method and FSE method to formulate a new TODIM-FSE method for risk decision-making support in oil spill response. The fifth paper [5] is to utilize a fault tree analysis (FTA) method to give a risk decision-making solution to emergency response, especially in the case of the H1N1 infectious diseases. Similarly, the sixth paper [6] focuses on an analytic network process (ANP) method for risk assessment and decision analysis, and while the seventh paper [7] applies cumulative prospect theory (CPT) method to risk decision analysis in emergency response. The papers in the third category emphasize on the MCDM methods for disaster assessment and emergence management and four papers are included into this category. In the similar order, the eighth paper [8] is to propose a multi-event and multi-criteria method to evaluate the situation of disaster resilience. In the ninth paper, Kou et al. [9] develop an integrated expert system for fast disaster assessment and obtain the good evaluation performance. Similarly, the 10th paper [10] proposes a multi-objective programming approach to make the optimal decisions for oil-importing plan considering country risk with extreme events. Finally, the last paper [11] in this special issue is to develop a community-based collaborative information system to manage natural and manmade disasters. The guest editors hope that the papers published in this special issue would be of value to academic researchers and business practitioners and would provide a clearer sense of direction for further research, as well as facilitating use of existing methodologies in a more productive manner. The guest editors would like to place on record their sincere thanks to Prof. Stefan Nickel, the Editor-in-Chief of Computers & Operations Research, for this very special opportunity provided to us for contributing to this special issue. The guest editors have to thank all the referees for their kind support and help. Last, but not least, the guest editors would express the gratitude to all authors of submissions in this special issue for their contribution. Without the support of the authors and the referees, it would have been",
"title": ""
},
{
"docid": "fdc16a2774921124576c8399de2701d4",
"text": "This paper discusses a method of frequency-shift keying (FSK) demodulation and Manchester-bit decoding using a digital signal processing (DSP) approach. The demodulator is implemented on a single-channel high-speed digital radio board. The board architecture contains a high-speed A/D converter, a digital receiver chip, a host DSP processing chip, and a back-end D/A converter [2]. The demodulator software is booted off an on-board EPROM and run on the DSP chip [3]. The algorithm accepts complex digital baseband data available from the front-end digital receiver chip [2]. The target FSK modulation is assumed to be in the RF range (VHF or UHF signals). A block diagram of the single-channel digital radio is shown in Figure 1 [2].",
"title": ""
},
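The passage above describes demodulating FSK from complex baseband samples in software and then decoding Manchester bits. Below is a minimal numpy sketch of one common approach, a phase-discriminator frequency estimator followed by symbol slicing; the sample rate, chip rate, symmetric frequency deviation, and chip alignment are illustrative assumptions, not the board's actual design.

```python
# Sketch: FSK demodulation of complex baseband via instantaneous frequency,
# followed by a simple Manchester decode of the recovered half-bit chips.
import numpy as np

def fsk_demodulate(iq, fs, chip_rate):
    # instantaneous frequency (Hz) from the conjugate-product phase discriminator
    inst_freq = np.angle(iq[1:] * np.conj(iq[:-1])) * fs / (2 * np.pi)
    spc = int(fs / chip_rate)                          # samples per Manchester chip
    n_chips = len(inst_freq) // spc
    chips = inst_freq[:n_chips * spc].reshape(n_chips, spc)
    return (chips.mean(axis=1) > 0).astype(np.uint8)   # mark/space decision at 0 Hz

def manchester_decode(chips):
    # G.E. Thomas convention assumed: bit 1 -> chip pair (1, 0), bit 0 -> (0, 1);
    # chip alignment is assumed correct (a real receiver needs symbol timing recovery).
    pairs = chips[: len(chips) // 2 * 2].reshape(-1, 2)
    valid = pairs[:, 0] != pairs[:, 1]                  # every Manchester bit has a transition
    return pairs[valid, 0]

# usage sketch: bits = manchester_decode(fsk_demodulate(iq_samples, fs=48_000, chip_rate=2_400))
```

The 0 Hz decision threshold assumes the two FSK tones sit symmetrically around the baseband center; an asymmetric deviation would just shift that threshold.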
{
"docid": "6ec83bd04d6af27355d5906ca81c9d8f",
"text": "Perhaps a few words might be inserted here to avoid In parametric curve interpolation, the choice of the any possible confusion. In the usual function interpolation interpolating nodes makes a great deal of difference in the resulting curve. Uniform parametrization is generally setting, the problem is of the form P~ = (x, y~) where the x~ are increasing, and one seeks a real-valued unsatisfactory. It is often suggested that a good choice polynomial y = y(x) so that y(x~)= y~. This is identical of nodes is the cumulative chord length parametrization. to the vector-valued polynomial Examples presented here, however, show that this is not so. Heuristic reasoning based on a physical analogy leads P(x) = (x, y(x)) to a third parametrization, (the \"centripetal model'), which almost invariably results in better shapes than with x as the parameter, except with the important either the chord length or the uniform parametrization. distinction that here the interpolating conditions As with the previous two methods, this method is \"global'and is 'invariant\" under similarity transformations, y(x~) = y~ are (It turns out that, in some sense, the method has been anticipated in a paper by Hosaka and Kimura.) P(x~) = P~, 0 <~ i <~ n",
"title": ""
},
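The abstract above compares uniform, cumulative chord length, and centripetal node choices. The sketch below computes all three node sequences for a set of data points; it is an illustration of the standard definitions (with the conventional square-root exponent for the centripetal model), not code from the paper.

```python
import numpy as np

def interpolation_nodes(points, method="centripetal"):
    """Return normalized parameter values t_0..t_n for the given data points.

    points : (n+1, d) array of interpolation points P_i
    method : "uniform", "chord" (cumulative chord length) or "centripetal"
    """
    points = np.asarray(points, dtype=float)
    n = len(points)
    if method == "uniform":
        return np.linspace(0.0, 1.0, n)

    dist = np.linalg.norm(np.diff(points, axis=0), axis=1)   # chord lengths
    if method == "centripetal":
        dist = np.sqrt(dist)          # square root of each chord length
    t = np.concatenate(([0.0], np.cumsum(dist)))
    return t / t[-1]                  # normalize to [0, 1]

# Example: the node choice changes where the fitted curve "spends" its parameter,
# and hence the shape of the interpolant.
pts = [(0, 0), (1, 0), (1.1, 0.1), (3, 2)]
for m in ("uniform", "chord", "centripetal"):
    print(m, np.round(interpolation_nodes(pts, m), 3))
```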
{
"docid": "64817e403b2d80b96bc7ad4a4e456e41",
"text": "The concept of resilience has evolved considerably since Holling’s (1973) seminal paper. Different interpretations of what is meant by resilience, however, cause confusion. Resilience of a system needs to be considered in terms of the attributes that govern the system’s dynamics. Three related attributes of social– ecological systems (SESs) determine their future trajectories: resilience, adaptability, and transformability. Resilience (the capacity of a system to absorb disturbance and reorganize while undergoing change so as to still retain essentially the same function, structure, identity, and feedbacks) has four components—latitude, resistance, precariousness, and panarchy—most readily portrayed using the metaphor of a stability landscape. Adaptability is the capacity of actors in the system to influence resilience (in a SES, essentially to manage it). There are four general ways in which this can be done, corresponding to the four aspects of resilience. Transformability is the capacity to create a fundamentally new system when ecological, economic, or social structures make the existing system untenable. The implications of this interpretation of SES dynamics for sustainability science include changing the focus from seeking optimal states and the determinants of maximum sustainable yield (the MSY paradigm), to resilience analysis, adaptive resource management, and adaptive governance. INTRODUCTION An inherent difficulty in the application of these concepts is that, by their nature, they are rather imprecise. They fall into the same sort of category as “justice” or “wellbeing,” and it can be counterproductive to seek definitions that are too narrow. Because different groups adopt different interpretations to fit their understanding and purpose, however, there is confusion in their use. The confusion then extends to how a resilience approach (Holling 1973, Gunderson and Holling 2002) can contribute to the goals of sustainable development. In what follows, we provide an interpretation and an explanation of how these concepts are reflected in the adaptive cycles of complex, multi-scalar SESs. We need a better scientific basis for sustainable development than is generally applied (e.g., a new “sustainability science”). The “Consortium for Sustainable Development” (of the International Council for Science, the Initiative on Science and Technology for Sustainability, and the Third World Academy of Science), the US National Research Council (1999, 2002), and the Millennium Ecosystem Assessment (2003), have all focused increasing attention on such notions as robustness, vulnerability, and risk. There is good reason for this, as it is these characteristics of social–ecological systems (SESs) that will determine their ability to adapt to and benefit from change. In particular, the stability dynamics of all linked systems of humans and nature emerge from three complementary attributes: resilience, adaptability, and transformability. The purpose of this paper is to examine these three attributes; what they mean, how they interact, and their implications for our future well-being. There is little fundamentally new theory in this paper. What is new is that it uses established theory of nonlinear stability (Levin 1999, Scheffer et al. 2001, Gunderson and Holling 2002, Berkes et al. 
2003) to clarify, explain, and diagnose known examples of regional development, regional poverty, and regional CSIRO Sustainable Ecosystems; University of Wisconsin-Madison; Arizona State University Ecology and Society 9(2): 5. http://www.ecologyandsociety.org/vol9/iss2/art5 sustainability. These include, among others, the Everglades and the Wisconsin Northern Highlands Lake District in the USA, rangelands and an agricultural catchment in southeastern Australia, the semi-arid savanna in southeastern Zimbabwe, the Kristianstad “Water Kingdom” in southern Sweden, and the Mae Ping valley in northern Thailand. These regions provide examples of both successes and failures of development. Some from rich countries have generated several pulses of solutions over a span of a hundred years and have generated huge costs of recovery (the Everglades). Some from poor countries have emerged in a transformed way but then, in some cases, have been dragged back by higher-level autocratic regimes (Zimbabwe). Some began as localscale solutions and then developed as transformations across scales from local to regional (Kristianstad and northern Wisconsin). In all of them, the outcomes were determined by the interplay of their resilience, adaptability, and transformability. There is a major distinction between resilience and adaptability, on the one hand, and transformability on the other. Resilience and adaptability have to do with the dynamics of a particular system, or a closely related set of systems. Transformability refers to fundamentally altering the nature of a system. As with many terms under the resilience rubric, the dividing line between “closely related” and “fundamentally altered” can be fuzzy, and subject to interpretation. So we begin by first offering the most general, qualitative set of definitions, without reference to conceptual frameworks, that can be used to describe these terms. We then use some examples and the literature on “basins of attraction” and “stability landscapes” to further refine our definitions. Before giving the definitions, however, we need to briefly introduce the concept of adaptive cycles. Adaptive Cycles and Cross-scale Effects The dynamics of SESs can be usefully described and analyzed in terms of a cycle, known as an adaptive cycle, that passes through four phases. Two of them— a growth and exploitation phase (r) merging into a conservation phase (K)—comprise a slow, cumulative forward loop of the cycle, during which the dynamics of the system are reasonably predictable. As the K phase continues, resources become increasingly locked up and the system becomes progressively less flexible and responsive to external shocks. It is eventually, inevitably, followed by a chaotic collapse and release phase (Ω) that rapidly gives way to a phase of reorganization (α), which may be rapid or slow, and during which, innovation and new opportunities are possible. The Ω and α phases together comprise an unpredictable backloop. The α phase leads into a subsequent r phase, which may resemble the previous r phase or be significantly different. This metaphor of the adaptive cycle is based on observed system changes, and does not imply fixed, regular cycling. Systems can move back from K toward r, or from r directly into Ω, or back from α to Ω. Finally (and importantly), the cycles occur at a number of scales and SESs exist as “panarchies”— adaptive cycles interacting across multiple scales. These cross-scale effects are of great significance in the dynamics of SESs.",
"title": ""
},
{
"docid": "6b3abd92478a641d992ed4f4f08f52d5",
"text": "In this article, we consider the robust estimation of a location parameter using Mestimators. We propose here to couple this estimation with the robust scale estimate proposed in [Dahyot and Wilson, 2006]. The resulting procedure is then completely unsupervised. It is applied to camera motion estimation and moving object detection in videos. Experimental results on different video materials show the adaptability and the accuracy of this new robust approach.",
"title": ""
},
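As a rough illustration of coupling an M-estimator of location with a robust scale, the sketch below runs an IRLS Huber estimator with the scale fixed to the MAD. Note that the paper couples the estimator with the specific robust scale of Dahyot and Wilson (2006); the MAD here is a stand-in assumption, and the tuning constant c = 1.345 is the usual textbook choice rather than a value from the paper.

```python
import numpy as np

def huber_location(x, c=1.345, tol=1e-6, max_iter=100):
    """Robust location estimate via a Huber M-estimator solved with IRLS.

    The scale is fixed to the MAD (a stand-in for the robust scale estimator
    used in the paper). c = 1.345 gives ~95% efficiency at the Gaussian model.
    """
    x = np.asarray(x, dtype=float)
    scale = 1.4826 * np.median(np.abs(x - np.median(x)))   # Gaussian-consistent MAD
    if scale == 0.0:
        return float(np.median(x))
    mu = float(np.median(x))                               # robust starting point
    for _ in range(max_iter):
        r = (x - mu) / scale
        w = np.ones_like(r)
        big = np.abs(r) > c
        w[big] = c / np.abs(r[big])                        # Huber weights
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol * scale:
            return mu_new
        mu = mu_new
    return mu

# Outliers barely move the estimate compared with the plain mean:
data = np.concatenate([np.random.normal(5.0, 1.0, 200), [50.0, 60.0, 70.0]])
print(huber_location(data), data.mean())
```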
{
"docid": "2343e18c8a36bc7da6357086c10f43d4",
"text": "Sensor networks offer a powerful combination of distributed sensing, computing and communication. They lend themselves to countless applications and, at the same time, offer numerous challenges due to their peculiarities, primarily the stringent energy constraints to which sensing nodes are typically subjected. The distinguishing traits of sensor networks have a direct impact on the hardware design of the nodes at at least four levels: power source, processor, communication hardware, and sensors. Various hardware platforms have already been designed to test the many ideas spawned by the research community and to implement applications to virtually all fields of science and technology. We are convinced that CAS will be able to provide a substantial contribution to the development of this exciting field.",
"title": ""
},
{
"docid": "e84ff3f37e049bd649a327366a4605f9",
"text": "Once thought of as a technology restricted primarily to the scientific community, High-performance Computing (HPC) has now been established as an important value creation tool for the enterprises. Predominantly, the enterprise HPC is fueled by the needs for high-performance data analytics (HPDA) and large-scale machine learning – trades instrumental to business growth in today’s competitive markets. Cloud computing, characterized by the paradigm of on-demand network access to computational resources, has great potential of bringing HPC capabilities to a broader audience. Clouds employing traditional lossy network technologies, however, at large, have not proved to be sufficient for HPC applications. Both the traditional HPC workloads and HPDA require high predictability, large bandwidths, and low latencies, features which combined are not readily available using best-effort cloud networks. On the other hand, lossless interconnection networks commonly deployed in HPC systems, lack the flexibility needed for dynamic cloud environments. In this thesis, we identify and address research challenges that hinder the realization of an efficient HPC cloud computing platform, utilizing the InfiniBand interconnect as a demonstration technology. In particular, we address challenges related to efficient routing, load-balancing, low-overhead virtualization, performance isolation, and fast network reconfiguration, all to improve the utilization and flexibility of the underlying interconnect of an HPC cloud. In addition, we provide a framework to realize a self-adaptive network architecture for HPC clouds, offering dynamic and autonomic adaptation of the underlying interconnect according to varying traffic patterns, resource availability, workload distribution, and also in accordance with service provider defined policies. The work presented in this thesis helps bridging the performance gap between the cloud and traditional HPC infrastructures; the thesis provides practical solutions to enable an efficient, flexible, multi-tenant HPC network suitable for high-performance cloud computing.",
"title": ""
},
{
"docid": "5f01cb5c34ac9182f6485f70d19101db",
"text": "Gastroeophageal reflux is a condition in which the acidified liquid content of the stomach backs up into the esophagus. The antiacid magaldrate and prokinetic domperidone are two drugs clinically used for the treatment of gastroesophageal reflux symptoms. However, the evidence of a superior effectiveness of this combination in comparison with individual drugs is lacking. A double-blind, randomized and comparative clinical trial study was designed to characterize the efficacy and safety of a fixed dose combination of magaldrate (800 mg)/domperidone (10 mg) against domperidone alone (10 mg), in patients with gastroesophageal reflux symptoms. One hundred patients with gastroesophageal reflux diagnosed by Carlsson scale were randomized to receive a chewable tablet of a fixed dose of magaldrate/domperidone combination or domperidone alone four times each day during a month. Magaldrate/domperidone combination showed a superior efficacy to decrease global esophageal (pyrosis, regurgitation, dysphagia, hiccup, gastroparesis, sialorrhea, globus pharyngeus and nausea) and extraesophageal (chronic cough, hoarseness, asthmatiform syndrome, laryngitis, pharyngitis, halitosis and chest pain) reflux symptoms than domperidone alone. In addition, magaldrate/domperidone combination improved in a statistically manner the quality of life of patients with gastroesophageal reflux respect to monotherapy, and more patients perceived the combination as a better treatment. Both treatments were well tolerated. Data suggest that oral magaldrate/domperidone mixture could be a better option in the treatment of gastroesophageal reflux symptoms than only domperidone.",
"title": ""
},
{
"docid": "7b5f90b4b0b11ffdb25ececb2eaf56f6",
"text": "The human ABO(H) blood group phenotypes arise from the evolutionarily oldest genetic system found in primate populations. While the blood group antigen A is considered the ancestral primordial structure, under the selective pressure of life-threatening diseases blood group O(H) came to dominate as the most frequently occurring blood group worldwide. Non-O(H) phenotypes demonstrate impaired formation of adaptive and innate immunoglobulin specificities due to clonal selection and phenotype formation in plasma proteins. Compared with individuals with blood group O(H), blood group A individuals not only have a significantly higher risk of developing certain types of cancer but also exhibit high susceptibility to malaria tropica or infection by Plasmodium falciparum. The phenotype-determining blood group A glycotransferase(s), which affect the levels of anti-A/Tn cross-reactive immunoglobulins in phenotypic glycosidic accommodation, might also mediate adhesion and entry of the parasite to host cells via trans-species O-GalNAc glycosylation of abundantly expressed serine residues that arise throughout the parasite's life cycle, while excluding the possibility of antibody formation against the resulting hybrid Tn antigen. In contrast, human blood group O(H), lacking this enzyme, is indicated to confer a survival advantage regarding the overall risk of developing cancer, and individuals with this blood group rarely develop life-threatening infections involving evolutionarily selective malaria strains.",
"title": ""
},
{
"docid": "7dfa83dcdd2885e0818ff32a3741af4a",
"text": "Physical unclonable functions (PUFs) exploit the physical characteristics of the silicon and the IC manufacturing process variations to uniquely characterize each and every silicon chip. Since it is practically impossible to model, copy, or control the IC manufacturing process variations, PUFs not only make these chips unique, but also effectively unclonable. Exploiting the inherent variations in the IC manufacturing process, PUFs provide a secure, robust, low cost mechanism to authenticate silicon chips. This makes PUFs attractive for RFID ICs where cost and security are the key requirements. In this paper we present the design and implementation of PUF enabled \"unclonable\" RFIDs. The PUF-enabled RFID has been fabricated in 0.18 mu technology, and extensive testing results demonstrate that PUFs can securely authenticate an RFID with minimal overheads. We also highlight the advantages of PUF based RFIDs in anti-counterfeiting and security applications.",
"title": ""
},
{
"docid": "db207eb0d5896c2aad1f8485bc597e45",
"text": "One of the serious obstacles to the applications of speech emotion recognition systems in real-life settings is the lack of generalization of the emotion classifiers. Many recognition systems often present a dramatic drop in performance when tested on speech data obtained from different speakers, acoustic environments, linguistic content, and domain conditions. In this letter, we propose a novel unsupervised domain adaptation model, called Universum autoencoders, to improve the performance of the systems evaluated in mismatched training and test conditions. To address the mismatch, our proposed model not only learns discriminative information from labeled data, but also learns to incorporate the prior knowledge from unlabeled data into the learning. Experimental results on the labeled Geneva Whispered Emotion Corpus database plus other three unlabeled databases demonstrate the effectiveness of the proposed method when compared to other domain adaptation methods.",
"title": ""
},
{
"docid": "7f0a2510e2f9d23fe5058bf5fa826b59",
"text": "This paper presents the progress of acoustic models for lowresourced languages (Assamese, Bengali, Haitian Creole, Lao, Zulu) developed within the second evaluation campaign of the IARPA Babel project. This year, the main focus of the project is put on training high-performing automatic speech recognition (ASR) and keyword search (KWS) systems from language resources limited to about 10 hours of transcribed speech data. Optimizing the structure of Multilayer Perceptron (MLP) based feature extraction and switching from the sigmoid activation function to rectified linear units results in about 5% relative improvement over baseline MLP features. Further improvements are obtained when the MLPs are trained on multiple feature streams and by exploiting label preserving data augmentation techniques like vocal tract length perturbation. Systematic application of these methods allows to improve the unilingual systems by 4-6% absolute in WER and 0.064-0.105 absolute in MTWV. Transfer and adaptation of multilingually trained MLPs lead to additional gains, clearly exceeding the project goal of 0.3 MTWV even when only the limited language pack of the target language is used.",
"title": ""
},
{
"docid": "868501b6dc57751b7a6416d91217f0bd",
"text": "OBJECTIVE\nThe major aim of this research is to determine whether infants who were anxiously/resistantly attached in infancy develop more anxiety disorders during childhood and adolescence than infants who were securely attached. To test different theories of anxiety disorders, newborn temperament and maternal anxiety were included in multiple regression analyses.\n\n\nMETHOD\nInfants participated in Ainsworth's Strange Situation Procedure at 12 months of age. The Schedule for Affective Disorders and Schizophrenia for School-Age Children was administered to the 172 children when they reached 17.5 years of age. Maternal anxiety and infant temperament were assessed near the time of birth.\n\n\nRESULTS\nThe hypothesized relation between anxious/resistant attachment and later anxiety disorders was confirmed. No relations with maternal anxiety and the variables indexing temperament were discovered, except for a composite score of nurses' ratings designed to access \"high reactivity,\" and the Neonatal Behavioral Assessment Scale clusters of newborn range of state and inability to habituate to stimuli. Anxious/resistant attachment continued to significantly predict child/adolescent anxiety disorders, even when entered last, after maternal anxiety and temperament, in multiple regression analyses.\n\n\nCONCLUSION\nThe attachment relationship appears to play an important role in the development of anxiety disorders. Newborn temperament may also contribute.",
"title": ""
},
{
"docid": "7ce79a08969af50c1712f0e291dd026c",
"text": "Collaborative filtering (CF) is valuable in e-commerce, and for direct recommendations for music, movies, news etc. But today's systems have several disadvantages, including privacy risks. As we move toward ubiquitous computing, there is a great potential for individuals to share all kinds of information about places and things to do, see and buy, but the privacy risks are severe. In this paper we describe a new method for collaborative filtering which protects the privacy of individual data. The method is based on a probabilistic factor analysis model. Privacy protection is provided by a peer-to-peer protocol which is described elsewhere, but outlined in this paper. The factor analysis approach handles missing data without requiring default values for them. We give several experiments that suggest that this is most accurate method for CF to date. The new algorithm has other advantages in speed and storage over previous algorithms. Finally, we suggest applications of the approach to other kinds of statistical analyses of survey or questionaire data.",
"title": ""
},
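One concrete way to see how a factor analysis model "handles missing data without requiring default values" is the prediction step: a user's latent factors are inferred from only the ratings that are present, and unrated items are filled in from those factors. The sketch below shows that step for a linear-Gaussian factor model; the loading matrix and noise variances are assumed to have been learned already (e.g., by EM), and the peer-to-peer privacy protocol mentioned in the abstract is not modeled here.

```python
import numpy as np

def predict_missing(ratings, Lam, psi):
    """Predict a user's unobserved ratings under a linear-Gaussian factor model.

    ratings : length-m vector with np.nan marking unrated items
    Lam     : (m, k) loading matrix (assumed already learned)
    psi     : length-m per-item noise variances (assumed already learned)
    """
    obs = ~np.isnan(ratings)
    L_o = Lam[obs]                                   # loadings of the rated items
    inv_psi_o = 1.0 / psi[obs]
    # Posterior over the k latent factors given only the observed ratings.
    Sigma = np.linalg.inv(np.eye(Lam.shape[1]) + (L_o * inv_psi_o[:, None]).T @ L_o)
    x_mean = Sigma @ (L_o * inv_psi_o[:, None]).T @ ratings[obs]
    pred = ratings.copy()
    pred[~obs] = Lam[~obs] @ x_mean                  # fill only the missing entries
    return pred
```

Because the posterior is computed from the observed entries alone, no placeholder rating ever has to be invented for the missing ones.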
{
"docid": "175d7462d86eae131358c005a32ecdab",
"text": "Software architectures are often constructed through a series of design decisions. In particular, architectural tactics are selected to satisfy specific quality concerns such as reliability, performance, and security. However, the knowledge of these tactical decisions is often lost, resulting in a gradual degradation of architectural quality as developers modify the code without fully understanding the underlying architectural decisions. In this paper we present a machine learning approach for discovering and visualizing architectural tactics in code, mapping these code segments to tactic traceability patterns, and monitoring sensitive areas of the code for modification events in order to provide users with up-to-date information about underlying architectural concerns. Our approach utilizes a customized classifier which is trained using code extracted from fifty performance-centric and safety-critical open source software systems. Its performance is compared against seven off-the-shelf classifiers. In a controlled experiment all classifiers performed well; however our tactic detector outperformed the other classifiers when used within the larger context of the Hadoop Distributed File System. We further demonstrate the viability of our approach for using the automatically detected tactics to generate viable and informative messages in a simulation of maintenance events mined from Hadoop's change management system.",
"title": ""
}
] |
scidocsrr
|
91ab696f5e61dc96c29f139ec789202b
|
Detection of traffic signs in real-world images: The German traffic sign detection benchmark
|
[
{
"docid": "eaa2ed7e15a3b0a3ada381a8149a8214",
"text": "This paper describes a new robust regular polygon detector. The regular polygon transform is posed as a mixture of regular polygons in a five dimensional space. Given the edge structure of an image, we derive the a posteriori probability for a mixture of regular polygons, and thus the probability density function for the appearance of a mixture of regular polygons. Likely regular polygons can be isolated quickly by discretising and collapsing the search space into three dimensions. The remaining dimensions may be efficiently recovered subsequently using maximum likelihood at the locations of the most likely polygons in the subspace. This leads to an efficient algorithm. Also the a posteriori formulation facilitates inclusion of additional a priori information leading to real-time application to road sign detection. The use of gradient information also reduces noise compared to existing approaches such as the generalised Hough transform. Results are presented for images with noise to show stability. The detector is also applied to two separate applications: real-time road sign detection for on-line driver assistance; and feature detection, recovering stable features in rectilinear environments.",
"title": ""
}
] |
[
{
"docid": "388f4a555c7aa004f081cbdc6bc0f799",
"text": "We present a multi-GPU version of GPUSPH, a CUDA implementation of fluid-dynamics models based on the smoothed particle hydrodynamics (SPH) numerical method. The SPH is a well-known Lagrangian model for the simulation of free-surface fluid flows; it exposes a high degree of parallelism and has already been successfully ported to GPU. We extend the GPU-based simulator to run simulations on multiple GPUs simultaneously, to obtain a gain in speed and overcome the memory limitations of using a single device. The computational domain is spatially split with minimal overlapping and shared volume slices are updated at every iteration of the simulation. Data transfers are asynchronous with computations, thus completely covering the overhead introduced by slice exchange. A simple yet effective load balancing policy preserves the performance in case of unbalanced simulations due to asymmetric fluid topologies. The obtained speedup factor (up to 4.5x for 6 GPUs) closely follows the expected one (5x for 6 GPUs) and it is possible to run simulations with a higher number of particles than would fit on a single device. We use the Karp-Flatt metric to formally estimate the overall efficiency of the parallelization.",
"title": ""
},
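The abstract reports efficiency via the Karp–Flatt metric, i.e., the experimentally determined serial fraction derived from measured speedup. A minimal sketch of that computation, applied to the figures quoted above (4.5x on 6 GPUs), is shown below; the numerical result is a back-of-the-envelope illustration, not a number taken from the paper.

```python
def karp_flatt(speedup, p):
    """Experimentally determined serial fraction e (Karp-Flatt metric).

    speedup : measured speedup with p processing units
    p       : number of processing units (p > 1)
    A roughly constant e as p grows indicates the parallel overhead scales well.
    """
    return (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)

# With the figures quoted in the abstract (speedup 4.5 on 6 GPUs):
print(round(karp_flatt(4.5, 6), 3))   # ~0.067, i.e. ~6.7% effective serial fraction
```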
{
"docid": "78321a0af7f5ab76809c6f7d08f2c15a",
"text": "The mass media are ranked with respect to their perceived helpfulness in satisfying clusters of needs arising from social roles and individual dispositions. For example, integration into the sociopolitical order is best served by newspaper; while \"knowing oneself \" is best served by books. Cinema and books are more helpful as means of \"escape\" than is television. Primary relations, holidays and other cultural activities are often more important than the mass media in satisfying needs. Television is the least specialized medium, serving many different personal and political needs. The \"interchangeability\" of the media over a variety of functions orders televisions, radio, newspapers, books, and cinema in a circumplex. We speculate about which attributes of the media explain the social and psychological needs they serve best. The data, drawn from an Israeli survey, are presented as a basis for cross-cultural comparison. Disciplines Communication | Social and Behavioral Sciences This journal article is available at ScholarlyCommons: http://repository.upenn.edu/asc_papers/267 ON THE USE OF THE MASS MEDIA FOR IMPORTANT THINGS * ELIHU KATZ MICHAEL GUREVITCH",
"title": ""
},
{
"docid": "1afe9ff72d69e09c24a11187ea7dca2d",
"text": "In the Intelligent Robotics Laboratory (IRL) at Vanderbilt University we seek to develop service robots with a high level of social intelligence and interactivity. In order to achieve this goal, we have identified two main issues for research. The first issue is how to achieve a high level of interaction between the human and the robot. This has lead to the formulation of our philosophy of Human Directed Local Autonomy (HuDL), a guiding principle for research, design, and implementation of service robots. The motivation for integrating humans into a service robot system is to take advantage of human intelligence and skill. Human intelligence can be used to interpret robot sensor data, eliminating computationally expensive and possibly error-prone automated analyses. Human skill is a valuable resource for trajectory and path planning as well as for simplifying the search process. In this paper we present our plans for integrating humans into a service robot system. We present our paradigm for human/robot interaction, HuDL. The second issue is the general problem of system integration, with a specific focus on integrating humans into the service robotic system. This work has lead to the development of the Intelligent Machine Architecture (IMA), a novel software architecture that has been specifically designed to simplify the integration of the many diverse algorithms, sensors, and actuators necessary for socially intelligent service robots. Our testbed system is described, and some example applications of HuDL for aids to the physically disabled are given. An evaluation of the effectiveness of the IMA is also presented.",
"title": ""
},
{
"docid": "62769e2979d1a1181ffebedc18f3783a",
"text": "This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the transhumanist dogma that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed. Preliminaries Substrate-independence is a common assumption in the philosophy of mind. The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences. It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium; silicon-based processors inside a computer could in principle do the trick as well. Arguments for this thesis have been given in the literature, and although it is not entirely uncontroversial, we shall take it as a given here. The argument we shall present does not, however, depend on any strong version of functionalism or computationalism. For example, we need not assume that the thesis of substrate-independence is necessarily true analytic (either analytically or metaphysically) just that, in fact, a computer running a suitable program would be conscious. Moreover, we need not assume that in order to create a mind on a computer it would be sufficient to program it in such a way that it behaves like a human in all situations (including passing Turing tests etc.). We only need the weaker assumption that it would suffice (for generation of subjective experiences) if the computational processes of a human brain were structurally replicated in suitably fine-grained detail, such as on the level of individual neurons. This highly attenuated version of substrate-independence is widely accepted. At the current stage of technology, we have neither sufficiently powerful hardware nor the requisite software to create conscious minds in computers. But persuasive arguments have been given to the effect that if technological progress continues unabated then these shortcomings will eventually be overcome. Several authors argue that this stage may be only a few decades away (Drexler 1985; Bostrom 1998; Kurzweil 1999; Moravec 1999). Yet for present purposes we need not make any assumptions about the time-scale. The argument we shall present works equally well for those who think that it will take hundreds of thousands of years to reach a “posthuman” stage of civilization, where humankind has acquired most of the technological capabilities that one can currently show to be consistent with physical laws and with material and energy constraints. Such a mature stage of technological development will make it possible to convert planets and other astronomical resources into enormously powerful computers. It is currently hard to be confident in any upper bound on the computing power that may be available to posthuman civilizations. 
Since we are still lacking a “theory of everything”, we cannot rule out the possibility that novel physical phenomena, not allowed for in current physical theories, may be utilized to transcend those theoretical constraints’ that in our current understanding limit the information processing density that can be attained in a given lump of matter. But we can with much greater confidence establish lower bounds on posthuman computation, by assuming only mechanisms that are already understood. For example, Eric Drexler has outlined a design for a system the size of a sugar cube (excluding cooling and power supply) that would perform 10 instructions per second (Drexler 1992). Another author gives a rough performance estimate of 10 Are You Living In a Computer Simulation? 2 operations per second for a computer with a mass on order of large planet (Bradbury 2000). The amount of computing power needed to emulate a human mind can likewise be roughly estimated. One estimate, based on how computationally expensive it is to replicate the functionality of a piece of nervous tissue that we already understand (contrast enhancement in the retina), yields a figure of ~10 operations per second for the entire human brain (Moravee 1989). An alternative estimate, based the number of synapses in the brain and their firing frequency gives a figure of ~10l6-10l7 operations per second (Bostrom 1998). Conceivably, even more could be required if we want to simulate in detail the internal workings of synapses and dentritic trees. However, it is likely that the human central nervous system has a high degree of redundancy on the mircoscale to compensate for the unreliability and noisiness of its components. One would therefore expect a substantial increase in efficiency when using more reliable and versatile non-biological processors. If the environment is included in the simulation, this will require additional computing power. How much depends on the scope and granularity of the simulation. Simulating the entire universe down to the quantum level is obviously infeasible (unless radically new physics is discovered). But in order to get a realistic simulation of human experience, much less is needed — only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don’t notice any irregularities. The microscopic structure of the inside of the Earth can be safely omitted. Distant astronomical objects can have highly compressed representations indeed: verisimilitude need extend to the narrow band of properties that we can observe from our planet or solar system spacecraft. On the surface of Earth, macroscopic objects in inhabited areas may need to be continuously simulated. Microscopic phenomena could likely be filled in on an ad hoc basis. What you see when you look in an electron microscope needs to look unsuspicious, but you have usually have no way of confirming its coherence with unobserved parts of the microscopic world. Exceptions arise when we set up systems that are designed to harness unobserved microscopic phenomena operating according to known principles to get results that we are able to independently verify. The paradigmatic instance is computers. The simulation may therefore need to include a continuous representation of computers down to the level of individual logic elements. But this is no big problem, since our current computing power is negligible by posthuman standards. 
In general, the posthuman simulator would have enough computing power to keep track of the detailed belief-states in all human brains at all times, Thus, when it saw that a human was about to make an observation of the microscopic world, it could fill in sufficient detail in the simulation in the appropriate domain on an as-needed basis. Should any error occur, the director could easily edit the states of any brains that have become aware of an anomaly before it spoils the simulation. Alternatively, the director can skip back a few seconds and rerun the simulation in a way that avoids the problem. It thus seems plausible that the main computational cost consists in simulating organic brains down to the neuronal or sub-neuronal level (although as we build more and faster computers, the cost of simulating our machines might eventually come to dominate the cost of simulating nervous systems). While it is not possible to get a very exact estimate of the cost of a realistic simulation of human history, we can use,~1O-10 operations as a rough estimate. As we gain more experience with virtual reality, we will get a better grasp of the computational requirements for making such worlds appear realistic to their visitors. But in any case, even if our estimate is off by several orders of magnitude, this does not matter much for the argument we are pursuing here. We noted that a rough approximation of the computational power of a single planetary-mass computer is 10 operations per second, and that assumes only already known nanotechnological designs, which are probably far from optimal. Such a computer could simulate the entire mental history of humankind (call this an ancestor-simulation) in less than 10 seconds. (A posthuman civilization may eventually build an astronomical number of such computers.) We can conclude that the computing power available to a posthuman civilization is sufficient to run a huge number of ancestor-simulations even it allocates only a minute fraction of its resources to that purpose. We can draw this conclusion even while leaving a substantial margin of error 3 Bostrom (2001): Are You Living In a Computer Simulation? in all our guesstimates. • Posthuman civilizations would have enough computing power to run hugely many ancestorsimulations even while using only a tiny fraction of their resources for that purpose. The Simulation Argument The core of the argument that this paper presents can be expressed roughly as follows: If there were a substantial chance that our civilization will ever get to the posthuman stage and run many ancestorsimulations, then how come you are not living in such a simulation? We shall develop this idea into a rigorous argument. Let us introduce the following notation: DOOM: Humanity goes extinct before reaching the posthuman stage SIM: You are living in a simulation N: Average number of ancestor-simulations run by a posthuman civilization H: Average number of individuals that have lived in a civilization before it reaches a posthuman stage The expected fraction of all observers with human-type experiences that live in simulations is then fsim= [l-P(DOOM)]×N×H ([1-P(DOOM)]×N×H",
"title": ""
},
{
"docid": "e7b1d82b6716434da8bbeeeec895dac4",
"text": "Grapevine is the one of the most important fruit species in the world. Comparative genome sequencing of grape cultivars is very important for the interpretation of the grape genome and understanding its evolution. The genomes of four Georgian grape cultivars—Chkhaveri, Saperavi, Meskhetian green, and Rkatsiteli, belonging to different haplogroups, were resequenced. The shotgun genomic libraries of grape cultivars were sequenced on an Illumina HiSeq. Pinot Noir nuclear, mitochondrial, and chloroplast DNA were used as reference. Mitochondrial DNA of Chkhaveri closely matches that of the reference Pinot noir mitochondrial DNA, with the exception of 16 SNPs found in the Chkhaveri mitochondrial DNA. The number of SNPs in mitochondrial DNA from Saperavi, Meskhetian green, and Rkatsiteli was 764, 702, and 822, respectively. Nuclear DNA differs from the reference by 1,800,675 nt in Chkhaveri, 1,063,063 nt in Meskhetian green, 2,174,995 in Saperavi, and 5,011,513 in Rkatsiteli. Unlike mtDNA Pinot noir, chromosomal DNA is closer to the Meskhetian green than to other cultivars. Substantial differences in the number of SNPs in mitochondrial and nuclear DNA of Chkhaveri and Pinot noir cultivars are explained by backcrossing or introgression of their wild predecessors before or during the process of domestication. Annotation of chromosomal DNA of Georgian grape cultivars by MEGANTE, a web-based annotation system, shows 66,745 predicted genes (Chkhaveri—17,409; Saperavi—17,021; Meskhetian green—18,355; and Rkatsiteli—13,960). Among them, 106 predicted genes and 43 pseudogenes of terpene synthase genes were found in chromosomes 12, 18 random (18R), and 19. Four novel TPS genes not present in reference Pinot noir DNA were detected. Two of them—germacrene A synthase (Chromosome 18R) and (−) germacrene D synthase (Chromosome 19) can be identified as putatively full-length proteins. This work performs the first attempt of the comparative whole genome analysis of different haplogroups of Vitis vinifera cultivars. Based on complete nuclear and mitochondrial DNA sequence analysis, hypothetical phylogeny scheme of formation of grape cultivars is presented.",
"title": ""
},
{
"docid": "23ffb68125a7f1deb27062acc262701e",
"text": "All metropolitan cities face traffic congestion problems especially in the downtown areas. Normal cities can be transformed into “smart cities” by exploiting the information and communication technologies (ICT). The paradigm of Internet of Thing (IoT) can play an important role in realization of smart cities. This paper proposes an IoT based traffic management solutions for smart cities where traffic flow can be dynamically controlled by onsite traffic officers through their smart phones or can be centrally monitored or controlled through Internet. We have used the example of the holy city of Makkah Saudi Arabia, where the traffic behavior changes dynamically due to the continuous visitation of the pilgrims throughout the year. Therefore, Makkah city requires special traffic controlling algorithms other than the prevailing traffic control systems. However the scheme proposed is general and can be used in any Metropolitan city without the loss of generality.",
"title": ""
},
{
"docid": "4b432e49485b57ddb1921478f2917d4b",
"text": "Dynamic perturbations of reaching movements are an important technique for studying motor learning and adaptation. Adaptation to non-contacting, velocity-dependent inertial Coriolis forces generated by arm movements during passive body rotation is very rapid, and when complete the Coriolis forces are no longer sensed. Adaptation to velocity-dependent forces delivered by a robotic manipulandum takes longer and the perturbations continue to be perceived even when adaptation is complete. These differences reflect adaptive self-calibration of motor control versus learning the behavior of an external object or 'tool'. Velocity-dependent inertial Coriolis forces also arise in everyday behavior during voluntary turn and reach movements but because of anticipatory feedforward motor compensations do not affect movement accuracy despite being larger than the velocity-dependent forces typically used in experimental studies. Progress has been made in understanding: the common features that determine adaptive responses to velocity-dependent perturbations of jaw and limb movements; the transfer of adaptation to mechanical perturbations across different contact sites on a limb; and the parcellation and separate representation of the static and dynamic components of multiforce perturbations.",
"title": ""
},
{
"docid": "21e4781748e33fbf2ea14482691b4107",
"text": "Ageing causes a decline in the function of human skin, while factors such as medical conditions, drugs and environmental irritants add to the compromised skin and predispose it to certain conditions. Superimposed on the changes of physiological ageing are changes characterised by chronic sun exposure. Skin neoplasia, whether benign, premalignant or malignant, is more common in the elderly. It is important to identify benign conditions, as it is crucial that lesions with a malignant potential be recognised so that timeous treatment can prevent serious malignancies. Ultraviolet radiation is the major aetiologic factor for the development of skin cancer. Pruritic conditions result from a combination of a declining barrier function and the effects of environmental irritants. Pruritus due to scabies is common in institutionalised older persons. Infective conditions as a result of a combination of altered immunity, predisposing medical conditions (e.g. diabetes) and a variety of drugs used to treat these conditions may affect immune function and homeostasis. Regular scrutiny of the skin will ensure early identification of problems and implementation of a good skin care plan can compensate for failing physiologic function. INTRODUCTION Population ageing and a drive to maintain a youthful appearance have spurred research on the physiological processes of skin ageing. An important outcome of these efforts has been greater insight into skin cancer. Tremendous strides have also been made in the last decade in understanding the molecular basis of ageing. The free radical theory states that ageing results from an accumulation of cellular damage caused by excess reactive oxygen species generated by oxidative metabolism.",
"title": ""
},
{
"docid": "aa8ae1fc471c46b5803bfa1303cb7001",
"text": "It is widely recognized that steganography with sideinformation in the form of a precover at the sender enjoys significantly higher empirical security than other embedding schemes. Despite the success of side-informed steganography, current designs are purely heuristic and little has been done to develop the embedding rule from first principles. Building upon the recently proposed MiPOD steganography, in this paper we impose multivariate Gaussian model on acquisition noise and estimate its parameters from the available precover. The embedding is then designed to minimize the KL divergence between cover and stego distributions. In contrast to existing heuristic algorithms that modulate the embedding costs by 1–2|e|, where e is the rounding error, in our model-based approach the sender should modulate the steganographic Fisher information, which is a loose equivalent of embedding costs, by (1–2|e|)^2. Experiments with uncompressed and JPEG images show promise of this theoretically well-founded approach. Introduction Steganography is a privacy tool in which messages are embedded in inconspicuous cover objects to hide the very presence of the communicated secret. Digital media, such as images, video, and audio are particularly suitable cover sources because of their ubiquity and the fact that they contain random components, the acquisition noise. On the other hand, digital media files are extremely complex objects that are notoriously hard to describe with sufficiently accurate and estimable statistical models. This is the main reason for why current steganography in such empirical sources [3] lacks perfect security and heavily relies on heuristics, such as embedding “costs” and intuitive modulation factors. Similarly, practical steganalysis resorts to increasingly more complex high-dimensional descriptors (rich models) and advanced machine learning paradigms, including ensemble classifiers and deep learning. Often, a digital media object is subjected to processing and/or format conversion prior to embedding the secret. The last step in the processing pipeline is typically quantization. In side-informed steganography with precover [21], the sender makes use of the unquantized cover values during embedding to hide data in a more secure manner. The first embedding scheme of this type described in the literature is the embedding-while-dithering [14] in which the secret message was embedded by perturbing the process of color quantization and dithering when converting a true-color image to a palette format. Perturbed quantization [15] started another direction in which rounding errors of DCT coefficients during JPEG compression were used to modify the embedding algorithm. This method has been advanced through a series of papers [23, 24, 29, 20], culminating with approaches based on advanced coding techniques with a high level of empirical security [19, 18, 6]. Side-information can have many other forms. Instead of one precover, the sender may have access to the acquisition oracle (a camera) and take multiple images of the same scene. These multiple exposures can be used to estimate the acquisition noise and also incorporated during embedding. This research direction has been developed to a lesser degree compared to steganography with precover most likely due to the difficulty of acquiring the required imagery and modeling the differences between acquisitions. In a series of papers [10, 12, 11], Franz et al. 
proposed a method in which multiple scans of the same printed image on a flat-bed scanner were used to estimate the model of the acquisition noise at every pixel. This requires acquiring a potentially large number of scans, which makes this approach rather labor intensive. Moreover, differences in the movement of the scanner head between individual scans lead to slight spatial misalignment that complicates using this type of side-information properly. Recently, the authors of [7] showed how multiple JPEG images of the same scene can be used to infer the preferred direction of embedding changes. By working with quantized DCT coefficients instead of pixels, the embedding is less sensitive to small differences between multiple acquisitions. Despite the success of side-informed schemes, there appears to be an alarming lack of theoretical analysis that would either justify the heuristics or suggest a well-founded (and hopefully more powerful) approach. In [13], the author has shown that the precover compensates for the lack of the cover model. In particular, for a Gaussian model of acquisition noise, precover-informed rounding is more secure than embedding designed to preserve the cover model estimated from the precover image assuming the cover is “sufficiently non-stationary.” Another direction worth mentioning in this context is the bottom-up model-based approach recently proposed by Bas [2]. The author showed that a high-capacity steganographic scheme with a rather low empirical detectability can be constructed when the process of digitally developing a RAW sensor capture is sufficiently simplified. The impact of embedding is masked as an increased level of photonic noise, e.g., due to a higher ISO setting. It will likely be rather difficult, however, to extend this approach to realistic processing pipelines. Inspired by the success of the multivariate Gaussian model in steganography for digital images [25, 17, 26], in this paper we adopt the same model for the precover and then derive the embedding rule to minimize the KL divergence between cover and stego distributions. The sideinformation is used to estimate the parameters of the acquisition noise and the noise-free scene. In the next section, we review current state of the art in heuristic side-informed steganography with precover. In the following section, we introduce a formal model of image acquisition. In Section “Side-informed steganography with MVG acquisition noise”, we describe the proposed model-based embedding method, which is related to heuristic approaches in Section “Connection to heuristic schemes.” The main bulk of results from experiments on images represented in the spatial and JPEG domain appear in Section “Experiments.” In the subsequent section, we investigate whether the public part of the selection channel, the content adaptivity, can be incorporated in selection-channel-aware variants of steganalysis features to improve detection of side-informed schemes. The paper is then closed with Conclusions. The following notation is adopted for technical arguments. Matrices and vectors will be typeset in boldface, while capital letters are reserved for random variables with the corresponding lower case symbols used for their realizations. In this paper, we only work with grayscale cover images. Precover values will be denoted with xij ∈ R, while cover and stego values will be integer arrays cij and sij , 1 ≤ i ≤ n1, 1 ≤ j ≤ n2, respectively. 
The symbols [x], dxe, and bxc are used for rounding and rounding up and down the value of x. By N (μ,σ2), we understand Gaussian distribution with mean μ and variance σ2. The complementary cumulative distribution function of a standard normal variable (the tail probability) will be denoted Q(x) = ∫∞ x (2π)−1/2 exp ( −z2/2 ) dz. Finally, we say that f(x)≈ g(x) when limx→∞ f(x)/g(x) = 1. Prior art in side-informed steganography with precover All modern steganographic schemes, including those that use side-information, are implemented within the paradigm of distortion minimization. First, each cover element cij is assigned a “cost” ρij that measures the impact on detectability should that element be modified during embedding. The payload is then embedded while minimizing the sum of costs of all changed cover elements, ∑ cij 6=sij ρij . A steganographic scheme that embeds with the minimal expected cost changes each cover element with probability βij = exp(−λρij) 1 +exp(−λρij) , (1) if the embedding operation is constrained to be binary, and βij = exp(−λρij) 1 +2exp(−λρij) , (2) for a ternary scheme with equal costs of changing cij to cij ± 1. Syndrome-trellis codes [8] can be used to build practical embedding schemes that operate near the rate–distortion bound. For steganography designed to minimize costs (embedding distortion), a popular heuristic to incorporate a precover value xij during embedding is to modulate the costs based on the rounding error eij = cij − xij , −1/2≤ eij ≤ 1/2 [23, 29, 20, 18, 19, 6, 24]. A binary embedding scheme modulates the cost of changing cij = [xij ] to [xij ] + sign(eij) by 1−2|eij |, while prohibiting the change to [xij ]− sign(eij): ρij(sign(eij)) = (1−2|eij |)ρij (3) ρij(−sign(eij)) = Ω, (4) where ρij(u) is the cost of modifying the cover value by u∈ {−1,1}, ρij are costs of some additive embedding scheme, and Ω is a large constant. This modulation can be justified heuristically because when |eij | ≈ 1/2, a small perturbation of xij could cause cij to be rounded to the other side. Such coefficients are thus assigned a proportionally smaller cost because 1− 2|eij | ≈ 0. On the other hand, the costs are unchanged when eij ≈ 0, as it takes a larger perturbation of the precover to change the rounded value. A ternary version of this embedding strategy [6] allows modifications both ways with costs: ρij(sign(eij)) = (1−2|eij |)ρij (5) ρij(−sign(eij)) = ρij . (6) Some embedding schemes do not use costs and, instead, minimize statistical detectability. In MiPOD [25], the embedding probabilities βij are derived from their impact on the cover multivariate Gaussian model by solving the following equation for each pixel ij: βijIij = λ ln 1−2βij βij , (7) where Iij = 2/σ̂4 ij is the Fisher information with σ̂ 2 ij an estimated variance of the acquisition noise at pixel ij, and λ is a Lagrange multiplier determined by the payload size. To incorporate the side-information, the sender first converts the embedding probabilities into costs and then modulates them as in (3) or (5). This can be done b",
"title": ""
},
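The passage spells out the Gibbs-form change probabilities (1)–(2) and the heuristic side-informed cost modulation (3)–(6). The sketch below puts those pieces together for a ternary embedder: costs in the direction of the rounding error are scaled by 1 − 2|e|, and the two-sided change probabilities are computed in the general Gibbs form, which reduces to Eq. (2) when both costs are equal. The Lagrange multiplier is taken as given (in practice it is found by search so that the entropy of the probabilities matches the payload), and the paper's model-based alternative — modulating the Fisher information by (1 − 2|e|)² — is only noted in a comment.

```python
import numpy as np

def side_informed_ternary_probs(rho, e, lam):
    """Change probabilities for a side-informed ternary embedder.

    rho : base additive costs rho_ij of the underlying scheme
    e   : precover rounding errors e_ij = c_ij - x_ij, in [-1/2, 1/2]
    lam : Lagrange multiplier (assumed given; in practice found by binary
          search so the entropy of the probabilities equals the payload)
    Returns (beta_toward, beta_away): probabilities of changing by +sign(e)
    and by -sign(e), respectively.
    """
    rho_toward = (1.0 - 2.0 * np.abs(e)) * rho   # Eq. (5): cheaper toward sign(e)
    rho_away = rho                               # Eq. (6): unchanged away from sign(e)
    # General two-sided Gibbs form; with equal costs it reduces to Eq. (2).
    z = 1.0 + np.exp(-lam * rho_toward) + np.exp(-lam * rho_away)
    beta_toward = np.exp(-lam * rho_toward) / z
    beta_away = np.exp(-lam * rho_away) / z
    # Model-based variant sketched in the paper: instead of scaling costs,
    # scale the per-pixel Fisher information, roughly I_ij *= (1 - 2*abs(e))**2,
    # before solving Eq. (7) for the change rates.
    return beta_toward, beta_away
```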
{
"docid": "d9617ed486a1b5488beab08652f736e0",
"text": "The paper shows how Combinatory Categorial Grammar (CCG) can be adapted to take advantage of the extra resourcesensitivity provided by the Categorial Type Logic framework. The resulting reformulation, Multi-Modal CCG, supports lexically specified control over the applicability of combinatory rules, permitting a universal rule component and shedding the need for language-specific restrictions on rules. We discuss some of the linguistic motivation for these changes, define the Multi-Modal CCG system and demonstrate how it works on some basic examples. We furthermore outline some possible extensions and address computational aspects of Multi-Modal CCG.",
"title": ""
},
{
"docid": "054e75aeec7a8f49f385bb543ba70bd7",
"text": "BACKGROUND/AIMS\nMindfulness-based stress reduction (MBSR) has enhanced cognition, positive emotion, and immunity in younger and middle-aged samples; its benefits are less well known for older persons. Here we report on a randomized controlled trial of MBSR for older adults and its effects on executive function, left frontal asymmetry of the EEG alpha band, and antibody response.\n\n\nMETHODS\nOlder adults (n = 201) were randomized to MBSR or waiting list control. The outcome measures were: the Trail Making Test part B/A (Trails B/A) ratio, a measure of executive function; changes in left frontal alpha asymmetry, an indicator of positive emotions or approach motivation; depression, mindfulness, and perceived stress scores, and the immunoglobulin G response to a protein antigen, a measure of adaptive immunity.\n\n\nRESULTS\nMBSR participants had a lower Trails B/A ratio immediately after intervention (p < 0.05); reduced shift to rightward frontal alpha activation after intervention (p = 0.03); higher baseline antibody levels after intervention (p < 0.01), but lower antibody responses 24 weeks after antigen challenge (p < 0.04), and improved mindfulness after intervention (p = 0.023) and at 21 weeks of follow-up (p = 0.006).\n\n\nCONCLUSIONS\nMBSR produced small but significant changes in executive function, mindfulness, and sustained left frontal alpha asymmetry. The antibody findings at follow-up were unexpected. Further study of the effects of MBSR on immune function should assess changes in antibody responses in comparison to T-cell-mediated effector functions, which decline as a function of age.",
"title": ""
},
{
"docid": "817c86340f641094f5811f5f073c4c8b",
"text": "This paper presents a region-based shape controller for a swarm of robots. In this control method, the robotsmove as a group inside a desired regionwhilemaintaining aminimumdistance among themselves. Various shapes of the desired region can be formed by choosing the appropriate objective functions. The robots in the group only need to communicate with their neighbors and not the entire community. The robots do not have specific identities or roles within the group. Therefore, the proposed method does not require specific orders or positions of the robots inside the region and yet different formations can be formed for a swarm of robots. A Lyapunov-like function is presented for convergence analysis of the multi-robot systems. Simulation results illustrate the performance of the proposed controller. © 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b32218abeff9a34c3e89eac76b8c6a45",
"text": "The reliability and availability of distributed services can be ensured using replication. We present an architecture and an algorithm for Byzantine fault-tolerant state machine replication. We explore the benefits of virtualization to reliably detect and tolerate faulty replicas, allowing the transformation of Byzantine faults into omission faults. Our approach reduces the total number of physical replicas from 3f+1 to 2f+1. It is based on the concept of twin virtual machines, which involves having two virtual machines in each physical host, each one acting as failure detector of the other.",
"title": ""
},
{
"docid": "ea7381f641c13efef1b9d838cd0a3b62",
"text": "We provide a methodology, resilient feature engineering, for creating adversarially resilient classifiers. According to existing work, adversarial attacks identify weakly correlated or non-predictive features learned by the classifier during training and design the adversarial noise to utilize these features. Therefore, highly predictive features should be used first during classification in order to determine the set of possible output labels. Our methodology focuses the problem of designing resilient classifiers into a problem of designing resilient feature extractors for these highly predictive features. We provide two theorems, which support our methodology. The Serial Composition Resilience and Parallel Composition Resilience theorems show that the output of adversarially resilient feature extractors can be combined to create an equally resilient classifier. Based on our theoretical results, we outline the design of an adversarially resilient classifier.",
"title": ""
},
{
"docid": "824767fbfc389f5a2da52aa179a325d2",
"text": "We present a real-time algorithm to estimate the 3D pose of a previously unseen face from a single range image. Based on a novel shape signature to identify noses in range images, we generate candidates for their positions, and then generate and evaluate many pose hypotheses in parallel using modern graphics processing units (GPUs). We developed a novel error function that compares the input range image to precomputed pose images of an average face model. The algorithm is robust to large pose variations of plusmn90deg yaw, plusmn45deg pitch and plusmn30deg roll rotation, facial expression, partial occlusion, and works for multiple faces in the field of view. It correctly estimates 97.8% of the poses within yaw and pitch error of 15deg at 55.8 fps. To evaluate the algorithm, we built a database of range images with large pose variations and developed a method for automatic ground truth annotation.",
"title": ""
},
{
"docid": "d5fc7535bcf4bfc55da11d5c569950b3",
"text": "The way information spreads through society has changed significantly over the past decade with the advent of online social networking. Twitter, one of the most widely used social networking websites, is known as the real-time, public microblogging network where news breaks first. Most users love it for its iconic 140-character limitation and unfiltered feed that show them news and opinions in the form of tweets. Tweets are usually multilingual in nature and of varying quality. However, machine translation (MT) of twitter data is a challenging task especially due to the following two reasons: (i) tweets are informal in nature (i.e., violates linguistic norms), and (ii) parallel resource for twitter data is scarcely available on the Internet. In this paper, we develop FooTweets, a first parallel corpus of tweets for English–German language pair. We extract 4, 000 English tweets from the FIFA 2014 world cup and manually translate them into German with a special focus on the informal nature of the tweets. In addition to this, we also annotate sentiment scores between 0 and 1 to all the tweets depending upon the degree of sentiment associated with them. This data has recently been used to build sentiment translation engines and an extensive evaluation revealed that such a resource is very useful in machine translation of user generated content.",
"title": ""
},
{
"docid": "80adf87179f4b3b61bf99d946da4cb2a",
"text": "In modern intensive care units (ICUs) a vast and varied amount of physiological data is measured and collected, with the intent of providing clinicians with detailed information about the physiological state of each patient. The data include measurements from the bedside monitors of heavily instrumented patients, imaging studies, laboratory test results, and clinical observations. The clinician’s task of integrating and interpreting the data, however, is complicated by the sheer volume of information and the challenges of organizing it appropriately. This task is made even more difficult by ICU patients’ frequently-changing physiological state. Although the extensive clinical information collected in ICUs presents a challenge, it also opens up several opportunities. In particular, we believe that physiologically-based computational models and model-based estimation methods can be harnessed to better understand and track patient state. These methods would integrate a patient’s hemodynamic data streams by analyzing and interpreting the available information, and presenting resultant pathophysiological hypotheses to the clinical staff in an efficient manner. In this thesis, such a possibility is developed in the context of cardiovascular dynamics. The central results of this thesis concern averaged models of cardiovascular dynamics and a novel estimation method for continuously tracking cardiac output and total peripheral resistance. This method exploits both intra-beat and inter-beat dynamics of arterial blood pressure, and incorporates a parametrized model of arterial compliance. We validated our method with animal data from laboratory experiments and ICU patient data. The resulting root-mean-square-normalized errors – at most 15% depending on the data set – are quite low and clinically acceptable. In addition, we describe a novel estimation scheme for continuously monitoring left ventricular ejection fraction and left ventricular end-diastolic volume. We validated this method on an animal data set. Again, the resulting root-mean-square-normalized errors were quite low – at most 13%. By continuously monitoring cardiac output, total peripheral resistance, left ventricular ejection fraction, left ventricular end-diastolic volume, and arterial blood pressure, one has the basis for distinguishing between cardiogenic, hypovolemic, and septic shock. We hope that the results in this thesis will contribute to the development of a next-generation patient monitoring system. Thesis Supervisor: Professor George C. Verghese Title: Professor of Electrical Engineering Thesis Supervisor: Dr. Thomas Heldt Title: Postdoctoral Associate",
"title": ""
},
{
"docid": "004076658f9c2c8c63ac9628765b3be7",
"text": "Graph matching is widely used in a variety of scientific fields, including computer vision, due to its powerful performance, robustness, and generality. Its computational complexity, however, limits the permissible size of input graphs in practice. Therefore, in real-world applications, the initial construction of graphs to match becomes a critical factor for the matching performance, and often leads to unsatisfactory results. In this paper, to resolve the issue, we propose a novel progressive framework which combines probabilistic progression of graphs with matching of graphs. The algorithm efficiently re-estimates in a Bayesian manner the most plausible target graphs based on the current matching result, and guarantees to boost the matching objective at the subsequent graph matching. Experimental evaluation demonstrates that our approach effectively handles the limits of conventional graph matching and achieves significant improvement in challenging image matching problems.",
"title": ""
},
{
"docid": "c9766e95df62d747f5640b3cab412a3f",
"text": "For the last 10 years, interest has grown in low frequency shear waves that propagate in the human body. However, the generation of shear waves by acoustic vibrators is a relatively complex problem, and the directivity patterns of shear waves produced by the usual vibrators are more complicated than those obtained for longitudinal ultrasonic transducers. To extract shear modulus parameters from the shear wave propagation in soft tissues, it is important to understand and to optimize the directivity pattern of shear wave vibrators. This paper is devoted to a careful study of the theoretical and the experimental directivity pattern produced by a point source in soft tissues. Both theoretical and experimental measurements show that the directivity pattern of a point source vibrator presents two very strong lobes for an angle around 35/spl deg/. This paper also points out the impact of the near field in the problem of shear wave generation.",
"title": ""
}
] |
scidocsrr
|
3d910ea01a9c4a46891775e32cb84cda
|
Modeling and control of a tilt tri-rotor airplane
|
[
{
"docid": "d47312fd20b8098878fd6a22176bf246",
"text": "The pneumatic artificial muscle (PAM) is undoubtedly the most promising artificial muscle for the actuation of new types of industrial robots such as rubber actuator and PAM manipulators because it provides these advantages such as high strength and high power/ weight ratio, low cost, compactness, ease of maintenance, cleanliness, readily available and cheap power source, inherent safety and mobility assistance to humans performing tasks. However, some limitations still exist, such as the air compressibility and the lack of damping ability of the actuator bring the dynamic delay of the pressure response and cause the oscillatory motion. In addition, the nonlinearities in the PAM manipulator still limit the controllability. Therefore, it is not easy to realize motion with high accuracy and high speed and with respect to various external inertia loads in order to realize a human-friendly therapy robot. To overcome these problems a novel controller which harmonizes a phase plane switching control method (PPSC) with conventional PID controller and the adaptabilities of neural network, is newly proposed. In order to realize satisfactory control performance a variable damper, magneto-rheological brake (MRB), is equipped to the joint of the manipulator. Superb mixture of conventional PID controller and an intelligent phase plane switching control using neural network (IPPSC) brings us a novel controller. The experiments were carried out in practical PAM manipulator and the effectiveness of the proposed control algorithm was demonstrated through experiments, which had proved that the stability of the manipulator can be improved greatly in a high gain control by using MRB with IPPSC and without regard for the changes of external inertia loads. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "cf2c454ca26842bdfd5be017d07c13a8",
"text": "We report on solid-state mesoscopic heterojunction solar cells employing nanoparticles (NPs) of methyl ammonium lead iodide (CH(3)NH(3))PbI(3) as light harvesters. The perovskite NPs were produced by reaction of methylammonium iodide with PbI(2) and deposited onto a submicron-thick mesoscopic TiO(2) film, whose pores were infiltrated with the hole-conductor spiro-MeOTAD. Illumination with standard AM-1.5 sunlight generated large photocurrents (J(SC)) exceeding 17 mA/cm(2), an open circuit photovoltage (V(OC)) of 0.888 V and a fill factor (FF) of 0.62 yielding a power conversion efficiency (PCE) of 9.7%, the highest reported to date for such cells. Femto second laser studies combined with photo-induced absorption measurements showed charge separation to proceed via hole injection from the excited (CH(3)NH(3))PbI(3) NPs into the spiro-MeOTAD followed by electron transfer to the mesoscopic TiO(2) film. The use of a solid hole conductor dramatically improved the device stability compared to (CH(3)NH(3))PbI(3) -sensitized liquid junction cells.",
"title": ""
},
{
"docid": "6f45bc16969ed9deb5da46ff8529bb8a",
"text": "In the future, mobile systems will increasingly feature more advanced organic light-emitting diode (OLED) displays. The power consumption of these displays is highly dependent on the image content. However, existing OLED power-saving techniques either change the visual experience of users or degrade the visual quality of images in exchange for a reduction in the power consumption. Some techniques attempt to enhance the image quality by employing a compound objective function. In this article, we present a win-win scheme that always enhances the image quality while simultaneously reducing the power consumption. We define metrics to assess the benefits and cost for potential image enhancement and power reduction. We then introduce algorithms that ensure the transformation of images into their quality-enhanced power-saving versions. Next, the win-win scheme is extended to process videos at a justifiable computational cost. All the proposed algorithms are shown to possess the win-win property without assuming accurate OLED power models. Finally, the proposed scheme is realized through a practical camera application and a video camcorder on mobile devices. The results of experiments conducted on a commercial tablet with a popular image database and on a smartphone with real-world videos are very encouraging and provide valuable insights for future research and practices.",
"title": ""
},
{
"docid": "7627b49945a3e9d27061c2080b9eb632",
"text": "The microgrid concept has been closely investigated and implemented by numerous experts worldwide. The first part of this paper describes the principles of microgrid design, considering the operational concepts and requirements arising from participation in active network management. Over the last several years, efforts to standardize microgrids have been made, and it is in terms of these advances that the current paper proposes the application of IEC/ISO 62264 standards to microgrids and Virtual Power Plants, along with a comprehensive review of microgrids, including advanced control techniques, energy storage systems, and market participation in both island and grid-connection operation. Finally, control techniques and the principles of energy-storage systems are summarized in a comprehensive flowchart.",
"title": ""
},
{
"docid": "19607c362f07ebe0238e5940fefdf03f",
"text": "This paper presents an approach for generating photorealistic video sequences of dynamically varying facial expressions in human-agent interactions. To this end, we study human-human interactions to model the relationship and influence of one individual's facial expressions in the reaction of the other. We introduce a two level optimization of generative adversarial models, wherein the first stage generates a dynamically varying sequence of the agent's face sketch conditioned on facial expression features derived from the interacting human partner. This serves as an intermediate representation, which is used to condition a second stage generative model to synthesize high-quality video of the agent face. Our approach uses a novel L1 regularization term computed from layer features of the discriminator, which are integrated with the generator objective in the GAN model. Session constraints are also imposed on video frame generation to ensure appearance consistency between consecutive frames. We demonstrated that our model is effective at generating visually compelling facial expressions. Moreover, we quantitatively showed that agent facial expressions in the generated video clips reflect valid emotional reactions to behavior of the human partner.",
"title": ""
},
{
"docid": "350137bf3c493b23aa6d355df946440f",
"text": "Given the increasing popularity of wearable devices, this paper explores the potential to use wearables for steering and driver tracking. Such capability would enable novel classes of mobile safety applications without relying on information or sensors in the vehicle. In particular, we study how wrist-mounted inertial sensors, such as those in smart watches and fitness trackers, can track steering wheel usage and angle. In particular, tracking steering wheel usage and turning angle provide fundamental techniques to improve driving detection, enhance vehicle motion tracking by mobile devices and help identify unsafe driving. The approach relies on motion features that allow distinguishing steering from other confounding hand movements. Once steering wheel usage is detected, it further uses wrist rotation measurements to infer steering wheel turning angles. Our on-road experiments show that the technique is 99% accurate in detecting steering wheel usage and can estimate turning angles with an average error within 3.4 degrees.",
"title": ""
},
{
"docid": "b09dda9822d4e111d2fa135f09f705d9",
"text": "Unifying seemingly disparate algorithmic ideas to produce better performing algorithms has been a longstanding goal in reinforcement learning. As a primary example, TD(λ) elegantly unifies one-step TD prediction with Monte Carlo methods through the use of eligibility traces and the tracedecay parameter λ. Currently, there are a multitude of algorithms that can be used to perform TD control, including Sarsa, Q-learning, and Expected Sarsa. These methods are often studied in the one-step case, but they can be extended across multiple time steps to achieve better performance. Each of these algorithms is seemingly distinct, and no one dominates the others for all problems. In this paper, we study a new multi-step action-value algorithm called Q(σ) that unifies and generalizes these existing algorithms, while subsuming them as special cases. A new parameter, σ, is introduced to allow the degree of sampling performed by the algorithm at each step during its backup to be continuously varied, with Sarsa existing at one extreme (full sampling), and Expected Sarsa existing at the other (pure expectation). Q(σ) is generally applicable to both onand off-policy learning, but in this work we focus on experiments in the on-policy case. Our results show that an intermediate value of σ, which results in a mixture of the existing algorithms, performs better than either extreme. The mixture can also be varied dynamically which can result in even greater performance. The Landscape of TD Algorithms Temporal-difference (TD) methods (Sutton and Barto 1998) are an important concept in reinforcement learning (RL) that combines ideas from Monte Carlo and dynamic programming methods. TD methods allow learning to occur directly from raw experience in the absence of a model of the environment’s dynamics, like with Monte Carlo methods, while also allowing estimates to be updated based on other learned estimates without waiting for a final result, like with dynamic programming. The core concepts of TD methods provide a flexible framework for creating a variety of powerful algorithms that can be used for both prediction and control. There are a number of TD control methods that have been proposed. Q-learning (Watkins 1989; Watkins and Dayan Copyright c © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Authors contributed equally, and are listed alphabetically. 1992) is arguably the most popular, and is considered an offpolicy method because the policy generating the behaviour (the behaviour policy), and the policy that is being learned (the target policy) are different. Sarsa (Rummery and Niranjan 1994; Sutton 1996) is the classical on-policy control method, where the behaviour and target policies are the same. However, Sarsa can be extended to learn off-policy with the use of importance sampling (Precup, Sutton, and Singh 2000). Expected Sarsa is an extension of Sarsa that, instead of using the action-value of the next state to update the value of the current state, uses the expectation of all the subsequent action-values of the current state with respect to the target policy. Expected Sarsa has been studied as a strictly on-policy method (van Seijen et al. 2009), but in this paper we present a more general version that can be used for both onand off-policy learning and that also subsumes Q-learning. All of these methods are often described in the simple one-step case, but they can also be extended across multiple time steps. 
The TD(λ) algorithm unifies one-step TD learning with Monte Carlo methods (Sutton 1988). Through the use of eligibility traces, and the trace-decay parameter, λ ∈ [0, 1], a spectrum of algorithms is created. At one end, λ = 1, exists Monte Carlo methods, and at the other, λ = 0, exists one-step TD learning. In the middle of the spectrum are intermediate methods which can perform better than the methods at either extreme (Sutton and Barto 1998). The concept of eligibility traces can also be applied to TD control methods such as Sarsa and Q-learning, which can create more efficient learning and produce better performance (Rummery 1995). Multi-step TD methods are usually thought of in terms of an average of many multi-step returns of differing lengths and are often associated with eligibility traces, as is the case with TD(λ). However, it is also natural to think of them in terms of individual n-step returns with their associated n-step backup (Sutton and Barto 1998). We refer to each of these individual backups as atomic backups, whereas the combination of several atomic backups of different lengths creates a compound backup. In the existing literature, it is not clear how best to extend one-step Expected Sarsa to a multi-step algorithm. The Tree-backup algorithm was originally presented as a method to perform off-policy evaluation when the behaviour policy",
"title": ""
},
{
"docid": "0cd5813a069c8955871784cd3e63aa83",
"text": "Fundamental observations and principles derived from traditional physiological studies of multisensory integration have been difficult to reconcile with computational and psychophysical studies that share the foundation of probabilistic (Bayesian) inference. We review recent work on multisensory integration, focusing on experiments that bridge single-cell electrophysiology, psychophysics, and computational principles. These studies show that multisensory (visual-vestibular) neurons can account for near-optimal cue integration during the perception of self-motion. Unlike the nonlinear (superadditive) interactions emphasized in some previous studies, visual-vestibular neurons accomplish near-optimal cue integration through subadditive linear summation of their inputs, consistent with recent computational theories. Important issues remain to be resolved, including the observation that variations in cue reliability appear to change the weights that neurons apply to their different sensory inputs.",
"title": ""
},
{
"docid": "2fe33171bc57e5b78ce4dafb30f7d427",
"text": "In this paper, we propose a volume visualization system that accepts direct manipulation through a sketch-based What You See Is What You Get (WYSIWYG) approach. Similar to the operations in painting applications for 2D images, in our system, a full set of tools have been developed to enable direct volume rendering manipulation of color, transparency, contrast, brightness, and other optical properties by brushing a few strokes on top of the rendered volume image. To be able to smartly identify the targeted features of the volume, our system matches the sparse sketching input with the clustered features both in image space and volume space. To achieve interactivity, both special algorithms to accelerate the input identification and feature matching have been developed and implemented in our system. Without resorting to tuning transfer function parameters, our proposed system accepts sparse stroke inputs and provides users with intuitive, flexible and effective interaction during volume data exploration and visualization.",
"title": ""
},
{
"docid": "0201a5f0da2430ec392284938d4c8833",
"text": "Natural language sentence matching is a fundamental technology for a variety of tasks. Previous approaches either match sentences from a single direction or only apply single granular (wordby-word or sentence-by-sentence) matching. In this work, we propose a bilateral multi-perspective matching (BiMPM) model. Given two sentences P and Q, our model first encodes them with a BiLSTM encoder. Next, we match the two encoded sentences in two directions P against Q and Q against P . In each matching direction, each time step of one sentence is matched against all timesteps of the other sentence from multiple perspectives. Then, another BiLSTM layer is utilized to aggregate the matching results into a fixed-length matching vector. Finally, based on the matching vector, a decision is made through a fully connected layer. We evaluate our model on three tasks: paraphrase identification, natural language inference and answer sentence selection. Experimental results on standard benchmark datasets show that our model achieves the state-of-the-art performance on all tasks.",
"title": ""
},
{
"docid": "3ddac782fd9797771505a4a46b849b45",
"text": "A number of studies have found that mortality rates are positively correlated with income inequality across the cities and states of the US. We argue that this correlation is confounded by the effects of racial composition. Across states and Metropolitan Statistical Areas (MSAs), the fraction of the population that is black is positively correlated with average white incomes, and negatively correlated with average black incomes. Between-group income inequality is therefore higher where the fraction black is higher, as is income inequality in general. Conditional on the fraction black, neither city nor state mortality rates are correlated with income inequality. Mortality rates are higher where the fraction black is higher, not only because of the mechanical effect of higher black mortality rates and lower black incomes, but because white mortality rates are higher in places where the fraction black is higher. This result is present within census regions, and for all age groups and both sexes (except for boys aged 1-9). It is robust to conditioning on income, education, and (in the MSA results) on state fixed effects. Although it remains unclear why white mortality is related to racial composition, the mechanism working through trust that is often proposed to explain the effects of inequality on health is also consistent with the evidence on racial composition and mortality.",
"title": ""
},
{
"docid": "a8c4e25f6e2e6ec45c8f57e07c2a41c0",
"text": "We describe the design and control of a wearable robotic device powered by pneumatic artificial muscle actuators for use in ankle-foot rehabilitation. The design is inspired by the biological musculoskeletal system of the human foot and lower leg, mimicking the morphology and the functionality of the biological muscle-tendon-ligament structure. A key feature of the device is its soft structure that provides active assistance without restricting natural degrees of freedom at the ankle joint. Four pneumatic artificial muscles assist dorsiflexion and plantarflexion as well as inversion and eversion. The prototype is also equipped with various embedded sensors for gait pattern analysis. For the subject tested, the prototype is capable of generating an ankle range of motion of 27° (14° dorsiflexion and 13° plantarflexion). The controllability of the system is experimentally demonstrated using a linear time-invariant (LTI) controller. The controller is found using an identified LTI model of the system, resulting from the interaction of the soft orthotic device with a human leg, and model-based classical control design techniques. The suitability of the proposed control strategy is demonstrated with several angle-reference following experiments.",
"title": ""
},
{
"docid": "96fe16eae39c862109503f44eef69c59",
"text": "Steganography and steganalysis are the prominent research fields in information hiding paradigm. Steganography is the science of invisible communication while steganalysis is the detection of steganography. Steganography means “covered writing” that hides the existence of the message itself. Digital steganography provides potential for private and secure communication that has become the necessity of most of the applications in today’s world. Various multimedia carriers such as audio, text, video, image can act as cover media to carry secret information. In this paper, we have focused only on image steganography. This article provides a review of fundamental concepts, evaluation measures and security aspects of steganography system, various spatial and transform domain embedding schemes. In addition, image quality metrics that can be used for evaluation of stego images and cover selection measures that provide additional security to embedding scheme are also highlighted. Current research trends and directions to improve on existing methods are suggested. c ⃝ 2014 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "ab27f462123f255d39c8383bd70005c6",
"text": "In this paper, a detailed description of a synchronous field-programmable gate array implementation of a bilateral filter for image processing is given. The bilateral filter is chosen for one unique reason: It reduces noise while preserving details. The design is described on register-transfer level. The distinctive feature of our design concept consists of changing the clock domain in a manner that kernel-based processing is possible, which means the processing of the entire filter window at one pixel clock cycle. This feature of the kernel-based design is supported by the arrangement of the input data into groups so that the internal clock of the design is a multiple of the pixel clock given by a targeted system. Additionally, by the exploitation of the separability and the symmetry of one filter component, the complexity of the design is widely reduced. Combining these features, the bilateral filter is implemented as a highly parallelized pipeline structure with very economical and effective utilization of dedicated resources. Due to the modularity of the filter design, kernels of different sizes can be implemented with low effort using our design and given instructions for scaling. As the original form of the bilateral filter with no approximations or modifications is implemented, the resulting image quality depends on the chosen filter parameters only. Due to the quantization of the filter coefficients, only negligible quality loss is introduced.",
"title": ""
},
{
"docid": "0bfdaec2f1122425df4a1a7a25242c6c",
"text": "In the broad context, an understanding of the role of intimacy has been discussed as essential to the development of a science of interpersonal relationships (Hinde, 1978). Specifically, the absence of an intimate confiding relationship has been identified as a vulnerability factor in the onset of depression under adverse circumstances (Brown & Harris, 1978 a; Costello, 1982; Solomon & Bromet, 1982). Aneshensel & Stone (1982) suggest that, assuming that lack of perceived social support is not just a manifestation of depression itself, lack of intimacy may contribute to the creation of depressive symptoms independent of life events. Henderson et al. (1981) suggest that the perceived availability and adequacy with which significant others meet an individual's requirements for attachment play a small but significant role in the onset of non-psychotic emotional illness under adverse circumstances. Waring et al. (1981a) found that deficiencies of marital intimacy were significantly associated with the presence of symptoms of non-psychotic emotional illness in one or both spouses. The significance of these important studies is dependent on the reliability and validity of the methods of measuring intimacy and the conceptual definition of intimacy. Several conceptual and methodological issues raised by these studies merit a closer examination. The studies described above have largely employed structured sociological interviews, containing questions about the quality of personal relationships. These questions allow a rating of the depth of'intimacy' by a trained interviewer. One question which arises about the concept of intimacy evaluated in these interviews i s : ' Whose intimacy is being measured? Hers? His? Or theirs?' The possibility exists that the perception of relationships as unavailable or inadequate might be a product of an individual's attitudes or moods. On the other hand, they might be a valid expression of how others have behaved towards the respondent. Data which record the spouse's or partner's perceptions of the relationship might be essential to evaluate the theoretical possibility that perceived differences in intimacy may have a different impact on vulnerability to emotional illness from congruent perception. A second question is whether narrow operational definitions of intimacy or broad definitions of the concept are to be preferred (Schaefer & Olson, 1981). Several authors have addressed this second conceptual and methodological question (Tennant & Bebbington, 1978; Tennant et al. 1982; Shapiro, 1979). They assert that the variables of marital status and the quality of the marital relationships have been confounded. Brown & Harris (1978 ft) reply that, although these variables are interlinked, they are not confounded: many unmarried women had intimate relationships with their boyfriends and many married women were unable to confide in their husbands. Separate analysis of the data for the unmarried and the married might be instructive. Thirdly, these sociological measures involved no discussion as to whether scores for partners should be analysed separately or as joint scores on their combined level of interpersonal intimacy in comparison with general population means or standards. 
Finally, recent evidence suggests that intimacy can be defined as a multifaceted dimension of interpersonal relationships which may or may not have been defined too narrowly by several questions regarding confiding in a spouse and availability and/or adequacy of close relationships (Schaefer & Olson, 1981; Waring et al. 1980). Recent studies suggest that a stronger correlation between depression and deficiencies of marital intimacy than that originally reported by Brown & Harris may be found by broader definitions of intimacy (Waring & Patton, 1984). Although the",
"title": ""
},
{
"docid": "60c03017f7254c28ba61348d301ae612",
"text": "Code flaws or vulnerabilities are prevalent in software systems and can potentially cause a variety of problems including deadlock, information loss, or system failure. A variety of approaches have been developed to try and detect the most likely locations of such code vulnerabilities in large code bases. Most of them rely on manually designing features (e.g. complexity metrics or frequencies of code tokens) that represent the characteristics of the code. However, all suffer from challenges in sufficiently capturing both semantic and syntactic representation of source code, an important capability for building accurate prediction models. In this paper, we describe a new approach, built upon the powerful deep learning Long Short Term Memory model, to automatically learn both semantic and syntactic features in code. Our evaluation on 18 Android applications demonstrates that the prediction power obtained from our learned features is equal or even superior to what is achieved by state of the art vulnerability prediction models: 3%–58% improvement for within-project prediction and 85% for cross-project prediction.",
"title": ""
},
{
"docid": "8ecff926eb721366be5b8032a3f65eea",
"text": "Object detection is a well-studied topic, however detection of small objects still lacks attention. Detecting small objects has been difficult due to small sizes, occlusion and complex backgrounds. Small objects detection is important in a number of applications including detection of small insects. One application is spider detection and removal. Spiders are frequently found on grapes and broccolis sold at supermarkets and this poses a significant safety issue and generates negative publicity for the industry. In this paper, we present a fine-tuned VGG16 network for detection of small objects such as spiders. Furthermore, we introduce a simple technique called “feature activation mapping” for object visualization from VGG16 feature maps. The testing accuracy of our network on tiny spiders with various backgrounds is 84%, as compared to 72% using fined-tuned Faster R-CNN and 95.32% using CAM. Even though our feature activation mapping technique has a mid-range of test accuracy, it provides more detailed shape and size of spiders than using CAM which is important for the application area. A data set for spider detection is made available online.",
"title": ""
},
{
"docid": "97c162261666f145da6e81d2aa9a8343",
"text": "Shape optimization is a growing field of interest in many areas of academic research, marine design, and manufacturing. As part of the CREATE Ships Hydromechanics Product, an effort is underway to develop a computational tool set and process framework that can aid the ship designer in making informed decisions regarding the influence of the planned hull shape on its hydrodynamic characteristics, even at the earliest stages where decisions can have significant cost implications. The major goal of this effort is to utilize the increasing experience gained in using these methods to assess shape optimization techniques and how they might impact design for current and future naval ships. Additionally, this effort is aimed at establishing an optimization framework within the bounds of a collaborative design environment that will result in improved performance and better understanding of preliminary ship designs at an early stage. The initial effort demonstrated here is aimed at ship resistance, and examples are shown for full ship and localized bow dome shaping related to the Joint High Speed Sealift (JHSS) hull concept. Introduction Any ship design inherently involves optimization, as competing requirements and design parameters force the design to evolve, and as designers strive to deliver the most effective and efficient platform possible within the constraints of time, budget, and performance requirements. A significant number of applications of computational fluid dynamics (CFD) tools to hydrodynamic optimization, mostly for reducing calm-water drag and wave patterns, demonstrate a growing interest in optimization. In addition, more recent ship design programs within the US Navy illustrate some fundamental changes in mission and performance requirements, and future ship designs may be radically different from current ships in the fleet. One difficulty with designing such new concepts is the lack of experience from which to draw from when performing design studies; thus, optimization techniques may be particularly useful. These issues point to a need for greater fidelity, robustness, and ease of use in the tools used in early stage ship design. The Computational Research and Engineering Acquisition Tools and Environments (CREATE) program attempts to address this in its plan to develop and deploy sets of computational engineering design and analysis tools. It is expected that advances in computers will allow for highly accurate design and analyses studies that can be carried out throughout the design process. In order to evaluate candidate designs and explore the design space more thoroughly shape optimization is an important component of the CREATE Ships Hydromechanics Product. The current program development plan includes fast parameterized codes to bound the design space and more accurate Reynolds-Averaged Navier-Stokes (RANS) codes to better define the geometry and performance of the specified hull forms. The potential for hydrodynamic shape optimization has been demonstrated for a variety of different hull forms, including multi-hulls, in related efforts (see e.g., Wilson et al, 2009, Stern et al, Report Documentation Page Form Approved",
"title": ""
},
{
"docid": "686abc74c0a34c90755d20c0ffc63eb2",
"text": "Bayesian estimation for 2 groups provides complete distributions of credible values for the effect size, group means and their difference, standard deviations and their difference, and the normality of the data. The method handles outliers. The decision rule can accept the null value (unlike traditional t tests) when certainty in the estimate is high (unlike Bayesian model comparison using Bayes factors). The method also yields precise estimates of statistical power for various research goals. The software and programs are free and run on Macintosh, Windows, and Linux platforms.",
"title": ""
},
{
"docid": "f0d7cb8abf7e5522582dd56e55f4e77e",
"text": "Mental health is an urgent global issue. Around 450 million people suffer from serious mental illnesses worldwide, which results in devastating personal outcomes and huge societal burden. Effective symptom monitoring and personalized interventions can significantly improve mental health care across different populations. However, traditional clinical methods often fall short when it comes to real-time monitoring of symptoms. Sensing technologies can address these issues by enabling granular tracking of behavioral, physiological, and social signals relevant to mental health. In this article, we describe how sensing technologies can be used to diagnose and monitor patient states for numerous serious mental illnesses. We also identify current limitations and potential future directions. We believe that the multimedia community can build on sensing technologies to enable efficient clinical decision-making in mental health care. Specifically, innovative multimedia systems can help identify and visualize personalized early-warning signs from complex multimodal signals, which could lead to effective intervention strategies and better preemptive care.",
"title": ""
},
{
"docid": "bdd44aeacddeefdfc2e3a5abf1088f2c",
"text": "Elevation data is an important component of geospatial database. This paper focuses on digital surface model (DSM) generation from high-resolution satellite imagery (HRSI). The HRSI systems, such as IKONOS and QuickBird have initialed a new era of Earth observation and digital mapping. The half-meter or better resolution imagery from Worldview-1 and the planned GeoEye-1 allows for accurate and reliable extraction and characterization of even more details of the earth surface. In this paper, the DSM is generated using an advanced image matching approach which integrates point and edge matching algorithms. This approach produces reliable, precise, and very dense 3D points for high quality digital surface models which also preserve discontinuities. Following the DSM generation, the accuracy of the DSM has been assessed and reported. To serve both as a reference surface and a basis for comparison, a lidar DSM has been employed in a testfield with differing terrain types and slope.",
"title": ""
}
] |
scidocsrr
|
826871d31b11a25bda1212406fbefe3b
|
Understanding and Fighting Bullying With Machine Learning
|
[
{
"docid": "8dfa68e87eee41dbef8e137b860e19cc",
"text": "We investigate regrets associated with users' posts on a popular social networking site. Our findings are based on a series of interviews, user diaries, and online surveys involving 569 American Facebook users. Their regrets revolved around sensitive topics, content with strong sentiment, lies, and secrets. Our research reveals several possible causes of why users make posts that they later regret: (1) they want to be perceived in favorable ways, (2) they do not think about their reason for posting or the consequences of their posts, (3) they misjudge the culture and norms within their social circles, (4) they are in a \"hot\" state of high emotion when posting, or under the influence of drugs or alcohol, (5) their postings are seen by an unintended audience, (6) they do not foresee how their posts could be perceived by people within their intended audience, and (7) they misunderstand or misuse the Facebook platform. Some reported incidents had serious repercussions, such as breaking up relationships or job losses. We discuss methodological considerations in studying negative experiences associated with social networking posts, as well as ways of helping users of social networking sites avoid such regrets.",
"title": ""
}
] |
[
{
"docid": "83d42bb6ce4d4bf73f5ab551d0b78000",
"text": "An integrated 19-GHz Colpitts oscillator for a 77-GHz FMCW automotive radar frontend application is presented. The Colpitts oscillator has been realized in a fully differential circuit architecture. The VCO's 19 GHz output signal is buffered with an emitter follower stage and used as a LO signal source for a 77-GHz radar transceiver architecture. The LO frequency is quadrupled and amplified to drive the switching quad of a Gilbert-type mixer. As the quadrupler-mixer chip is required to describe the radar-sensor it is introduced, but the main focus of this paper aims the design of the sensor's LO source. In addition, the VCO-chip provides a divide-by-8 stage. The divider is either used for on-wafer measurements or later on in a PLL application.",
"title": ""
},
{
"docid": "b898a5e8d209cf8ed7d2b8bfae0e58e2",
"text": "Large datasets often have unreliable labels—such as those obtained from Amazon's Mechanical Turk or social media platforms—and classifiers trained on mislabeled datasets often exhibit poor performance. We present a simple, effective technique for accounting for label noise when training deep neural networks. We augment a standard deep network with a softmax layer that models the label noise statistics. Then, we train the deep network and noise model jointly via end-to-end stochastic gradient descent on the (perhaps mislabeled) dataset. The augmented model is underdetermined, so in order to encourage the learning of a non-trivial noise model, we apply dropout regularization to the weights of the noise model during training. Numerical experiments on noisy versions of the CIFAR-10 and MNIST datasets show that the proposed dropout technique outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "ffdc919d6b9fb776c5e2db0b4a8cb3e6",
"text": "In suicidal asphyxia smothering is very rare, especially when caused by winding strips of adhesive tape around the head to cover the nose and mouth. The authors report a very unusual case in which the deceased, a 66-year-old man, was found with two strips of tape wound around his head: the first, more superficial tape was wrapped six times and the second was wrapped nine times. Only integration of the crime scene data with those of the autopsy and the patient's psychological profile enabled identification of the event as suicide.",
"title": ""
},
{
"docid": "867ddbd84e8544a5c2d6f747756ca3d9",
"text": "We report a 166 W burst mode pulse fiber amplifier seeded by a Q-switched mode-locked all-fiber laser at 1064 nm based on a fiber-coupled semiconductor saturable absorber mirror. With a pump power of 230 W at 976 nm, the output corresponds to a power conversion efficiency of 74%. The repetition rate of the burst pulse is 20 kHz, the burst energy is 8.3 mJ, and the burst duration is ∼ 20 μs, which including about 800 mode-locked pulses at a repetition rate of 40 MHz and the width of the individual mode-locked pulse is measured to be 112 ps at the maximum output power. To avoid optical damage to the fiber, the initial mode-locked pulses were stretched to 72 ps by a bandwidth-limited fiber bragg grating. After a two-stage preamplifier, the pulse width was further stretched to 112 ps, which is a result of self-phase modulation of the pulse burst during the amplification.",
"title": ""
},
{
"docid": "e8eaeb8a2bb6fa71997aa97306bf1bb0",
"text": "Article history: Available online 18 February 2016",
"title": ""
},
{
"docid": "49ca032d3d62eae113fdaa81538151d1",
"text": "Wikipedia articles contain, besides free text, various types of structured information in the form of wiki markup. The type of wiki content that is most valuable for search are Wikipedia infoboxes, which display an article’s most relevant facts as a table of attribute-value pairs on the top right-hand side of the Wikipedia page. Infobox data is not used by Wikipedia’s own search engine. Standard Web search engines like Google or Yahoo also do not take advantage of the data. In this paper, we present Faceted Wikipedia Search, an alternative search interface for Wikipedia, which facilitates infobox data in order to enable users to ask complex questions against Wikipedia knowledge. By allowing users to query Wikipedia like a structured database, Faceted Wikipedia Search helps them to truly exploit Wikipedia’s collective intelligence.",
"title": ""
},
{
"docid": "3041c6026ea9e6bd0d7b80e99d925e31",
"text": "According to the cross-border e-commerce background, the article is analyzed its operation on the cross-border e-commerce logistics in china. Firstly, this paper illustrates the operation characteristics of cross-border e-commerce logistics, then analyzes some aspects of the cross-border e-commerce logistics, like operations, logistics cost management and so on. Secondly, this paper analyzes existing problems in cross-border e-commerce logistics from the development of electronic commerce logistics cross-border in China. Finally, some suggestions were put forward on cross-border e-commerce logistics operation from the two aspects of macro level of cross-border e-commerce and micro level of cross-border e-commerce enterprise.",
"title": ""
},
{
"docid": "a2688a1169babed7e35a52fa875505d4",
"text": "Crowdsourcing label generation has been a crucial component for many real-world machine learning applications. In this paper, we provide finite-sample exponential bounds on the error rate (in probability and in expectation) of hyperplane binary labeling rules for the Dawid-Skene (and Symmetric DawidSkene ) crowdsourcing model. The bounds can be applied to analyze many commonly used prediction methods, including the majority voting, weighted majority voting and maximum a posteriori (MAP) rules. These bound results can be used to control the error rate and design better algorithms. In particular, under the Symmetric Dawid-Skene model we use simulation to demonstrate that the data-driven EM-MAP rule is a good approximation to the oracle MAP rule which approximately optimizes our upper bound on the mean error rate for any hyperplane binary labeling rule. Meanwhile, the average error rate of the EM-MAP rule is bounded well by the upper bound on the mean error rate of the oracle MAP rule in the simulation.",
"title": ""
},
{
"docid": "971a0e51042e949214fd75ab6203e36a",
"text": "This paper presents an automatic recognition method for color text characters extracted from scene images, which is robust to strong distortions, complex background, low resolution and non uniform lightning. Based on a specific architecture of convolutional neural networks, the proposed system automatically learns how to recognize characters without making any assumptions, without applying any preprocessing or post-processing and without using tunable parameters. For this purpose, we use a training set of scene text images extracted from the ICDAR 2003 public training database. The proposed method is compared to recent character recognition techniques for scene images based on the ICDAR 2003 public samples dataset in order to contribute to the state-of-the-art method comparison efforts initiated in ICDAR 2003. Experimental results show an encouraging average recognition rate of 84.53%, ranging from 93.47% for clear images to 67.86% for seriously distorted images.",
"title": ""
},
{
"docid": "419c721c2d0a269c65fae59c1bdb273c",
"text": "Previous work on understanding user web search behavior has focused on how people search and what they are searching for, but not why they are searching. In this paper, we describe a framework for understanding the underlying goals of user searches, and our experience in using the framework to manually classify queries from a web search engine. Our analysis suggests that so-called navigational\" searches are less prevalent than generally believed while a previously unexplored \"resource-seeking\" goal may account for a large fraction of web searches. We also illustrate how this knowledge of user search goals might be used to improve future web search engines.",
"title": ""
},
{
"docid": "f7c37c4eadb6763e08b616f83d16ff70",
"text": "In retailer management, the Newsvendor problem has widely attracted attention as one of basic inventory models. In the traditional approach to solving this problem, it relies on the probability distribution of the demand. In theory, if the probability distribution is known, the problem can be considered as fully solved. However, in any real world scenario, it is almost impossible to even approximate or estimate a better probability distribution for the demand. In recent years, researchers start adopting machine learning approach to learn a demand prediction model by using other feature information. In this paper, we propose a supervised learning that optimizes the demand quantities for products based on feature information. We demonstrate that the original Newsvendor loss function as the training objective outperforms the recently suggested quadratic loss function. The new algorithm has been assessed on both the synthetic data and real-world data, demonstrating better performance.",
"title": ""
},
{
"docid": "9a4cf33f429bd376be787feaa2881610",
"text": "By adopting a cultural transformation in its employees' approach to work and using manufacturing based continuous quality improvement methods, the surgical pathology division of Henry Ford Hospital, Detroit, MI, focused on reducing commonly encountered defects and waste in processes throughout the testing cycle. At inception, the baseline in-process defect rate was measured at nearly 1 in 3 cases (27.9%). After the year-long efforts of 77 workers implementing more than 100 process improvements, the number of cases with defects was reduced by 55% to 1 in 8 cases (12.5%), with a statistically significant reduction in the overall distribution of defects (P = .0004). Comparison with defects encountered in the pre-improvement period showed statistically significant reductions in pre-analytic (P = .0007) and analytic (P = .0002) test phase processes in the post-improvement period that included specimen receipt, specimen accessioning, grossing, histology slides, and slide recuts. We share the key improvements implemented that were responsible for the overall success in reducing waste and re-work in the broad spectrum of surgical pathology processes.",
"title": ""
},
{
"docid": "d650d20b0179eabd24e5d8381e9d5cc2",
"text": "Despite the massive popularity of probabilistic (association) football forecasting models, and the relative simplicity of the outcome of such forecasts (they require only three probability values corresponding to home win, draw, and away win) there is no agreed scoring rule to determine their forecast accuracy. Moreover, the various scoring rules used for validation in previous studies are inadequate since they fail to recognise that football outcomes represent a ranked (ordinal) scale. This raises severe concerns about the validity of conclusions from previous studies. There is a well-established generic scoring rule, the Rank Probability Score (RPS), which has been missed by previous researchers, but which properly assesses football forecasting models.",
"title": ""
},
{
"docid": "f678cd2a6b9e99b992115709d48fae26",
"text": "This paper presents a literature survey on existing disparitymap algorithms. It focuses on fourmain stages of processing as proposed by Scharstein and Szeliski in a taxonomy and evaluation of dense two-frame stereo correspondence algorithms performed in 2002. To assist future researchers in developing their own stereomatching algorithms, a summary of the existing algorithms developed for every stage of processing is also provided.The survey also notes the implementation of previous software-based and hardware-based algorithms. Generally, the main processing module for a software-based implementation uses only a central processing unit. By contrast, a hardware-based implementation requires one or more additional processors for its processingmodule, such as graphical processing unit or a field programmable gate array. This literature survey also presents a method of qualitative measurement that is widely used by researchers in the area of stereo vision disparity mappings.",
"title": ""
},
{
"docid": "5058d6002c43298442ebdf2902e6adf3",
"text": "Non-contact image photoplethysmography has gained a lot of attention during the last 5 years. Starting with the work of Verkruysse et al. [1], various methods for estimation of the human pulse rate from video sequences of the face under ambient illumination have been presented. Applied on a mobile service robot aimed to motivate elderly users for physical exercises, the pulse rate can be a valuable information in order to adapt to the users conditions. For this paper, a typical processing pipeline was implemented on a mobile robot, and a detailed comparison of methods for face segmentation was conducted, which is the key factor for robust pulse rate extraction even, if the subject is moving. A benchmark data set is introduced focusing on the amount of motion of the head during the measurement.",
"title": ""
},
{
"docid": "f66c711f91b95cb1dfb6497da037f780",
"text": "Reynolds' theory of relational parametricity captures the invariance of polymorphically typed programs under change of data representation. Reynolds' original work exploited the typing discipline of the polymorphically typed lambda-calculus System F, but there is now considerable interest in extending relational parametricity to type systems that are richer and more expressive than that of System F.\n This paper constructs parametric models of predicative and impredicative dependent type theory. The significance of our models is twofold. Firstly, in the impredicative variant we are able to deduce the existence of initial algebras for all indexed=functors. To our knowledge, ours is the first account of parametricity for dependent types that is able to lift the useful deduction of the existence of initial algebras in parametric models of System F to the dependently typed setting. Secondly, our models offer conceptual clarity by uniformly expressing relational parametricity for dependent types in terms of reflexive graphs, which allows us to unify the interpretations of types and kinds, instead of taking the relational interpretation of types as a primitive notion. Expressing our model in terms of reflexive graphs ensures that it has canonical choices for the interpretations of the standard type constructors of dependent type theory, except for the interpretation of the universe of small types, where we formulate a refined interpretation tailored for relational parametricity. Moreover, our reflexive graph model opens the door to generalisations of relational parametricity, for example to higher-dimensional relational parametricity.",
"title": ""
},
{
"docid": "b039138e9c0ef8456084891c45d7b36d",
"text": "Over the last few years or so, the use of artificial neural networks (ANNs) has increased in many areas of engineering. In particular, ANNs have been applied to many geotechnical engineering problems and have demonstrated some degree of success. A review of the literature reveals that ANNs have been used successfully in pile capacity prediction, modelling soil behaviour, site characterisation, earth retaining structures, settlement of structures, slope stability, design of tunnels and underground openings, liquefaction, soil permeability and hydraulic conductivity, soil compaction, soil swelling and classification of soils. The objective of this paper is to provide a general view of some ANN applications for solving some types of geotechnical engineering problems. It is not intended to describe the ANNs modelling issues in geotechnical engineering. The paper also does not intend to cover every single application or scientific paper that found in the literature. For brevity, some works are selected to be described in some detail, while others are acknowledged for reference purposes. The paper then discusses the strengths and limitations of ANNs compared with the other modelling approaches.",
"title": ""
},
{
"docid": "6c2317957daf4f51354114de62f660a1",
"text": "This paper proposes a framework for recognizing complex human activities in videos. Our method describes human activities in a hierarchical discriminative model that operates at three semantic levels. At the lower level, body poses are encoded in a representative but discriminative pose dictionary. At the intermediate level, encoded poses span a space where simple human actions are composed. At the highest level, our model captures temporal and spatial compositions of actions into complex human activities. Our human activity classifier simultaneously models which body parts are relevant to the action of interest as well as their appearance and composition using a discriminative approach. By formulating model learning in a max-margin framework, our approach achieves powerful multi-class discrimination while providing useful annotations at the intermediate semantic level. We show how our hierarchical compositional model provides natural handling of occlusions. To evaluate the effectiveness of our proposed framework, we introduce a new dataset of composed human activities. We provide empirical evidence that our method achieves state-of-the-art activity classification performance on several benchmark datasets.",
"title": ""
},
{
"docid": "39b2c607c29c21d86b8d250886725ab3",
"text": "Central auditory processing disorder (CAPD) may be viewed as a multidimensional entity with far-reaching communicative, educational, and psychosocial implications for which differential diagnosis not only is possible but also is essential to an understanding of its impact and to the development of efficacious, deficit-specific management plans. This paper begins with a description of some behavioral central auditory assessment tools in current clinical use. Four case studies illustrate the utility of these tools in clarifying the nature of auditory difficulties. Appropriate treatment options that flow logically from the diagnoses are given in each case. The heterogeneity of the population presenting with auditory processing problems, not unexpected based on this model, is made clear, as is the clinical utility of central auditory tests in the transdisciplinary assessment and management of children's language and learning difficulties.",
"title": ""
},
{
"docid": "207c222a56e1a5fc14f9b78efc52d9a6",
"text": "Various research work have highlighted the importance of modeling learner's personality to provide a personalized computer based learning. In particular, questionnaire is the most used method to model personality which can be long and not motivating. This makes learners unwilling to take it. Therefore, this paper refers to Learning Analytics (LA) to implicitly model learners' personalities based on their traces generated during the learning-playing process. In this context, an LA system and an educational game were developed. Forty five participants (34 learners and 11 teachers) participated in an experiment to evaluate the accuracy level of the learners' modeling results and the teachers' satisfaction degree towards this LA system. The obtained results highlighted that the LA system has a high level of accuracy and a \"good\" agreement degree compared to the questionnaire paper. Besides, the teachers found the LA system easy to use, useful and they were willing to use it in the future.",
"title": ""
}
] |
scidocsrr
|
3faf13b9b70f724036214c4b854a3d90
|
Counting of People in the Extremely Dense Crowd using Genetic Algorithm and Blobs Counting
|
[
{
"docid": "9ffaf53e8745d1f7f5b7ff58c77602c6",
"text": "Background subtraction is a widely used approach for detecting moving objects from static cameras. Many different methods have been proposed over the recent years and both the novice and the expert can be confused about their benefits and limitations. In order to overcome this problem, this paper provides a review of the main methods and an original categorisation based on speed, memory requirements and accuracy. Such a review can effectively guide the designer to select the most suitable method for a given application in a principled way. Methods reviewed include parametric and non-parametric background density estimates and spatial correlation approaches.",
"title": ""
}
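To make the parametric family of methods surveyed above concrete, here is a minimal running-average background model. It is a generic textbook baseline rather than a method proposed in the review, and the learning rate and threshold are arbitrary illustrative values.

```python
# Minimal running-average background subtraction on synthetic grayscale frames.
import numpy as np

class RunningAverageBackground:
    def __init__(self, alpha=0.05, threshold=25.0):
        self.alpha = alpha          # how quickly the background adapts
        self.threshold = threshold  # per-pixel foreground decision threshold
        self.background = None

    def apply(self, frame):
        """Return a boolean foreground mask and update the background model."""
        frame = frame.astype(np.float32)
        if self.background is None:
            self.background = frame.copy()
        mask = np.abs(frame - self.background) > self.threshold
        # Update only pixels currently classified as background.
        self.background = np.where(
            mask, self.background,
            (1 - self.alpha) * self.background + self.alpha * frame)
        return mask

rng = np.random.default_rng(1)
model = RunningAverageBackground()
for _ in range(10):                               # learn a static background
    _ = model.apply(rng.normal(loc=100.0, scale=2.0, size=(120, 160)))
frame = rng.normal(loc=100.0, scale=2.0, size=(120, 160))
frame[40:60, 50:80] += 80.0                       # a bright "moving object"
print(model.apply(frame).sum(), "foreground pixels")
```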
] |
[
{
"docid": "b267bf90b86542e3032eaddcc2c3350f",
"text": "Many modalities of treatment for acquired skin hyperpigmentation are available including chemical agents or physical therapies, but none are completely satisfactory. Depigmenting compounds should act selectively on hyperactivated melanocytes, without short- or long-term side-effects, and induce a permanent removal of undesired pigment. Since 1961 hydroquinone, a tyrosinase inhibitor, has been introduced and its therapeutic efficacy demonstrated, and other whitening agents specifically acting on tyrosinase by different mechanisms have been proposed. Compounds with depigmenting activity are now numerous and the classification of molecules, based on their mechanism of action, has become difficult. Systematic studies to assess both the efficacy and the safety of such molecules are necessary. Moreover, the evidence that bleaching compounds are fairly ineffective on dermal accumulation of melanin has prompted investigations on the effectiveness of physical therapies, such as lasers. This review which describes the different approaches to obtain depigmentation, suggests a classification of whitening molecules on the basis of the mechanism by which they interfere with melanogenesis, and confirms the necessity to apply standardized protocols to evaluate depigmenting treatments.",
"title": ""
},
{
"docid": "c998a930d1c1eb4c3bfd53dfc752539b",
"text": "We propose a new method for semantic instance segmentation, by first computing how likely two pixels are to belong to the same object, and then by grouping similar pixels together. Our similarity metric is based on a deep, fully convolutional embedding model. Our grouping method is based on selecting all points that are sufficiently similar to a set of “seed points’, chosen from a deep, fully convolutional scoring model. We show competitive results on the Pascal VOC instance segmentation benchmark.",
"title": ""
},
{
"docid": "c237facfc6639dfff82659f927a25267",
"text": "The scientific approach to understand the nature of consciousness revolves around the study of human brain. Neurobiological studies that compare the nervous system of different species have accorded highest place to the humans on account of various factors that include a highly developed cortical area comprising of approximately 100 billion neurons, that are intrinsically connected to form a highly complex network. Quantum theories of consciousness are based on mathematical abstraction and Penrose-Hameroff Orch-OR Theory is one of the most promising ones. Inspired by Penrose-Hameroff Orch-OR Theory, Behrman et. al. (Behrman, 2006) have simulated a quantum Hopfield neural network with the structure of a microtubule. They have used an extremely simplified model of the tubulin dimers with each dimer represented simply as a qubit, a single quantum two-state system. The extension of this model to n-dimensional quantum states, or n-qudits presented in this work holds considerable promise for even higher mathematical abstraction in modelling consciousness systems.",
"title": ""
},
{
"docid": "15f75935c0a17f52790be930d656d171",
"text": "It is a well-known issue that attack primitives which exploit memory corruption vulnerabilities can abuse the ability of processes to automatically restart upon termination. For example, network services like FTP and HTTP servers are typically restarted in case a crash happens and this can be used to defeat Address Space Layout Randomization (ASLR). Furthermore, recently several techniques evolved that enable complete process memory scanning or code-reuse attacks against diversified and unknown binaries based on automated restarts of server applications. Until now, it is believed that client applications are immune against exploit primitives utilizing crashes. Due to their hard crash policy, such applications do not restart after memory corruption faults, making it impossible to touch memory more than once with wrong permissions. In this paper, we show that certain client application can actually survive crashes and are able to tolerate faults, which are normally critical and force program termination. To this end, we introduce a crash-resistance primitive and develop a novel memory scanning method with memory oracles without the need for control-flow hijacking. We show the practicability of our methods for 32-bit Internet Explorer 11 on Windows 8.1, and Mozilla Firefox 64-bit (Windows 8.1 and Linux 3.17.1). Furthermore, we demonstrate the advantages an attacker gains to overcome recent code-reuse defenses. Latest advances propose fine-grained re-randomization of the address space and code layout, or hide sensitive information such as code pointers to thwart tampering or misuse. We show that these defenses need improvements since crash-resistance weakens their security assumptions. To this end, we introduce the concept of CrashResistant Oriented Programming (CROP). We believe that our results and the implications of memory oracles will contribute to future research on defensive schemes against code-reuse attacks.",
"title": ""
},
{
"docid": "53d14e6dc9af930b5866b973731df5f5",
"text": "In recent years, malware has emerged as a critical security threat. In addition, malware authors continue to embed numerous anti-detection features to evade the existing malware detection approaches. Against this advanced class of malicious programs, dynamic behavior-based malware detection approaches outperform the traditional signature-based approaches by neutralizing the effects of obfuscation and morphing techniques. The majority of dynamic behavior detectors rely on system-calls to model the infection and propagation dynamics of malware. However, these approaches do not account an important anti-detection feature of modern malware, i.e., systemcall injection attack. This attack allows the malicious binaries to inject irrelevant and independent system-calls during the program execution thus modifying the execution sequences defeating the existing system-call-based detection. To address this problem, we propose an evasion-proof solution that is not vulnerable to system-call injection attacks. Our proposed approach characterizes program semantics using asymptotic equipartition property (AEP) mainly applied in information theoretic domain. The AEP allows us to extract information-rich call sequences that are further quantified to detect the malicious binaries. Furthermore, the proposed detection model is less vulnerable to call-injection attacks as the discriminating components are not directly visible to malware authors. We run a thorough set of experiments to evaluate our solution and compare it with the existing system-call-based malware detection techniques. The results demonstrate that the proposed solution is effective in identifying real malware instances.",
"title": ""
},
{
"docid": "72af2dae133773efb4ccdbf3cc227ff8",
"text": "This paper aims to propose a system design, working on the basis of the Internet of Things (IoT) LoRa, for tracking and monitoring the patient with mental disorder. The system consists of a LoRa client, which is a tracking device on end devices installed on the patient, and LoRa gateways, installed in hospitals and other public locations. The LoRa gateways are connected to local servers and cloud servers by utilizing both mobile cellular and Wi-Fi networks as the communications media. The feasibility of the system design is developed by employing the results of our previous work on LoRa performance in the Line of Sight (LoS) and Non-Line of Sight (Non-LoS) environments. Discussions are presented concerning the LoRa network performance, battery power and scalability. The future work is to build the proposed the design in a real system scenarios.",
"title": ""
},
{
"docid": "8f0e9f9a3e23e701eae4f3444d933301",
"text": "Reliability is a major concern for memories. To ensure that errors do not affect the data stored in a memory, error correction codes (ECCs) are widely used in memories. ECCs introduce an overhead as some bits are added to each word to detect and correct errors. This increases the cost of the memory. Content addressable memories (CAMs) are a special type of memories in which the input is compared with the data stored, and if a match is found, the output is the address of that word. CAMs are used in many computing and networking applications. In this brief, the specific features of CAMs are used to reduce the cost of implementing ECCs. More precisely, the proposed technique eliminates the need to store the ECC bits for each word in the memory. This is done by embedding those bits into the address of the key. The main potential issue of the new scheme is that it restricts the addresses in which a new key can be stored. Therefore, it can occur that a new key cannot be added into the CAM when there are addresses that are not used. This issue is analyzed and evaluated showing that, for large CAMs, it would only occur when the CAM occupancy is close to 100%. Therefore, the proposed scheme can be used to effectively reduce the cost of implementing ECCs in CAMs.",
"title": ""
},
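A toy sketch of the storage idea described above: derive the check bits from the key itself and fold them into the address at which the key is stored, so no extra ECC bits are kept per word. The 8-bit keys, the two parity-style check bits, and the address layout are illustrative assumptions, not the brief's actual code construction.

```python
# Illustration only: restrict a key's allowed CAM addresses so that the low address
# bits encode check bits derived from the key, instead of storing those bits per word.
def check_bits(key, width=8):
    """Two toy check bits: overall parity and parity of the even-indexed bits."""
    bits = [(key >> i) & 1 for i in range(width)]
    return (sum(bits) & 1, sum(bits[0::2]) & 1)

def allowed_addresses(key, addr_width=6):
    """Addresses whose two low bits equal the key's check bits."""
    p0, p1 = check_bits(key)
    low = (p1 << 1) | p0
    return [a for a in range(1 << addr_width) if (a & 0b11) == low]

key = 0b1011_0010
print(f"check bits: {check_bits(key)}; first allowed addresses: {allowed_addresses(key)[:4]}")
```

This also makes the trade-off mentioned in the record visible: each key can only be placed at a quarter of the addresses in this toy layout, which is why insertion may fail near full occupancy.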
{
"docid": "42b9f909251aeb850a1bfcdf7ec3ace4",
"text": "Kidney stones are one of the most common chronic disorders in industrialized countries. In patients with kidney stones, the goal of medical therapy is to prevent the formation of new kidney stones and to reduce growth of existing stones. The evaluation of the patient with kidney stones should identify dietary, environmental, and genetic factors that contribute to stone risk. Radiologic studies are required to identify the stone burden at the time of the initial evaluation and to follow up the patient over time to monitor success of the treatment program. For patients with a single stone an abbreviated laboratory evaluation to identify systemic disorders usually is sufficient. For patients with multiple kidney stones 24-hour urine chemistries need to be measured to identify abnormalities that predispose to kidney stones, which guides dietary and pharmacologic therapy to prevent future stone events.",
"title": ""
},
{
"docid": "4a7a4db8497b0d13c8411100dab1b207",
"text": "A novel and simple resolver-to-dc converter is presented. It is shown that by appropriate processing of the sine and cosine resolver signals, the proposed converter may produce an output voltage proportional to the shaft angle. A dedicated compensation method is applied to produce an almost perfectly linear output. This enables determination of the angle with reasonable accuracy without a processor and/or a look-up table. The tests carried out under various operating conditions are satisfactory and in good agreement with theory. This paper gives the theoretical analysis, the computer simulation, the full circuit details, and experimental results of the proposed scheme.",
"title": ""
},
{
"docid": "1a9904a34d194d62cd70a61ff8751add",
"text": "Advertising endorser play a key role on information transmission between manufacturers and consumers. Its purpose is to draw consumers’ attention and interest in order to achieve the object of communication with consumers. This research is mainly in the discussion of the influences of endorser, brand image, brand equity, price promotion on purchase intention, and the results of the study are as follows:(1) brand equity has a significant influence on endorser, (2) brand image has a significant influence on endorser, (3) endorser have a significant influence on purchase intention, (4) price promotion has a significant influence on brand equity, (5) price promotion has a significant influence on purchase intention, (6) advertising endorser mediates the relationship between brand image and purchase intention, and (7) advertising endorser mediates the relationship between brand equity and the purchase intention.",
"title": ""
},
{
"docid": "df36496e721bf3f0a38791b6a4b99b2d",
"text": "Support for an extremist entity such as Islamic State (ISIS) somehow manages to survive globally online despite considerable external pressure and may ultimately inspire acts by individuals having no history of extremism, membership in a terrorist faction, or direct links to leadership. Examining longitudinal records of online activity, we uncovered an ecology evolving on a daily time scale that drives online support, and we provide a mathematical theory that describes it. The ecology features self-organized aggregates (ad hoc groups formed via linkage to a Facebook page or analog) that proliferate preceding the onset of recent real-world campaigns and adopt novel adaptive mechanisms to enhance their survival. One of the predictions is that development of large, potentially potent pro-ISIS aggregates can be thwarted by targeting smaller ones.",
"title": ""
},
{
"docid": "cc1959b1beeb8f5460c39c9d4f55d9e4",
"text": "The DBLP Computer Science Bibliography evolved from an early small experimental Web server to a popular service for the computer science community. Many design decisions and details of the public XML-records behind DBLP never were documented. This paper is a review of the evolution of DBLP. The main perspective is data modeling. In DBLP persons play a central role, our discussion of person names may be applicable to many other data bases. All DBLP data are available for your own experiments. You may either download the complete set, or use a simple XML-based API described in an online appendix.",
"title": ""
},
{
"docid": "9b2f4394cabd31008773049c32dea963",
"text": "Twenty-two decision tree, nine statistical, and two neural network algorithms are compared on thirty-two datasets in terms of classification accuracy, training time, and (in the case of trees) number of leaves. Classification accuracy is measured by mean error rate and mean rank of error rate. Both criteria place a statistical, spline-based, algorithm called POLYCLSSS at the top, although it is not statistically significantly different from twenty other algorithms. Another statistical algorithm, logistic regression, is second with respect to the two accuracy criteria. The most accurate decision tree algorithm is QUEST with linear splits, which ranks fourth and fifth, respectively. Although spline-based statistical algorithms tend to have good accuracy, they also require relatively long training times. POLYCLASS, for example, is third last in terms of median training time. It often requires hours of training compared to seconds for other algorithms. The QUEST and logistic regression algorithms are substantially faster. Among decision tree algorithms with univariate splits, C4.5, IND-CART, and QUEST have the best combinations of error rate and speed. But C4.5 tends to produce trees with twice as many leaves as those from IND-CART and QUEST.",
"title": ""
},
{
"docid": "3b302983e03e37098399a2ca9d6a1cb1",
"text": "Eye movements can be used as alternative inputs for human-computer interface (HCI) systems such as virtual or augmented reality systems as well as new communication ways for patients with locked-in syndrome. In this study, we developed a real-time electrooculogram (EOG)-based eye-writing recognition system, with which users can write predefined symbolic patterns with their volitional eye movements. For the “eye-writing” recognition, the proposed system first reconstructs the eye-written traces from EOG waveforms in real-time; then, the system recognizes the intended symbolic inputs with a reliable recognition rate by matching the input traces with the trained eye-written traces of diverse input patterns. Experiments with 20 participants showed an average recognition rate of 87.38% (F1 score) for 29 different symbolic patterns (26 lower case alphabet characters and three functional input patterns representing Space, Backspace, and Enter keys), demonstrating the promise of our EOG-based eye-writing recognition system in practical scenarios.",
"title": ""
},
{
"docid": "1352c0dfb000c5f577ad0571d4c0601d",
"text": "Time and space are the basic building blocks of nature. As a unique existent in nature, our brain exists in time and takes up space. The brain's activity itself also constitutes and spreads in its own (intrinsic) time and space that is crucial for consciousness. Consciousness is a complex phenomenon including different dimensions: level/state, content/form, phenomenal aspects, and cognitive features. We propose a Temporo-spatial Theory of Consciousness (TTC) focusing primarily on the temporal and spatial features of the brain activity. We postulate four different neuronal mechanisms accounting for the different dimensions of consciousness: (i) \"temporo-spatial nestedness\" of the spontaneous activity accounts for the level/state of consciousness as neural predisposition of consciousness (NPC); (ii) \"temporo-spatial alignment\" of the pre-stimulus activity accounts for the content/form of consciousness as neural prerequisite of consciousness (preNCC); (iii) \"temporo-spatial expansion\" of early stimulus-induced activity accounts for phenomenal consciousness as neural correlates of consciousness (NCC); (iv) \"temporo-spatial globalization\" of late stimulus-induced activity accounts for the cognitive features of consciousness as neural consequence of consciousness (NCCcon).",
"title": ""
},
{
"docid": "a357ce62099cd5b12c09c688c5b9736e",
"text": "Considerations of personal identity bear on John Searle's Chinese Room argument, and on the opposed position that a computer itself could really understand a natural language. In this paper I develop the notion of a virtual person, modelled on the concept of virtual machines familiar in computer science. I show how Searle's argument, and J. Maloney's attempt to defend it, fail. I conclude that Searle is correct in holding that no digital machine could understand language, but wrong in holding that artificial minds are impossible: minds and persons are not the same as the machines, biological or electronic, that realize them.",
"title": ""
},
{
"docid": "f1e03d9f810409cd470ae65683553a0d",
"text": "Emergency departments (ED) face significant challenges in delivering high quality and timely patient care on an ever-present background of increasing patient numbers and limited hospital resources. A mismatch between patient demand and the ED's capacity to deliver care often leads to poor patient flow and departmental crowding. These are associated with reduction in the quality of the care delivered and poor patient outcomes. A literature review was performed to identify evidence-based strategies to reduce the amount of time patients spend in the ED in order to improve patient flow and reduce crowding in the ED. The use of doctor triage, rapid assessment, streaming and the co-location of a primary care clinician in the ED have all been shown to improve patient flow. In addition, when used effectively point of care testing has been shown to reduce patient time in the ED. Patient flow and departmental crowding can be improved by implementing new patterns of working and introducing new technologies such as point of care testing in the ED.",
"title": ""
},
{
"docid": "35e377e94b9b23283eabf141bde029a2",
"text": "We present a global optimization approach to optical flow estimation. The approach optimizes a classical optical flow objective over the full space of mappings between discrete grids. No descriptor matching is used. The highly regular structure of the space of mappings enables optimizations that reduce the computational complexity of the algorithm's inner loop from quadratic to linear and support efficient matching of tens of thousands of nodes to tens of thousands of displacements. We show that one-shot global optimization of a classical Horn-Schunck-type objective over regular grids at a single resolution is sufficient to initialize continuous interpolation and achieve state-of-the-art performance on challenging modern benchmarks.",
"title": ""
},
{
"docid": "9eb29fb373feaf664579e5b27db050a7",
"text": "A synthesis matrix is a table that summarizes various aspects of multiple documents. In our work, we specifically examine a problem of automatically generating a synthesis matrix for scientific literature review. As described in this paper, we first formulate the task as multidocument summarization and question-answering tasks given a set of aspects of the review based on an investigation of system summary tables of NLP tasks. Next, we present a method to address the former type of task. Our system consists of two steps: sentence ranking and sentence selection. In the sentence ranking step, the system ranks sentences in the input papers by regarding aspects as queries. We use LexRank and also incorporate query expansion and word embedding to compensate for tersely expressed queries. In the sentence selection step, the system selects sentences that remain in the final output. Specifically emphasizing the summarization type aspects, we regard this step as an integer linear programming problem with a special type of constraint imposed to make summaries comparable. We evaluated our system using a dataset we created from the ACL Anthology. The results of manual evaluation demonstrated that our selection method using comparability improved",
"title": ""
},
{
"docid": "c034cb6e72bc023a60b54d0f8316045a",
"text": "This thesis presents the design, implementation, and valid ation of a system that enables a micro air vehicle to autonomously explore and map unstruct u ed and unknown indoor environments. Such a vehicle would be of considerable use in many real-world applications such as search and rescue, civil engineering inspection, an d a host of military tasks where it is dangerous or difficult to send people. While mapping and exploration capabilities are common for ground vehicles today, air vehicles seeking t o achieve these capabilities face unique challenges. While there has been recent progres s toward sensing, control, and navigation suites for GPS-denied flight, there have been few demonstrations of stable, goal-directed flight in real environments. The main focus of this research is the development of real-ti me state estimation techniques that allow our quadrotor helicopter to fly autonomous ly in indoor, GPS-denied environments. Accomplishing this feat required the developm ent of a large integrated system that brought together many components into a cohesive packa ge. As such, the primary contribution is the development of the complete working sys tem. I show experimental results that illustrate the MAV’s ability to navigate accurat ely in unknown environments, and demonstrate that our algorithms enable the MAV to operate au tonomously in a variety of indoor environments. Thesis Supervisor: Nicholas Roy Title: Associate Professor of Aeronautics and Astronautic s",
"title": ""
}
] |
scidocsrr
|
a73c26f261d2d2c4da84c3306f724015
|
Bayesian Based Approach Learning for Outcome Prediction of Soccer Matches
|
[
{
"docid": "2ab8c692ef55d2501ff61f487f91da9c",
"text": "A common discussion subject for the male part of the population in particular, is the prediction of next weekend’s soccer matches, especially for the local team. Knowledge of offensive and defensive skills is valuable in the decision process before making a bet at a bookmaker. In this article we take an applied statistician’s approach to the problem, suggesting a Bayesian dynamic generalised linear model to estimate the time dependent skills of all teams in a league, and to predict next weekend’s soccer matches. The problem is more intricate than it may appear at first glance, as we need to estimate the skills of all teams simultaneously as they are dependent. It is now possible to deal with such inference problems using the iterative simulation technique known as Markov Chain Monte Carlo. We will show various applications of the proposed model based on the English Premier League and Division 1 1997-98; Prediction with application to betting, retrospective analysis of the final ranking, detection of surprising matches and how each team’s properties vary during the season.",
"title": ""
}
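As a much-simplified, static analogue of the team-skill model in the record above (no MCMC and no time dynamics), the sketch below turns assumed attack and defence strengths into home-win/draw/away-win probabilities under independent Poisson scoring. All parameter values, the home-advantage term, and the truncation at 10 goals are invented for illustration.

```python
# Independent-Poisson toy model: team strengths -> match outcome probabilities.
import numpy as np
from math import exp, factorial

def poisson_pmf(k, mu):
    return exp(-mu) * mu ** k / factorial(k)

def outcome_probs(attack_home, defence_home, attack_away, defence_away,
                  home_adv=0.3, max_goals=10):
    """P(home win), P(draw), P(away win) under independent Poisson scoring."""
    mu_home = exp(attack_home - defence_away + home_adv)
    mu_away = exp(attack_away - defence_home)
    grid = np.array([[poisson_pmf(h, mu_home) * poisson_pmf(a, mu_away)
                      for a in range(max_goals + 1)]
                     for h in range(max_goals + 1)])
    home = np.tril(grid, -1).sum()   # home goals > away goals
    draw = np.trace(grid)            # equal scores
    away = np.triu(grid, 1).sum()    # away goals > home goals
    return home, draw, away

print(outcome_probs(0.2, 0.1, 0.0, -0.1))
```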
] |
[
{
"docid": "78d1a0f7a66d3533b1a00d865eeb6abd",
"text": "Motivated by a real-life problem of sharing social network data that contain sensitive personal information, we propose a novel approach to release and analyze synthetic graphs in order to protect privacy of individual relationships captured by the social network while maintaining the validity of statistical results. A case study using a version of the Enron e-mail corpus dataset demonstrates the application and usefulness of the proposed techniques in solving the challenging problem of maintaining privacy and supporting open access to network data to ensure reproducibility of existing studies and discovering new scientific insights that can be obtained by analyzing such data. We use a simple yet effective randomized response mechanism to generate synthetic networks under -edge differential privacy, and then use likelihood based inference for missing data and Markov chain Monte Carlo techniques to fit exponential-family random graph models to the generated synthetic networks.",
"title": ""
},
{
"docid": "3106e93134e7000ab8cad6b9527b9360",
"text": "Traditional GIS tools and systems are powerful for analyzing geographic information for various applications but they are not designed for processing dynamic streams of data. This paper presents a CyberGIS framework that can automatically synthesize multi-sourced data, such as social media and socioeconomic data, to track disaster events, to produce maps, and to perform spatial and statistical analysis for disaster management. Within our framework, Apache Hive, Hadoop, and Mahout are used as scalable distributed storage, computing environment and machine learning library to store, process and mine massive social media data. The proposed framework is capable of supporting big data analytics of multiple sources. A prototype is implemented and tested using the 2011 Hurricane Sandy as a case study.",
"title": ""
},
{
"docid": "9520b99708d905d3713867fac14c3814",
"text": "When people work together to analyze a data set, they need to organize their findings, hypotheses, and evidence, share that information with their collaborators, and coordinate activities amongst team members. Sharing externalizations (recorded information such as notes) could increase awareness and assist with team communication and coordination. However, we currently know little about how to provide tool support for this sort of sharing. We explore how linked common work (LCW) can be employed within a `collaborative thinking space', to facilitate synchronous collaborative sensemaking activities in Visual Analytics (VA). Collaborative thinking spaces provide an environment for analysts to record, organize, share and connect externalizations. Our tool, CLIP, extends earlier thinking spaces by integrating LCW features that reveal relationships between collaborators' findings. We conducted a user study comparing CLIP to a baseline version without LCW. Results demonstrated that LCW significantly improved analytic outcomes at a collaborative intelligence task. Groups using CLIP were also able to more effectively coordinate their work, and held more discussion of their findings and hypotheses. LCW enabled them to maintain awareness of each other's activities and findings and link those findings to their own work, preventing disruptive oral awareness notifications.",
"title": ""
},
{
"docid": "f63da8e7659e711bcb7a148ea12a11f2",
"text": "We have presented two CCA-based approaches for data fusion and group analysis of biomedical imaging data and demonstrated their utility on fMRI, sMRI, and EEG data. The results show that CCA and M-CCA are powerful tools that naturally allow the analysis of multiple data sets. The data fusion and group analysis methods presented are completely data driven, and use simple linear mixing models to decompose the data into their latent components. Since CCA and M-CCA are based on second-order statistics they provide a relatively lessstrained solution as compared to methods based on higherorder statistics such as ICA. While this can be advantageous, the flexibility also tends to lead to solutions that are less sparse than those obtained using assumptions of non-Gaussianity-in particular superGaussianity-at times making the results more difficult to interpret. Thus, it is important to note that both approaches provide complementary perspectives, and hence it is beneficial to study the data using different analysis techniques.",
"title": ""
},
{
"docid": "0f45452e8c9ca8aaf501e7e89685746b",
"text": "Chatbots are programs that mimic human conversation using Artificial Intelligence (AI). It is designed to be the ultimate virtual assistant, entertainment purpose, helping one to complete tasks ranging from answering questions, getting driving directions, turning up the thermostat in smart home, to playing one's favorite tunes etc. Chatbot has become more popular in business groups right now as they can reduce customer service cost and handles multiple users at a time. But yet to accomplish many tasks there is need to make chatbots as efficient as possible. To address this problem, in this paper we provide the design of a chatbot, which provides an efficient and accurate answer for any query based on the dataset of FAQs using Artificial Intelligence Markup Language (AIML) and Latent Semantic Analysis (LSA). Template based and general questions like welcome/ greetings and general questions will be responded using AIML and other service based questions uses LSA to provide responses at any time that will serve user satisfaction. This chatbot can be used by any University to answer FAQs to curious students in an interactive fashion.",
"title": ""
},
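A minimal sketch of the LSA retrieval step described above, assuming scikit-learn is available: FAQ entries are projected into a truncated-SVD space and the answer with the highest cosine similarity to the query is returned. The toy FAQ list and the number of components are assumptions, and the AIML template handling from the paper is not shown.

```python
# Toy LSA-based FAQ retrieval: TF-IDF -> truncated SVD -> cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

faq_questions = [
    "What are the library opening hours?",
    "How do I apply for a hostel room?",
    "When does course registration close?",
]
faq_answers = [
    "The library is open from 8 am to 10 pm on weekdays.",
    "Hostel applications are submitted through the student portal.",
    "Course registration closes at the end of the second week of the semester.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(faq_questions)
svd = TruncatedSVD(n_components=2, random_state=0)
lsa = svd.fit_transform(tfidf)                      # FAQ questions in latent space

def answer(query):
    q = svd.transform(vectorizer.transform([query]))
    sims = cosine_similarity(q, lsa)[0]
    return faq_answers[int(sims.argmax())]

print(answer("what time does the library open"))
```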
{
"docid": "c3f7f9b70763c012698cad8295e50f2c",
"text": "Recommender systems are widely used in many areas, especially in e-commerce. Recently, they are also applied in e-learning tasks such as recommending resources (e.g. papers, books,..) to the learners (students). In this work, we propose a novel approach which uses recommender system techniques for educational data mining, especially for predicting student performance. To validate this approach, we compare recommender system techniques with traditional regression methods such as logistic/linear regression by using educational data for intelligent tutoring systems. Experimental results show that the proposed approach can improve prediction results.",
"title": ""
},
{
"docid": "79a20b9a059a2b4cc73120812c010495",
"text": "The present article summarizes the state of the art algorithms to compute the discrete Moreau envelope, and presents a new linear-time algorithm, named NEP for NonExpansive Proximal mapping. Numerical comparisons between the NEP and two existing algorithms: The Linear-time Legendre Transform (LLT) and the Parabolic Envelope (PE) algorithms are performed. Worst-case time complexity, convergence results, and examples are included. The fast Moreau envelope algorithms first factor the Moreau envelope as several one-dimensional transforms and then reduce the brute force quadratic worst-case time complexity to linear time by using either the equivalence with Fast Legendre Transform algorithms, the computation of a lower envelope of parabolas, or, in the convex case, the non expansiveness of the proximal mapping.",
"title": ""
},
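For orientation, the brute-force quadratic-time definition that the linear-time algorithms above (LLT, PE, NEP) are designed to beat can be written in a few lines; the grid and the test function here are illustrative only.

```python
# Brute-force O(n^2) reference for the 1D discrete Moreau envelope:
#   M_lam f(x_i) = min_j [ f(x_j) + (x_i - x_j)^2 / (2*lam) ]
import numpy as np

def moreau_envelope_bruteforce(x, f_vals, lam):
    diffs = x[:, None] - x[None, :]                    # all pairwise x_i - x_j
    return np.min(f_vals[None, :] + diffs ** 2 / (2.0 * lam), axis=1)

x = np.linspace(-2.0, 2.0, 9)
f_vals = np.abs(x)                                     # f(x) = |x|
print(moreau_envelope_bruteforce(x, f_vals, lam=0.5))  # smoothed (Huber-like) values
```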
{
"docid": "7efa3543711bc1bb6e3a893ed424b75d",
"text": "This dissertation is concerned with the creation of training data and the development of probability models for statistical parsing of English with Combinatory Categorial Grammar (CCG). Parsing, or syntactic analysis, is a prerequisite for semantic interpretation, and forms therefore an integral part of any system which requires natural language understanding. Since almost all naturally occurring sentences are ambiguous, it is not sufficient (and often impossible) to generate all possible syntactic analyses. Instead, the parser needs to rank competing analyses and select only the most likely ones. A statistical parser uses a probability model to perform this task. I propose a number of ways in which such probability models can be defined for CCG. The kinds of models developed in this dissertation, generative models over normal-form derivation trees, are particularly simple, and have the further property of restricting the set of syntactic analyses to those corresponding to a canonical derivation structure. This is important to guarantee that parsing can be done efficiently. In order to achieve high parsing accuracy, a large corpus of annotated data is required to estimate the parameters of the probability models. Most existing wide-coverage statistical parsers use models of phrase-structure trees estimated from the Penn Treebank, a 1-million-word corpus of manually annotated sentences from the Wall Street Journal. This dissertation presents an algorithm which translates the phrase-structure analyses of the Penn Treebank to CCG derivations. The resulting corpus, CCGbank, is used to train and test the models proposed in this dissertation. Experimental results indicate that parsing accuracy (when evaluated according to a comparable metric, the recovery of unlabelled word-word dependency relations), is as high as that of standard Penn Treebank parsers which use similar modelling techniques. Most existing wide-coverage statistical parsers use simple phrase-structure grammars whose syntactic analyses fail to capture long-range dependencies, and therefore do not correspond to directly interpretable semantic representations. By contrast, CCG is a grammar formalism in which semantic representations that include long-range dependencies can be built directly during the derivation of syntactic structure. These dependencies define the predicate-argument structure of a sentence, and are used for two purposes in this dissertation: First, the performance of the parser can be evaluated according to how well it recovers these dependencies. In contrast to purely syntactic evaluations, this yields a direct measure of how accurate the semantic interpretations returned by the parser are. Second, I propose a generative model that captures the local and non-local dependencies in the predicate-argument structure, and investigate the impact of modelling non-local in addition to local dependencies.",
"title": ""
},
{
"docid": "0b813100014a91461898a8762c48b9cd",
"text": "In this paper we provide empirical evidence that the rating that an app attracts can be accurately predicted from the features it offers. Our results, based on an analysis of 11,537 apps from the Samsung Android and BlackBerry World app stores, indicate that the rating of 89% of these apps can be predicted with 100% accuracy. Our prediction model is built by using feature and rating information from the existing apps offered in the App Store and it yields highly accurate rating predictions, using only a few (11-12) existing apps for case-based prediction. These findings may have important implications for requirements engineering in app stores: They indicate that app developers may be able to obtain (very accurate) assessments of the customer reaction to their proposed feature sets (requirements), thereby providing new opportunities to support the requirements elicitation process for app developers.",
"title": ""
},
{
"docid": "925aacab817a20ff527afd4100c2a8bd",
"text": "This paper presents an efficient design approach for band-pass post filters in waveguides, based on mode-matching technique. With this technique, the characteristics of symmetrical cylindrical post arrangements in the cross-section of the considered waveguides can be analyzed accurately and quickly. Importantly, the approach is applicable to post filters in waveguide but can be extended to Substrate Integrated Waveguide (SIW) technologies. The fast computations provide accurate relationships for the K factors as a function of the post radii and the distances between posts, and allow analyzing the influence of machining tolerances on the filter performance. The computations are used to choose reasonable posts for designing band-pass filters, while the error analysis helps to judge whether a given machining precision is sufficient. The approach is applied to a Chebyshev band-pass post filter and a band-pass SIW filter with a center frequency of 10.5 GHz and a fractional bandwidth of 9.52% with verification via full-wave simulations using HFSS and measurements on manufactured prototypes.",
"title": ""
},
{
"docid": "6c0021aebabc2eae4ba31334443357a6",
"text": "The trend of pushing deep learning from cloud to edge due to concerns of latency, bandwidth, and privacy has created demand for low-energy deep convolutional neural networks (CNNs). The single-layer classifier in [1] achieves sub-nJ operation, but is limited to moderate accuracy on low-complexity tasks (90% on MNIST). Larger CNN chips provide dataflow computing for high-complexity tasks (AlexNet) at mJ energy [2], but edge deployment remains a challenge due to off-chip DRAM access energy. This paper describes a mixed-signal binary CNN processor that performs image classification of moderate complexity (86% on CIFAR-10) and employs near-memory computing to achieve a classification energy of 3.8μJ, a 40x improvement over TrueNorth [3]. We accomplish this using (1) the BinaryNet algorithm for CNNs with weights and activations constrained to +1/−1 [4], which drastically simplifies multiplications (XNOR) and allows integrating all memory on-chip; (2) an energy-efficient switched-capacitor (SC) neuron that addresses BinaryNet's challenge of wide vector summation; (3) architectural parallelism, parameter reuse, and locality.",
"title": ""
},
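A small numerical sketch of the BinaryNet-style arithmetic referenced above: with +1/-1 weights and activations, a dot product reduces to XNOR plus popcount. The layer sizes and random values are made up, and none of the chip's mixed-signal circuitry is modelled.

```python
# Binary (+1/-1) matrix multiply expressed as XNOR + popcount, checked against int matmul.
import numpy as np

def binarize(x):
    return np.where(x >= 0, 1, -1).astype(np.int8)

def xnor_popcount_matmul(a, w):
    """Equivalent of a @ w for +/-1 matrices, written as XNOR and popcount per output."""
    a_bits = (a > 0)                  # map +1 -> True, -1 -> False
    w_bits = (w > 0)
    n = a.shape[1]
    # XNOR counts matching bit pairs; the +/-1 dot product is 2*matches - n.
    matches = np.array([[np.sum(~(ai ^ wj)) for wj in w_bits.T] for ai in a_bits])
    return 2 * matches - n

rng = np.random.default_rng(0)
a = binarize(rng.normal(size=(4, 16)))     # binary activations
w = binarize(rng.normal(size=(16, 8)))     # binary weights
assert np.array_equal(xnor_popcount_matmul(a, w), a.astype(int) @ w.astype(int))
print(xnor_popcount_matmul(a, w))
```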
{
"docid": "556e737458015bf87047bb2f458fbd40",
"text": "Research in organizational learning has demonstrated processes and occasionally performance implications of acquisition of declarative (know-what) and procedural (know-how) knowledge. However, considerably less attention has been paid to learned characteristics of relationships that affect the decision to seek information from other people. Based on a review of the social network, information processing, and organizational learning literatures, along with the results of a previous qualitative study, we propose a formal model of information seeking in which the probability of seeking information from another person is a function of (1) knowing what that person knows; (2) valuing what that person knows; (3) being able to gain timely access to that person’s thinking; and (4) perceiving that seeking information from that person would not be too costly. We also hypothesize that the knowing, access, and cost variables mediate the relationship between physical proximity and information seeking. The model is tested using two separate research sites to provide replication. The results indicate strong support for the model and the mediation hypothesis (with the exception of the cost variable). Implications are drawn for the study of both transactive memory and organizational learning, as well as for management practice. (Information; Social Networks; Organizational Learning; Transactive Knowledge)",
"title": ""
},
{
"docid": "a1fe22519fbb1db450419b561b84fd91",
"text": "This paper proposes a novel approach for fast 3D reconstruction of an object inside a scene by using Inertial Measurement Unit (IMU) data. A network of cameras is used to observe the scene. For each camera within the network, a virtual camera is considered by using the concept of \\emph{infinite homography}. Such a virtual camera is downward and has optical axis parallel to the gravity vector. Then a set of virtual horizontal 3D planes are considered for the aim of 3D reconstruction. The intersection of these virtual parallel 3D planes with the object is computed using the concept of homography and by applying a 2D Bayesian occupancy grid for each plane. The experimental results validate both feasibility and effectiveness of the proposed method.",
"title": ""
},
{
"docid": "ba4fb2947987c87a5103616d4bc138de",
"text": "In intelligent tutoring systems with natural language dialogue, speech act classification, the task of detecting learners’ intentions, informs the system’s response mechanism. In this paper, we propose supervised machine learning models for speech act classification in the context of an online collaborative learning game environment. We explore the role of context (i.e. speech acts of previous utterances) for speech act classification. We compare speech act classification models trained and tested with contextual and non-contextual features (contents of the current utterance). The accuracy of the proposed models is high. A surprising finding is the modest role of context in automatically predicting the speech acts.",
"title": ""
},
{
"docid": "267445f1079566f74a05bc13d7cad1c1",
"text": "When Procter & Gamble’s CEO Bob McDonald set a strategic goal announcing “We want to be the first company that digitizes from end to end” he turned to CIO and head of Global Business Services Filippo Passerini to lead the transformation. Many CIOs tell us their jobs are expanding and many CEOs tell us they would like their CIOs to do more. In addition to providing highquality and cost-effective IT services, today’s CIO often has other, and growing, responsibilities. These include helping with revenue generation, delivering shared services, optimizing enterprise business processes, improving the customer experience, overseeing business operations and digitizing the entire firm. We think these new responsibilities, and the pressures they place on CIOs, are a symptom of one of the biggest opportunities and challenges enterprises face today—the ever-increasing digitization of business as part of the move toward a more digital economy.3 Every interaction between a customer and a business, between a business and another business, between",
"title": ""
},
{
"docid": "0b3ed0ce26999cb6188fb0c88eb483ab",
"text": "We consider the problem of learning causal networks with int erventions, when each intervention is limited in size under Pearl’s Structural Equation Model with independent e rrors (SEM-IE). The objective is to minimize the number of experiments to discover the causal directions of all the e dges in a causal graph. Previous work has focused on the use of separating systems for complete graphs for this task. We prove that any deterministic adaptive algorithm needs to be a separating system in order to learn complete graphs in t e worst case. In addition, we present a novel separating system construction, whose size is close to optimal and is ar guably simpler than previous work in combinatorics. We also develop a novel information theoretic lower bound on th e number of interventions that applies in full generality, including for randomized adaptive learning algorithms. For general chordal graphs, we derive worst case lower bound s o the number of interventions. Building on observations about induced trees, we give a new determinist ic adaptive algorithm to learn directions on any chordal skeleton completely. In the worst case, our achievable sche me is anα-approximation algorithm where α is the independence number of the graph. We also show that there exi st graph classes for which the sufficient number of experiments is close to the lower bound. In the other extreme , there are graph classes for which the required number of experiments is multiplicativelyα away from our lower bound. In simulations, our algorithm almost always performs very c lose to the lower bound, while the approach based on separating systems for complete graphs is significantly wor se for random chordal graphs.",
"title": ""
},
{
"docid": "df69a701bca12d3163857a9932ef51e2",
"text": "Students often have their own individual laptop computers in university classes, and researchers debate the potential benefits and drawbacks of laptop use. In the presented research, we used a combination of surveys and in-class observations to study how students use their laptops in an unmonitored and unrestricted class setting—a large lecture-based university class with nearly 3000 enrolled students. By analyzing computer use over the duration of long (165 minute) classes, we demonstrate how computer use changes over time. The observations and studentreports provided similar descriptions of laptop activities. Note taking was the most common use for the computers, followed by the use of social media web sites. Overall, the data show that students engaged in off-task computer activities for nearly two-thirds of the time. An analysis of the frequency of the various laptop activities over time showed that engagement in individual activities varied significantly over the duration of the class.",
"title": ""
},
{
"docid": "3a27da34a0b2534d121f44bc34085c52",
"text": "In recent years both practitioners and academics have shown an increasing interest in the assessment of marketing -performance. This paper explores the metrics that firms select and some reasons for those choices. Our data are drawn from two UK studies. The first reports practitioner usage by the main metrics categories (consumer behaviour and intermediate, trade customer, competitor, accounting and innovativeness). The second considers which individual metrics are seen as the most important and whether that differs by sector. The role of brand equity in performance assessment and top",
"title": ""
},
{
"docid": "f31dddb905b4e3fbf20a54bdba48ca36",
"text": "Word similarity computation is a fundamental task for natural language processing. We organize a semantic campaign of Chinese word similarity measurement at NLPCC-ICCPOL 2016. This task provides a benchmark dataset of Chinese word similarity (PKU-500 dataset), including 500 word pairs with their similarity scores. There are 21 teams submitting 24 systems in this campaign. In this paper, we describe clearly the data preparation and word similarity annotation, make an in-depth analysis on the evaluation results and give a brief introduction to participating systems.",
"title": ""
},
{
"docid": "53b32cdb6c3d511180d8cb194c286ef5",
"text": "Silymarin, a C25 containing flavonoid from the plant Silybum marianum, has been the gold standard drug to treat liver disorders associated with alcohol consumption, acute and chronic viral hepatitis, and toxin-induced hepatic failures since its discovery in 1960. Apart from the hepatoprotective nature, which is mainly due to its antioxidant and tissue regenerative properties, Silymarin has recently been reported to be a putative neuroprotective agent against many neurologic diseases including Alzheimer's and Parkinson's diseases, and cerebral ischemia. Although the underlying neuroprotective mechanism of Silymarin is believed to be due to its capacity to inhibit oxidative stress in the brain, it also confers additional advantages by influencing pathways such as β-amyloid aggregation, inflammatory mechanisms, cellular apoptotic machinery, and estrogenic receptor mediation. In this review, we have elucidated the possible neuroprotective effects of Silymarin and the underlying molecular events, and suggested future courses of action for its acceptance as a CNS drug for the treatment of neurodegenerative diseases.",
"title": ""
}
] |
scidocsrr
|
e5adee5fe8330e738f334942f811177b
|
Joint Sparse Representation and Robust Feature-Level Fusion for Multi-Cue Visual Tracking
|
[
{
"docid": "3a2456fce98db50aee2d342ef838b349",
"text": "There is a large variety of trackers, which have been proposed in the literature during the last two decades with some mixed success. Object tracking in realistic scenarios is a difficult problem, therefore, it remains a most active area of research in computer vision. A good tracker should perform well in a large number of videos involving illumination changes, occlusion, clutter, camera motion, low contrast, specularities, and at least six more aspects. However, the performance of proposed trackers have been evaluated typically on less than ten videos, or on the special purpose datasets. In this paper, we aim to evaluate trackers systematically and experimentally on 315 video fragments covering above aspects. We selected a set of nineteen trackers to include a wide variety of algorithms often cited in literature, supplemented with trackers appearing in 2010 and 2011 for which the code was publicly available. We demonstrate that trackers can be evaluated objectively by survival curves, Kaplan Meier statistics, and Grubs testing. We find that in the evaluation practice the F-score is as effective as the object tracking accuracy (OTA) score. The analysis under a large variety of circumstances provides objective insight into the strengths and weaknesses of trackers.",
"title": ""
}
] |
[
{
"docid": "77b796ab3536541b3f2a20512809a058",
"text": "We have measured the bulk optical properties of healthy female breast tissues in vivo in the parallel plate, transmission geometry. Fifty-two volunteers were measured. Blood volume and blood oxygen saturation were derived from the optical property data using a novel method that employed a priori spectral information to overcome limitations associated with simple homogeneous tissue models. The measurements provide an estimate of the variation of normal breast tissue optical properties in a fairly large population. The mean blood volume was 34 +/- 9 microM and the mean blood oxygen saturation was 68 +/- 8%. We also investigated the correlation of these optical properties with demographic factors such as body mass index (BMI) and age. We observed a weak correlation of blood volume and reduced scattering coefficient with BMI: correlation with age, however, was not evident within the statistical error of these experiments. The new information on healthy breast tissue provides insight about the potential contrasts available for diffuse optical tomography of breast tumours.",
"title": ""
},
{
"docid": "b1d8f5309972b5fe116e491cc738a2a5",
"text": "An important approach for describing a region is to quantify its structure content. In this paper the use of functions for computing texture based on statistical measures is prescribed. MPM (Maximizer of the posterior margins) algorithm is employed. The segmentation based on texture feature would classify the breast tissue under various categories. The algorithm evaluates the region properties of the mammogram image and thereby would classify the image into important segments. Images from mini-MIAS data base (Mammogram Image Analysis Society database (UK)) have been considered to conduct our experiments. The segmentation thus obtained is comparatively better than the other normal methods. The validation of the work has been done by visual inspection of the segmented image by an expert radiologist. This is our basic step for developing a computer aided detection (CAD) system for early detection of breast cancer.",
"title": ""
},
{
"docid": "d407b75f7ee6c3f0d504bddf39c2648e",
"text": "This article presents a recent and inclusive review of the use of token economies in various environments (schools, home, etc.). Digital and manual searches were carried using the following databases: Google Scholar, Psych Info (EBSCO), and The Web of Knowledge. The search terms included: token economy, token systems, token reinforcement, behavior modification, classroom management, operant conditioning, animal behavior, token literature reviews, and token economy concerns. The criteria for inclusion were studies that implemented token economies in settings where academics were assessed. Token economies have been extensively implemented and evaluated in the past. Few articles in the peerreviewed literature were found being published recently. While token economy reviews have occurred historically (Kazdin, 1972, 1977, 1982), there has been no recent overview of the research. During the previous several years, token economies in relation to certain disorders have been analyzed and reviewed; however, a recent review of token economies as a field of study has not been carried out. The purpose of this literature review was to produce a recent review and evaluation on the research of token economies across settings.",
"title": ""
},
{
"docid": "7113e007073184671d0bf5c9bdda1f5c",
"text": "It is widely accepted that mineral flotation is a very challenging control problem due to chaotic nature of process. This paper introduces a novel approach of combining multi-camera system and expert controllers to improve flotation performance. The system has been installed into the zinc circuit of Pyhäsalmi Mine (Finland). Long-term data analysis in fact shows that the new approach has improved considerably the recovery of the zinc circuit, resulting in a substantial increase in the mill’s annual profit. r 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cf609c174c70295ef57995f662ceda50",
"text": "Upper limb exercise is often neglected during post-stroke rehabilitation. Video games have been shown to be useful in providing environments in which patients can practise repetitive, functionally meaningful movements, and in inducing neuroplasticity. The design of video games is often focused upon a number of fundamental principles, such as reward, goals, challenge and the concept of meaningful play, and these same principles are important in the design of games for rehabilitation. Further to this, there have been several attempts for the strengthening of the relationship between commercial game design and rehabilitative game design, the former providing insight into factors that can increase motivation and engagement with the latter. In this article, we present an overview of various game design principles and the theoretical grounding behind their presence, in addition to attempts made to utilise these principles in the creation of upper limb stroke rehabilitation systems and the outcomes of their use. We also present research aiming to move the collaborative efforts of designers and therapists towards a model for the structured design of these games and the various steps taken concerning the theoretical classification and mapping of game design concepts with intended cognitive and motor outcomes.",
"title": ""
},
{
"docid": "068df0bef276b121d859b0d1c114acce",
"text": "Developing and testing algorithms for autonomous vehicles in real world is an expensive and time consuming process. Also, in order to utilize recent advances in machine intelligence and deep learning we need to collect a large amount of annotated training data in a variety of conditions and environments. We present a new simulator built on Unreal Engine that offers physically and visually realistic simulations for both of these goals. Our simulator includes a physics engine that can operate at a high frequency for real-time hardware-in-the-loop (HITL) simulations with support for popular protocols (e.g. MavLink). The simulator is designed from the ground up to be extensible to accommodate new types of vehicles, hardware platforms and software protocols. In addition, the modular design enables various components to be easily usable independently in other projects. We demonstrate the simulator by first implementing a quadrotor as an autonomous vehicle and then experimentally comparing the software components with real-world flights.",
"title": ""
},
{
"docid": "1bd06d0d120b28f5d0720643fcdb9944",
"text": "Indoor positioning system based on Receive Signal Strength Indication (RSSI) from Wireless access equipment have become very popular in recent years. This system is very useful in many applications such as tracking service for older people, mobile robot localization and so on. While Outdoor environment using Global Navigation Satellite System (GNSS) and cellular [14] network works well and widespread for navigator. However, there was a problem with signal propagation from satellites. They cannot be used effectively inside the building areas until a urban environment. In this paper we propose the Wi-Fi Fingerprint Technique using Fuzzy set theory to adaptive Basic K-Nearest Neighbor algorithm to classify the labels of a database system. It was able to improve the accuracy and robustness. The performance of our simple algorithm is evaluated by the experimental results which show that our proposed scheme can achieve a certain level of positioning system accuracy.",
"title": ""
},
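The fingerprinting idea in the passage above reduces to matching an observed RSSI vector against a database of reference measurements and weighting the K closest matches by similarity. The following is a minimal sketch of that weighted K-nearest-neighbor step only; the Euclidean signal-space distance, the inverse-distance weights, and all variable names are illustrative assumptions rather than details taken from the paper (which additionally applies fuzzy set theory to the classification step).

```python
import numpy as np

def wknn_locate(rssi, fingerprints, positions, k=3):
    """Weighted K-nearest-neighbor position estimate from an RSSI fingerprint database.

    rssi         : observed RSSI vector, one entry per access point
    fingerprints : (n_refs, n_aps) RSSI vectors recorded at known reference points
    positions    : (n_refs, 2) x/y coordinates of those reference points
    """
    d = np.linalg.norm(fingerprints - rssi, axis=1)   # distances in signal space
    idx = np.argsort(d)[:k]                           # indices of the K closest references
    w = 1.0 / (d[idx] + 1e-6)                         # inverse-distance weights
    return (w[:, None] * positions[idx]).sum(axis=0) / w.sum()

# usage with a toy database of four reference points and three access points
fp = np.array([[-40.0, -70.0, -60.0], [-55.0, -50.0, -65.0],
               [-70.0, -45.0, -55.0], [-60.0, -60.0, -40.0]])
xy = np.array([[0.0, 0.0], [0.0, 5.0], [5.0, 5.0], [5.0, 0.0]])
print(wknn_locate(np.array([-52.0, -53.0, -63.0]), fp, xy))
```

In a fuzzy variant, the hard top-K selection would be replaced by membership degrees derived from the distances.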
{
"docid": "2899b31339acbd774aff53fc99590a45",
"text": "An ultra-wideband patch antenna is presented for K-band communication. The antenna is designed by employing stacked geometry and aperture-coupled technique. The rectangular patch shape and coaxial fed configuration is used for particular design. The ultra-wideband characteristics are achieved by applying a specific surface resistance of 75Ω/square to the upper rectangular patch and it is excited through a rectangular slot made on the lower patch element (made of copper). The proposed patch antenna is able to operate in the frequency range of 12-27.3 GHz which is used in radar and satellite communication, commonly named as K-band. By employing a technique of thicker substrate and by applying a specific surface resistance to the upper patch element, an impedance bandwidth of 77.8% is achieved having VSWR ≤ 2. It is noted that the gain of proposed antenna is linearly increased in the frequency range of 12-26 GHz and after that the gain is decreased up to 6 dBi. Simulation results are presented to demonstrate the performance of proposed ultra-wideband microstrip patch antenna.",
"title": ""
},
{
"docid": "5487dd1976a164447c821303b53ebdf8",
"text": "Rapid and pervasive digitization of innovation processes and outcomes has upended extant theories on innovation management by calling into question fundamental assumptions about the definitional boundaries for innovation, agency for innovation, and the relationship between innovation processes and outcomes. There is a critical need for novel theorizing on digital innovation management that does not rely on such assumptions and draws on the rich and rapidly emerging research on digital technologies. We offer suggestions for such theorizing in the form of four new theorizing logics, or elements, that are likely to be valuable in constructing more accurate explanations of innovation processes and outcomes in an increasingly digital world. These logics can open new avenues for researchers to contribute to this important area. Our suggestions in this paper, coupled with the six research notes included in the special issue on digital innovation management, seek to offer a broader foundation for reinventing innovation management research in a digital world.",
"title": ""
},
{
"docid": "fdebcc3ec36a61186b893773eedbd529",
"text": "OBJECTIVE\nClinical observations of the flexion synergy in individuals with chronic hemiparetic stroke describe coupling of shoulder, elbow, wrist, and finger joints. Yet, experimental quantification of the synergy within a shoulder abduction (SABD) loading paradigm has focused only on shoulder and elbow joints. The paretic wrist and fingers have typically been studied in isolation. Therefore, this study quantified involuntary behavior of paretic wrist and fingers during concurrent activation of shoulder and elbow.\n\n\nMETHODS\nEight individuals with chronic moderate-to-severe hemiparesis and four controls participated. Isometric wrist/finger and thumb flexion forces and wrist/finger flexor and extensor electromyograms (EMG) were measured at two positions when lifting the arm: in front of the torso and at maximal reaching distance. The task was completed in the ACT(3D) robotic device with six SABD loads by paretic, non-paretic, and control limbs.\n\n\nRESULTS\nConsiderable forces and EMG were generated during lifting of the paretic arm only, and they progressively increased with SABD load. Additionally, the forces were greater at the maximal reach position than at the position front of the torso.\n\n\nCONCLUSIONS\nFlexion of paretic wrist and fingers is involuntarily coupled with certain shoulder and elbow movements.\n\n\nSIGNIFICANCE\nActivation of the proximal upper limb must be considered when seeking to understand, rehabilitate, or develop devices to assist the paretic hand.",
"title": ""
},
{
"docid": "8e5f2b976dfe8883e419fdc49bf53c78",
"text": "This paper studies the object transfiguration problem in wild images. The generative network in classical GANs for object transfiguration often undertakes a dual responsibility: to detect the objects of interests and to convert the object from source domain to target domain. In contrast, we decompose the generative network into two separat networks, each of which is only dedicated to one particular sub-task. The attention network predicts spatial attention maps of images, and the transformation network focuses on translating objects. Attention maps produced by attention network are encouraged to be sparse, so that major attention can be paid to objects of interests. No matter before or after object transfiguration, attention maps should remain constant. In addition, learning attention network can receive more instructions, given the available segmentation annotations of images. Experimental results demonstrate the necessity of investigating attention in object transfiguration, and that the proposed algorithm can learn accurate attention to improve quality of generated images.",
"title": ""
},
{
"docid": "543b79408c3b66476efc66f3a29d1fb0",
"text": "Because of polysemy, distant labeling for information extraction leads to noisy training data. We describe a procedure for reducing this noise by using label propagation on a graph in which the nodes are entity mentions, and mentions are coupled when they occur in coordinate list structures. We show that this labeling approach leads to good performance even when off-the-shelf classifiers are used on the distantly-labeled data.",
"title": ""
},
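As a rough illustration of the graph-based label propagation described in the passage above, the sketch below iteratively spreads label distributions from a few seed mentions to their neighbors while keeping the seeds clamped. The dense adjacency matrix, the uniform initialization, and the fixed iteration count are assumptions for illustration, not details of the cited method.

```python
import numpy as np

def propagate_labels(adj, seed_labels, n_classes, iters=20):
    """Spread label distributions over a graph: each node repeatedly averages its
    neighbors' distributions, while seed nodes stay clamped to their known labels."""
    n = adj.shape[0]
    P = adj / np.maximum(adj.sum(axis=1, keepdims=True), 1e-12)  # row-stochastic transitions
    F = np.full((n, n_classes), 1.0 / n_classes)                 # uniform initial distributions
    eye = np.eye(n_classes)
    for node, label in seed_labels.items():                      # clamp the seeds
        F[node] = eye[label]
    for _ in range(iters):
        F = P @ F                                                # average over neighbors
        for node, label in seed_labels.items():                  # re-clamp after each sweep
            F[node] = eye[label]
    return F.argmax(axis=1)                                      # hard label per node

# usage: two small groups of mentions, seeded with one label each
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
print(propagate_labels(A, {0: 0, 4: 1}, n_classes=2))
```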
{
"docid": "76d029c669e84e420c8513bd837fb59b",
"text": "Since its original publication, the Semi-Global Matching (SGM) technique has been re-implemented by many researchers and companies. The method offers a very good trade off between runtime and accuracy, especially at object borders and fine structures. It is also robust against radiometric differences and not sensitive to the choice of parameters. Therefore, it is well suited for solving practical problems. The applications reach from remote sensing, like deriving digital surface models from aerial and satellite images, to robotics and driver assistance systems. This paper motivates and explains the method, shows current developments as well as examples from various applications.",
"title": ""
},
{
"docid": "994922edc3eb0527bba2f70e9b31870c",
"text": "A large body of literature explains the inferior position of unskilled workers by imposing a structural shift in the labor force skill composition. This paper takes a different approach by emphasizing the connection between cyclical variations in skilled and unskilled labor markets. Using a stylized business cycle model with search frictions in the respective sub-markets, I find that imperfect substitution between skilled and unskilled labor creates a channel for the variations in the sub-markets. Together with a general labor augmenting technology shock, it can generate downward sloping Beveridge curves. Calibrating the model to US data yields higher volatilities in the unskilled labor markets and reproduces stylized business cycle facts.",
"title": ""
},
{
"docid": "cf015ef9181bf2fcf39eb41f7fa9196e",
"text": "Channel estimation is useful in millimeter wave (mmWave) MIMO communication systems. Channel state information allows optimized designs of precoders and combiners under different metrics such as mutual information or signal-to-interference-noise (SINR) ratio. At mmWave, MIMO precoders and combiners are usually hybrid, since this architecture provides a means to trade-off power consumption and achievable rate. Channel estimation is challenging when using these architectures, however, since there is no direct access to the outputs of the different antenna elements in the array. The MIMO channel can only be observed through the analog combining network, which acts as a compression stage of the received signal. Most of prior work on channel estimation for hybrid architectures assumes a frequencyflat mmWave channel model. In this paper, we consider a frequency-selective mmWave channel and propose compressed-sensing-based strategies to estimate the channel in the frequency domain. We evaluate different algorithms and compute their complexity to expose trade-offs in complexity-overheadperformance as compared to those of previous approaches. This work was partially funded by the Agencia Estatal de Investigacin (Spain) and the European Regional Development Fund (ERDF) under project MYRADA (TEC2016-75103-C2-2-R), the U.S. Department of Transportation through the DataSupported Transportation Operations and Planning (D-STOP) Tier 1 University Transportation Center, by the Texas Department of Transportation under Project 0-6877 entitled Communications and Radar-Supported Transportation Operations and Planning (CAR-STOP) and by the National Science Foundation under Grant NSF-CCF-1319556 and NSF-CCF-1527079. ar X iv :1 70 4. 08 57 2v 1 [ cs .I T ] 2 7 A pr 2 01 7",
"title": ""
},
{
"docid": "854b473b0ee6d3cf4d1a34cd79a658e3",
"text": "Blockchain provides a new approach for participants to maintain reliable databases in untrusted networks without centralized authorities. However, there are still many serious problems in real blockchain systems in IP network such as the lack of support for multicast and the hierarchies of status. In this paper, we design a bitcoin-like blockchain system named BlockNDN over Named Data Networking and we implement and deploy it on our cluster as well. The resulting design solves those problems in IP network. It provides completely decentralized systems and simplifies system architecture. It also improves the weak-connectivity phenomenon and decreases the broadcast overhead.",
"title": ""
},
{
"docid": "244dd6e8f6c4d8d9180ee0509e14ce5b",
"text": "The adoption of hashtags in major social networks including Twitter, Facebook, and Google+ is a strong evidence of its importance in facilitating information diffusion and social chatting. To understand the factors (e.g., user interest, posting time and tweet content) that may affect hashtag annotation in Twitter and to capture the implicit relations between latent topics in tweets and their corresponding hashtags, we propose two PLSA-style topic models to model the hashtag annotation behavior in Twitter. Content-Pivoted Model (CPM) assumes that tweet content guides the generation of hashtags while Hashtag-Pivoted Model (HPM) assumes that hashtags guide the generation of tweet content. Both models jointly incorporate user, time, hashtag and tweet content in a probabilistic framework. The PLSA-style models also enable us to verify the impact of social factor on hashtag annotation by introducing social network regularization in the two models. We evaluate the proposed models using perplexity and demonstrate their effectiveness in two applications: retrospective hashtag annotation and related hashtag discovery. Our results show that HPM outperforms CPM by perplexity and both user and time are important factors that affect model performance. In addition, incorporating social network regularization does not improve model performance. Our experimental results also demonstrate the effectiveness of our models in both applications compared with baseline methods.",
"title": ""
},
{
"docid": "edf78a6b10d018a476e79dd34df1fef1",
"text": "STATEMENT OF THE PROBLEM\nResin bonding is essential for clinical longevity of indirect restorations. Especially in light of the increasing popularity of computer-aided design/computer-aided manufacturing-fabricated indirect restorations, there is a need to assess optimal bonding protocols for new ceramic/polymer materials and indirect composites.\n\n\nPURPOSE OF THE STUDY\nThe aim of this article was to review and assess the current scientific evidence on the resin bond to indirect composite and new ceramic/polymer materials.\n\n\nMATERIALS AND METHODS\nAn electronic PubMed database search was conducted from 1966 to September 2013 for in vitro studies pertaining the resin bond to indirect composite and new ceramic/polymer materials.\n\n\nRESULTS\nThe search revealed 198 titles. Full-text screening was carried out for 43 studies, yielding 18 relevant articles that complied with inclusion criteria. No relevant studies could be identified regarding new ceramic/polymer materials. Most common surface treatments are aluminum-oxide air-abrasion, silane treatment, and hydrofluoric acid-etching for indirect composite restoration. Self-adhesive cements achieve lower bond strengths in comparison with etch-and-rinse systems. Thermocycling has a greater impact on bonding behavior than water storage.\n\n\nCONCLUSIONS\nAir-particle abrasion and additional silane treatment should be applied to enhance the resin bond to laboratory-processed composites. However, there is an urgent need for in vitro studies that evaluate the bond strength to new ceramic/polymer materials.\n\n\nCLINICAL SIGNIFICANCE\nThis article reviews the available dental literature on resin bond of laboratory composites and gives scientifically based guidance for their successful placement. Furthermore, this review demonstrated that future research for new ceramic/polymer materials is required.",
"title": ""
},
{
"docid": "ccc3cf21c4c97f9c56915b4d1e804966",
"text": "In this paper we present a prototype of a Microwave Imaging (MI) system for breast cancer detection. Our system is based on low-cost off-the-shelf microwave components, custom-made antennas, and a small form-factor processing system with an embedded Field-Programmable Gate Array (FPGA) for accelerating the execution of the imaging algorithm. We show that our system can compete with a vector network analyzer in terms of accuracy, and it is more than 20x faster than a high-performance server at image reconstruction.",
"title": ""
}
] |
scidocsrr
|
82cd07fa7d69190a870cfb47d6cc9671
|
Ropossum: An Authoring Tool for Designing, Optimizing and Solving Cut the Rope Levels
|
[
{
"docid": "7d90646ca1b2b8f96fd808ef6f544b09",
"text": "Tanagra is a mixed-initiative tool for level design, allowing a human and a computer to work together to produce a level for a 2-D platformer. An underlying, reactive level generator ensures that all levels created in the environment are playable, and provides the ability for a human designer to rapidly view many different levels that meet their specifications. The human designer can iteratively refine the level by placing and moving level geometry, as well as through directly manipulating the pacing of the level. This paper presents the design environment, its underlying architecture that integrates reactive planning and numerical constraint solving, and an evaluation of Tanagra's expressive range.",
"title": ""
}
] |
[
{
"docid": "d72f7b99293770eed2764a76c5ee6651",
"text": "The successful motor rehabilitation of stroke, traumatic brain/spinal cord/sport injured patients requires a highly intensive and task-specific therapy based approach. Significant budget, time and logistic constraints limits a direct hand-to-hand therapy approach, so that intelligent assistive machines may offer a solution to promote motor recovery and obtain a better understanding of human motor control. This paper will address the development of a lower limb exoskeleton legs for force augmentation and active assistive walking training. The twin wearable legs are powered by pneumatic muscle actuators (pMAs), an experimental low mass high power to weight and volume actuation system. In addition, the pMA being pneumatic produces a more natural muscle like contact and as such can be considered a soft and biomimetic actuation system. This capacity to \"replicate\" the function of natural muscle and inherent safety is extremely important when working in close proximity to humans. The integration of the components sections and testing of the performance will also be considered to show how the structure and actuators can be combined to produce the various systems needed for a highly flexible/low weight clinically viable rehabilitation exoskeleton",
"title": ""
},
{
"docid": "30dffba83b24e835a083774aa91e6c59",
"text": "Wikipedia is one of the most popular sites on the Web, with millions of users relying on it to satisfy a broad range of information needs every day. Although it is crucial to understand what exactly these needs are in order to be able to meet them, little is currently known about why users visit Wikipedia. The goal of this paper is to fill this gap by combining a survey of Wikipedia readers with a log-based analysis of user activity. Based on an initial series of user surveys, we build a taxonomy of Wikipedia use cases along several dimensions, capturing users’ motivations to visit Wikipedia, the depth of knowledge they are seeking, and their knowledge of the topic of interest prior to visiting Wikipedia. Then, we quantify the prevalence of these use cases via a large-scale user survey conducted on live Wikipedia with almost 30,000 responses. Our analyses highlight the variety of factors driving users to Wikipedia, such as current events, media coverage of a topic, personal curiosity, work or school assignments, or boredom. Finally, we match survey responses to the respondents’ digital traces in Wikipedia’s server logs, enabling the discovery of behavioral patterns associated with specific use cases. For instance, we observe long and fast-paced page sequences across topics for users who are bored or exploring randomly, whereas those using Wikipedia for work or school spend more time on individual articles focused on topics such as science. Our findings advance our understanding of reader motivations and behavior on Wikipedia and can have implications for developers aiming to improve Wikipedia’s user experience, editors striving to cater to their readers’ needs, third-party services (such as search engines) providing access to Wikipedia content, and researchers aiming to build tools such as recommendation engines.",
"title": ""
},
{
"docid": "f086fef6b9026a67e73cd6f892aa1c37",
"text": "Shoulder girdle movement is critical for stabilizing and orientating the arm during daily activities. During robotic arm rehabilitation with stroke patients, the robot must assist movements of the shoulder girdle. Shoulder girdle movement is characterized by a highly nonlinear function of the humeral orientation, which is different for each person. Hence it is improper to use pre-calculated shoulder girdle movement. If an exoskeleton robot cannot mimic the patient's shoulder girdle movement well, the robot axes will not coincide with the patient's, which brings reduced range of motion (ROM) and discomfort to the patients. A number of exoskeleton robots have been developed to assist shoulder girdle movement. The shoulder mechanism of these robots, along with the advantages and disadvantages, are introduced. In this paper, a novel shoulder mechanism design of exoskeleton robot is proposed, which can fully mimic the patient's shoulder girdle movement in real time.",
"title": ""
},
{
"docid": "e9d987351816570b29d0144a6a7bd2ae",
"text": "One’s state of mind will influence her perception of the world and people within it. In this paper, we explore attitudes and behaviors toward online social media based on whether one is depressed or not. We conducted semistructured face-to-face interviews with 14 active Twitter users, half of whom were depressed and the other half non-depressed. Our results highlight key differences between the two groups in terms of perception towards online social media and behaviors within such systems. Non-depressed individuals perceived Twitter as an information consuming and sharing tool, while depressed individuals perceived it as a tool for social awareness and emotional interaction. We discuss several design implications for future social networks that could better accommodate users with depression and provide insights towards helping depressed users meet their needs through online social media.",
"title": ""
},
{
"docid": "2af4728858b2baa29b13b613f902f644",
"text": "Money has been said to change people's motivation (mainly for the better) and their behavior toward others (mainly for the worse). The results of nine experiments suggest that money brings about a self-sufficient orientation in which people prefer to be free of dependency and dependents. Reminders of money, relative to nonmoney reminders, led to reduced requests for help and reduced helpfulness toward others. Relative to participants primed with neutral concepts, participants primed with money preferred to play alone, work alone, and put more physical distance between themselves and a new acquaintance.",
"title": ""
},
{
"docid": "d29b7f2808cb7abb2a2e49462b9b3039",
"text": "A novel low profile circularly polarized antenna using Substrate Integrated Waveguide technology (SIW) for millimeter-wave (MMW) application is proposed. The antenna employs an X-shaped slot excited by a rectangular SIW and backed by circular cavity. The optimized design has an operating frequency range from 34.7 GHZ to 36.1 GHz with a bandwidth of 4.23%. The overall antenna realized gain is around 6.7 dB over the operating band. The simulated results using both HFSS and CSTMWS show a very good agreement between them.",
"title": ""
},
{
"docid": "f7d30db4b04b33676d386953aebf503c",
"text": "Microvascular free flap transfer currently represents one of the most popular methods for mandibularreconstruction. With the various free flap options nowavailable, there is a general consensus that no single kindof osseous or osteocutaneous flap can resolve the entire spectrum of mandibular defects. A suitable flap, therefore, should be selected according to the specific type of bone and soft tissue defect. We have developed an algorithm for mandibular reconstruction, in which the bony defect is termed as either “lateral” or “anterior” and the soft-tissue defect is classified as “none,” “skin or mucosal,” or “through-and-through.” For proper flap selection, the bony defect condition should be considered first, followed by the soft-tissue defect condition. When the bony defect is “lateral” and the soft tissue is not defective, the ilium is the best choice. When the bony defect is “lateral” and a small “skin or mucosal” soft-tissue defect is present, the fibula represents the optimal choice. When the bony defect is “lateral” and an extensive “skin or mucosal” or “through-and-through” soft-tissue defect exists, the scapula should be selected. When the bony defect is “anterior,” the fibula should always be selected. However, when an “anterior” bone defect also displays an “extensive” or “through-and-through” soft-tissue defect, the fibula should be usedwith other soft-tissue flaps. Flaps such as a forearm flap, anterior thigh flap, or rectus abdominis musculocutaneous flap are suitable, depending on the size of the soft-tissue defect.",
"title": ""
},
{
"docid": "a6e84af8b1ba1d120e69c10f76eb7e2a",
"text": "Auto-encoding generative adversarial networks (GANs) combine the standard GAN algorithm, which discriminates between real and model-generated data, with a reconstruction loss given by an auto-encoder. Such models aim to prevent mode collapse in the learned generative model by ensuring that it is grounded in all the available training data. In this paper, we develop a principle upon which autoencoders can be combined with generative adversarial networks by exploiting the hierarchical structure of the generative model. The underlying principle shows that variational inference can be used a basic tool for learning, but with the intractable likelihood replaced by a synthetic likelihood, and the unknown posterior distribution replaced by an implicit distribution; both synthetic likelihoods and implicit posterior distributions can be learned using discriminators. This allows us to develop a natural fusion of variational auto-encoders and generative adversarial networks, combining the best of both these methods. We describe a unified objective for optimization, discuss the constraints needed to guide learning, connect to the wide range of existing work, and use a battery of tests to systematically and quantitatively assess the performance of our method.",
"title": ""
},
{
"docid": "e56accce9d4ae911e85f5fd2b92a614a",
"text": "This paper introduces and documents a novel image database specifically built for the purpose of development and bench-marking of camera-based digital forensic techniques. More than 14,000 images of various indoor and outdoor scenes have been acquired under controlled and thus widely comparable conditions from altogether 73 digital cameras. The cameras were drawn from only 25 different models to ensure that device-specific and model-specific characteristics can be disentangled and studied separately, as validated with results in this paper. In addition, auxiliary images for the estimation of device-specific sensor noise pattern were collected for each camera. Another subset of images to study model-specific JPEG compression algorithms has been compiled for each model. The 'Dresden Image Database' will be made freely available for scientific purposes when this accompanying paper is presented. The database is intended to become a useful resource for researchers and forensic investigators. Using a standard database as a benchmark not only makes results more comparable and reproducible, but it is also more economical and avoids potential copyright and privacy issues that go along with self-sampled benchmark sets from public photo communities on the Internet.",
"title": ""
},
{
"docid": "3d34dc15fa11e723a52b21dc209a939f",
"text": "Valuable information can be hidden in images, however, few research discuss data mining on them. In this paper, we propose a general framework based on the decision tree for mining and processing image data. Pixel-wised image features were extracted and transformed into a database-like table which allows various data mining algorithms to make explorations on it. Each tuple of the transformed table has a feature descriptor formed by a set of features in conjunction with the target label of a particular pixel. With the label feature, we can adopt the decision tree induction to realize relationships between attributes and the target label from image pixels, and to construct a model for pixel-wised image processing according to a given training image dataset. Both experimental and theoretical analyses were performed in this study. Their results show that the proposed model can be very efficient and effective for image processing and image mining. It is anticipated that by using the proposed model, various existing data mining and image processing methods could be worked on together in different ways. Our model can also be used to create new image processing methodologies, refine existing image processing methods, or act as a powerful image filter.",
"title": ""
},
{
"docid": "86f273bc450b9a3b6acee0e8d183b3cd",
"text": "This paper aims to determine which is the best human action recognition method based on features extracted from RGB-D devices, such as the Microsoft Kinect. A review of all the papers that make reference to MSR Action3D, the most used dataset that includes depth information acquired from a RGB-D device, has been performed. We found that the validation method used by each work differs from the others. So, a direct comparison among works cannot be made. However, almost all the works present their results comparing them without taking into account this issue. Therefore, we present different rankings according to the methodology used for the validation in orden to clarify the existing confusion.",
"title": ""
},
{
"docid": "d9f2abb9735b449b622f94e5af346364",
"text": "Abstract—The goal of this paper is to present an addressing scheme that allows for assigning a unique IPv6 address to each node in the Internet of Things (IoT) network. This scheme guarantees uniqueness by extracting the clock skew of each communication device and converting it into an IPv6 address. Simulation analysis confirms that the presented scheme provides reductions in terms of energy consumption, communication overhead and response time as compared to four studied addressing schemes Strong DAD, LEADS, SIPA and CLOSA.",
"title": ""
},
{
"docid": "c7e3fc9562a02818bba80d250241511d",
"text": "Convolutional networks trained on large supervised dataset produce visual features which form the basis for the state-of-the-art in many computer-vision problems. Further improvements of these visual features will likely require even larger manually labeled data sets, which severely limits the pace at which progress can be made. In this paper, we explore the potential of leveraging massive, weaklylabeled image collections for learning good visual features. We train convolutional networks on a dataset of 100 million Flickr photos and captions, and show that these networks produce features that perform well in a range of vision problems. We also show that the networks appropriately capture word similarity, and learn correspondences between different languages.",
"title": ""
},
{
"docid": "6999337eabb058cd0c3057366359538b",
"text": "Annotating queries with entities is one of the core problem areas in query understanding. While seeming similar, the task of entity linking in queries is different from entity linking in documents and requires a methodological departure due to the inherent ambiguity of queries. We differentiate between two specific tasks, semantic mapping and interpretation finding, discuss current evaluation methodology, and propose refinements. We examine publicly available datasets for these tasks and introduce a new manually curated dataset for interpretation finding. To further deepen the understanding of task differences, we present a set of approaches for effectively addressing these tasks and report on experimental results.",
"title": ""
},
{
"docid": "723d1a0cd7a65d0ac164c2749d481884",
"text": "...................................................................................................................v 1 Purpose of the Research and Development Effort........................................1 2 Defining Interoperability .................................................................................3 3 Models of Interoperability ...............................................................................5 3.1 Levels of Information System Interoperability ............................................5 3.2 Organizational Interoperability Maturity Model ...........................................6 3.3 NATO C3 Technical Architecture (NC3TA) Reference Model for Interoperability...........................................................................................7 3.4 Levels of Conceptual Interoperability (LCIM) Model...................................8 3.5 Layers of Coalition Interoperability.............................................................9 3.6 The System of Systems Interoperability (SOSI) Model ..............................9 4 Approach........................................................................................................13 4.1 Method ....................................................................................................13 4.2 Collaborators...........................................................................................14 5 Results: Current State ...................................................................................15 5.1 Observations on the SOSI Model ............................................................15 6 DoD Interoperability Initiatives .....................................................................17 6.1 Commands, Directorates and Centers.....................................................17 6.2 Standards ................................................................................................20 6.3 Strategies ................................................................................................20 6.4 Demonstrations, Exercises and Testbeds ................................................21 6.5 Joint and Coalition Force Integration Initiatives........................................22 6.6 DoD-Sponsored Research.......................................................................25 6.7 Other Initiatives .......................................................................................26 7 Interview and Workshop Findings................................................................27 7.1 General Themes......................................................................................27",
"title": ""
},
{
"docid": "435a764aaf6bdd39a3d40771bc1f111e",
"text": "Wikipedia, the popular online encyclopedia, has in just six years grown from an adjunct to the now-defunct Nupedia to over 31 million pages and 429 million revisions in 256 languages and spawned sister projects such as Wiktionary and Wikisource. Available under the GNU Free Documentation License, it is an extraordinarily large corpus with broad scope and constant updates. Its articles are largely consistent in structure and organized into category hierarchies. However, the wiki method of collaborative editing creates challenges that must be addressed. Wikipedia’s accuracy is frequently questioned, and systemic bias means that quality and coverage are uneven, while even the variety of English dialects juxtaposed can sabotage the unwary with differences in semantics, diction and spelling. This paper examines Wikipedia from a research perspective, providing basic background knowledge and an understanding of its strengths and weaknesses. We also solve a technical challenge posed by the enormity of text (1.04TB for the English version) made available with a simple, easily-implemented dictionary compression algorithm that permits time-efficient random access to the data with a twenty-eight-fold reduction in size.",
"title": ""
},
{
"docid": "65dcbdfd4da022b9badf9e604edfa188",
"text": "Anomaly detection is the process of identifying unusual signals in a set of observations. This is a vital task in a variety of fields including cybersecurity and the battlefield. In many scenarios, observations are gathered from a set of distributed mobile or small form factor devices. Traditionally, the observations are sent to centralized servers where large-scale systems perform analytics on the data gathered from all devices. However, the compute capability of these small form factor devices is ever increasing with annual improvements to hardware. A new model, known as edge computing, takes advantage of this compute capability and performs local analytics on the distributed devices. This paper presents an approach to anomaly detection that uses autoencoders, specialized deep learning neural networks, deployed on each edge device, to perform analytics and identify anomalous observations in a distributed fashion. Simultaneously, the autoencoders learn from the new observations in order to identify new trends. A centralized server aggregates the updated models and distributes them back to the edge devices when a connection is available. This architecture reduces the bandwidth and connectivity requirements between the edge devices and the central server as only the autoencoder model and anomalous observations must be sent to the central servers, rather than all observation data.",
"title": ""
},
{
"docid": "521fc9a51ba2d6ef29e80acf45d27a7d",
"text": "Exergames are video games that use exertion-based interfaces to promote physical activity, fitness, and gross motor skill development. The purpose of this paper is to describe the development of an organizing framework based on principles of learning theory to classify and rank exergames according to embedded behavior change principles. Behavioral contingencies represent a key theory-based game design principle that can be objectively measured, evaluated, and manipulated to help explain and change the frequency and duration of game play. Case examples are presented that demonstrate how to code dimensions of behavior, consequences of behavior, and antecedents of behavior. Our framework may be used to identify game principles which, in the future, might be used to predict which games are most likely to promote adoption and maintenance of leisure time physical activity.",
"title": ""
},
{
"docid": "a8c72cc359a44574f48869babf258a23",
"text": "Near-infrared spectroscopy (NIRS) can be used to noninvasively measure changes in the concentrations of oxy- and deoxyhemoglobin in tissue. We have previously shown that while global changes can be reliably measured, focal changes can produce erroneous estimates of concentration changes (NeuroImage 13 (2001), 76). Here, we describe four separate sources for systematic error in the calculation of focal hemoglobin changes from NIRS data and use experimental methods and Monte Carlo simulations to examine the importance and mitigation methods of each. The sources of error are: (1). the absolute magnitudes and relative differences in pathlength factors as a function of wavelength, (2). the location and spatial extent of the absorption change with respect to the optical probe, (3). possible differences in the spatial distribution of hemoglobin species, and (4). the potential for simultaneous monitoring of multiple regions of activation. We found wavelength selection and optode placement to be important variables in minimizing such errors, and our findings indicate that appropriate experimental procedures could reduce each of these errors to a small fraction (<10%) of the observed concentration changes.",
"title": ""
},
{
"docid": "d1662ef8103d5513268a604253de122a",
"text": "Highly-interconnected networks of nonlinear analog neurons are shown to be extremely effective in computing. The networks can rapidly provide a collectively-computed solution (a digital output) to a problem on the basis of analog input information. The problems to be solved must be formulated in terms of desired optima, often subject to constraints. The general principles involved in constructing networks to solve specific problems are discussed. Results of computer simulations of a network designed to solve a difficult but well-defined optimization problem-the Traveling-Salesman Problem-are presented and used to illustrate the computational power of the networks. Good solutions to this problem are collectively computed within an elapsed time of only a few neural time constants. The effectiveness of the computation involves both the nonlinear analog response of the neurons and the large connectivity among them. Dedicated networks of biological or microelectronic neurons could provide the computational capabilities described for a wide class of problems having combinatorial complexity. The power and speed naturally displayed by such collective networks may contribute to the effectiveness of biological information processing.",
"title": ""
}
] |
scidocsrr
|
b386e2e1d540031e3e05dd50cbd89dc4
|
Subpixel Photometric Stereo
|
[
{
"docid": "7e74cc21787c1e21fd64a38f1376c6a9",
"text": "The Bidirectional Reflectance Distribution Function (BRDF) describes the appearance of a material by its interaction with light at a surface point. A variety of analytical models have been proposed to represent BRDFs. However, analysis of these models has been scarce due to the lack of high-resolution measured data. In this work we evaluate several well-known analytical models in terms of their ability to fit measured BRDFs. We use an existing high-resolution data set of a hundred isotropic materials and compute the best approximation for each analytical model. Furthermore, we have built a new setup for efficient acquisition of anisotropic BRDFs, which allows us to acquire anisotropic materials at high resolution. We have measured four samples of anisotropic materials (brushed aluminum, velvet, and two satins). Based on the numerical errors, function plots, and rendered images we provide insights into the performance of the various models. We conclude that for most isotropic materials physically-based analytic reflectance models can represent their appearance quite well. We illustrate the important difference between the two common ways of defining the specular lobe: around the mirror direction and with respect to the half-vector. Our evaluation shows that the latter gives a more accurate shape for the reflection lobe. Our analysis of anisotropic materials indicates current parametric reflectance models cannot represent their appearances faithfully in many cases. We show that using a sampled microfacet distribution computed from measurements improves the fit and qualitatively reproduces the measurements.",
"title": ""
},
{
"docid": "84a187b1e5331c4e7eb349c8b1358f14",
"text": "We describe the maximum-likelihood parameter estimation problem and how the ExpectationMaximization (EM) algorithm can be used for its solution. We first describe the abstract form of the EM algorithm as it is often given in the literature. We then develop the EM parameter estimation procedure for two applications: 1) finding the parameters of a mixture of Gaussian densities, and 2) finding the parameters of a hidden Markov model (HMM) (i.e., the Baum-Welch algorithm) for both discrete and Gaussian mixture observation models. We derive the update equations in fairly explicit detail but we do not prove any convergence properties. We try to emphasize intuition rather than mathematical rigor.",
"title": ""
}
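The E-step/M-step cycle for a mixture of Gaussians summarized above can be written in a few lines. The sketch below is a minimal one-dimensional illustration; the two-component setup, the initialization choices, and the fixed iteration count are assumptions for illustration, not details taken from the passage.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50, seed=0):
    """Minimal EM for a 1-D Gaussian mixture: E-step responsibilities, M-step updates."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=k, replace=False)   # initial means drawn from the data
    var = np.full(k, x.var())                   # shared initial variance
    pi = np.full(k, 1.0 / k)                    # uniform mixing weights
    for _ in range(iters):
        # E-step: responsibility of component j for point i
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances from the responsibilities
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# usage: two well-separated clusters
data = np.concatenate([np.random.normal(-2, 0.5, 200), np.random.normal(3, 1.0, 300)])
print(em_gmm_1d(data))
```

Each iteration provably does not decrease the data log-likelihood, which is the basic guarantee the EM framework provides.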
] |
[
{
"docid": "3d4cfb2d3ba1e70e5dd03060f5d5f663",
"text": "BACKGROUND\nAlzheimer's disease (AD) causes considerable distress in caregivers who are continuously required to deal with requests from patients. Coping strategies play a fundamental role in modulating the psychologic impact of the disease, although their role is still debated. The present study aims to evaluate the burden and anxiety experienced by caregivers, the effectiveness of adopted coping strategies, and their relationships with burden and anxiety.\n\n\nMETHODS\nEighty-six caregivers received the Caregiver Burden Inventory (CBI) and the State-Trait Anxiety Inventory (STAI Y-1 and Y-2). The coping strategies were assessed by means of the Coping Inventory for Stressful Situations (CISS), according to the model proposed by Endler and Parker in 1990.\n\n\nRESULTS\nThe CBI scores (overall and single sections) were extremely high and correlated with dementia severity. Women, as well as older caregivers, showed higher scores. The trait anxiety (STAI-Y-2) correlated with the CBI overall score. The CISS showed that caregivers mainly adopted task-focused strategies. Women mainly adopted emotion-focused strategies and this style was related to a higher level of distress.\n\n\nCONCLUSION\nAD is associated with high distress among caregivers. The burden strongly correlates with dementia severity and is higher in women and in elderly subjects. Chronic anxiety affects caregivers who mainly rely on emotion-oriented coping strategies. The findings suggest providing support to families of patients with AD through tailored strategies aimed to reshape the dysfunctional coping styles.",
"title": ""
},
{
"docid": "323f7fd7269d020ebc60af1917e90cb4",
"text": "This paper describes the design concept, operating principle, analytical design, fabrication of a functional prototype, and experimental performance verification of a novel wobble motor with a XY compliant mechanism driven by shape memory alloy (SMA) wires. With the aim of realizing an SMA based motor which could generate bidirectional high-torque motion, the proposed motor is devised with wobble motor driving principle widely utilized for speed reducers. As a key mechanism which functions to guide wobbling motion, a planar XY compliant mechanism is designed and applied to the motor. Since the mechanism has monolithic flat structure with the planar mirror symmetric configuration, cyclic expansion and contraction of the SMA wires could be reliably converted into high-torque rotary motion. For systematic design of the motor, a characterization of electro-thermomechanical behavior of the SMA wire is experimentally carried out, and the design parametric analysis is conducted to determine parametric values of the XY compliant mechanism. The designed motor is fabricated as a functional prototype to experimentally investigate its operational feasibility and working performances. The observed experimental results obviously demonstrate the unique driving characteristics and practical applicability of the proposed motor.",
"title": ""
},
{
"docid": "c033365f254fa5bdc53dd179ded3fbe9",
"text": "Semantic segmentation has been a long standing challenging task in computer vision. It aims at assigning a label to each image pixel and needs a significant number of pixel-level annotated data, which is often unavailable. To address this lack of annotations, in this paper, we leverage, on one hand, a massive amount of available unlabeled or weakly labeled data, and on the other hand, non-real images created through Generative Adversarial Networks. In particular, we propose a semi-supervised framework – based on Generative Adversarial Networks (GANs) – which consists of a generator network to provide extra training examples to a multi-class classifier, acting as discriminator in the GAN framework, that assigns sample a label y from the K possible classes or marks it as a fake sample (extra class). The underlying idea is that adding large fake visual data forces real samples to be close in the feature space, which, in turn, improves multiclass pixel classification. To ensure a higher quality of generated images by GANs with consequently improved pixel classification, we extend the above framework by adding weakly annotated data, i.e., we provide class level information to the generator. We test our approaches on several challenging benchmarking visual datasets, i.e. PASCAL, SiftFLow, Stanford and CamVid, achieving competitive performance compared to state-of-the-art semantic segmentation methods.",
"title": ""
},
{
"docid": "c04991d45762b4a3fcc247f18eca34c3",
"text": "We present a system for activity recognition from passive RFID data using a deep convolutional neural network. We directly feed the RFID data into a deep convolutional neural network for activity recognition instead of selecting features and using a cascade structure that first detects object use from RFID data followed by predicting the activity. Because our system treats activity recognition as a multi-class classification problem, it is scalable for applications with large number of activity classes. We tested our system using RFID data collected in a trauma room, including 14 hours of RFID data from 16 actual trauma resuscitations. Our system outperformed existing systems developed for activity recognition and achieved similar performance with process-phase detection as systems that require wearable sensors or manually-generated input. We also analyzed the strengths and limitations of our current deep learning architecture for activity recognition from RFID data.",
"title": ""
},
{
"docid": "03ec1981458bdc35ac3eb1dfaf1e5b82",
"text": "BACKGROUND\nWe aimed to assess medical students' empathy and its associations with gender, stage of medical school, quality of life and burnout.\n\n\nMETHOD\nA cross-sectional, multi-centric (22 medical schools) study that employed online, validated, self-reported questionnaires on empathy (Interpersonal Reactivity Index), quality of life (The World Health Organization Quality of Life Assessment) and burnout (the Maslach Burnout Inventory) in a random sample of medical students.\n\n\nRESULTS\nOut of a total of 1,650 randomly selected students, 1,350 (81.8%) completed all of the questionnaires. Female students exhibited higher dispositional empathic concern and experienced more personal distress than their male counterparts (p<0.05; d ≥ 0.5). There were minor differences in the empathic dispositions of students in different stages of their medical training (p<0.05; f<0.25). Female students had slightly lower scores for physical and psychological quality of life than male students (p<0.05; d<0.5). Female students scored higher on emotional exhaustion and lower on depersonalization than male students (p<0.001; d<0.5). Students in their final stage of medical school had slightly higher scores for emotional exhaustion, depersonalization and personal accomplishment (p<0.05; f<0.25). Gender (β = 0.27; p<0.001) and perspective taking (β = 0.30; p<0.001) were significant predictors of empathic concern scores. Depersonalization was associated with lower empathic concern (β = -0.18) and perspective taking (β = -0.14) (p<0.001). Personal accomplishment was associated with higher perspective taking (β = 0.21; p<0.001) and lower personal distress (β = -0.26; p<0.001) scores.\n\n\nCONCLUSIONS\nFemale students had higher empathic concern and personal distress dispositions. The differences in the empathy scores of students in different stages of medical school were small. Among all of the studied variables, personal accomplishment held the most important association with decreasing personal distress and was also a predicting variable for perspective taking.",
"title": ""
},
{
"docid": "7002ccec7f0959ec6faf81f924aa23e5",
"text": "Relighting of human images has various applications in image synthesis. For relighting, we must infer albedo, shape, and illumination from a human portrait. Previous techniques rely on human faces for this inference, based on spherical harmonics (SH) lighting. However, because they often ignore light occlusion, inferred shapes are biased and relit images are unnaturally bright particularly at hollowed regions such as armpits, crotches, or garment wrinkles. This paper introduces the first attempt to infer light occlusion in the SH formulation directly. Based on supervised learning using convolutional neural networks (CNNs), we infer not only an albedo map, illumination but also a light transport map that encodes occlusion as nine SH coefficients per pixel. The main difficulty in this inference is the lack of training datasets compared to unlimited variations of human portraits. Surprisingly, geometric information including occlusion can be inferred plausibly even with a small dataset of synthesized human figures, by carefully preparing the dataset so that the CNNs can exploit the data coherency. Our method accomplishes more realistic relighting than the occlusion-ignored formulation.",
"title": ""
},
{
"docid": "f6388d37976740ebb789e7d5f6c072f1",
"text": "With the advent of image and video representation of visual scenes in digital computer, subsequent necessity of vision-substitution representation of a given image is felt. The medium for non-visual representation of an image is chosen to be sound due to well developed auditory sensing ability of human beings and wide availability of cheap audio hardware. Visionary information of an image can be conveyed to blind and partially sighted persons through auditory representation of the image within some of the known limitations of human hearing system. The research regarding image sonification has mostly evolved through last three decades. The paper also discusses in brief about the reverse mapping, termed as sound visualization. This survey approaches to summarize the methodologies and issues of the implemented and unimplemented experimental systems developed for subjective sonification of image scenes and let researchers accumulate knowledge about the previous direction of researches in this domain.",
"title": ""
},
{
"docid": "1c5f53fe8d663047a3a8240742ba47e4",
"text": "Spatial data mining is the discovery of interesting relationships and characteristics that may exist implicitly in spatial databases. In this paper, we explore whether clustering methods have a role to play in spatial data mining. To this end, we develop a new clustering method called CLAHANS which is based on randomized search. We also develop two spatial data mining algorithms that use CLAHANS. Our analysis and experiments show that with the assistance of CLAHANS, these two algorithms are very effective and can lead to discoveries that are difficult to find with current spatial data mining algorithms. Furthermore, experiments conducted to compare the performance of CLAHANS with that of existing clustering methods show that CLAHANS is the most efficient.",
"title": ""
},
{
"docid": "7dc70719e59a1db0884782c6db2c7081",
"text": "River cities require a management approach based on resilience to floods rather than on resistance. Resisting floods by means of levees, dams, and channelization neglects inherent uncertainties arising from human–nature couplings and fails to address the extreme events that are expected to increase with climate change, and is thereby not a reliable approach to long-term flood safety. By applying resilience theory to address system persistence through changes, I develop a theory on “urban resilience to floods” as an alternative framework for urban flood hazard management. Urban resilience to floods is defined as a city’s capacity to tolerate flooding and to reorganize should physical damage and socioeconomic disruption occur, so as to prevent deaths and injuries and maintain current socioeconomic identity. It derives from living with periodic floods as learning opportunities to prepare the city for extreme ones. The theory of urban resilience to floods challenges the conventional wisdom that cities cannot live without flood control, which in effect erodes resilience. To operationalize the theory for planning practice, a surrogate measure—the percent floodable area—is developed for assessing urban resilience to floods. To enable natural floodplain functions to build urban resilience to floods, flood adaptation is advocated in order to replace flood control for mitigating flood hazards.",
"title": ""
},
{
"docid": "73edaa7319dcf225c081f29146bbb385",
"text": "Sign language is a specific area of human gesture communication and a full-edged complex language that is used by various deaf communities. In Bangladesh, there are many deaf and dumb people. It becomes very difficult to communicate with them for the people who are unable to understand the Sign Language. In this case, an interpreter can help a lot. So it is desirable to make computer to understand the Bangladeshi sign language that can serve as an interpreter. In this paper, a Computer Vision-based Bangladeshi Sign Language Recognition System (BdSL) has been proposed. In this system, separate PCA (Principal Component Analysis) is used for Bengali Vowels and Bengali Numbers recognition. The system is tested for 6 Bengali Vowels and 10 Bengali Numbers.",
"title": ""
},
{
"docid": "5d15ba47aaa29f388328824fa592addc",
"text": "Breast cancer continues to be a significant public health problem in the world. The diagnosing mammography method is the most effective technology for early detection of the breast cancer. However, in some cases, it is difficult for radiologists to detect the typical diagnostic signs, such as masses and microcalcifications on the mammograms. This paper describes a new method for mammographic image enhancement and denoising based on wavelet transform and homomorphic filtering. The mammograms are acquired from the Faculty of Medicine of the University of Akdeniz and the University of Istanbul in Turkey. Firstly wavelet transform of the mammograms is obtained and the approximation coefficients are filtered by homomorphic filter. Then the detail coefficients of the wavelet associated with noise and edges are modeled by Gaussian and Laplacian variables, respectively. The considered coefficients are compressed and enhanced using these variables with a shrinkage function. Finally using a proposed adaptive thresholding the fine details of the mammograms are retained and the noise is suppressed. The preliminary results of our work indicate that this method provides much more visibility for the suspicious regions.",
"title": ""
},
{
"docid": "e808fa6ebe5f38b7672fad04c5f43a3a",
"text": "A series of GeoVoCamps, run at least twice a year in locations in the U.S., have focused on ontology design patterns as an approach to inform metadata and data models, and on applications in the GeoSciences. In this note, we will redraw the brief history of the series as well as rationales for the particular approach which was chosen, and report on the ongoing uptake of the approach.",
"title": ""
},
{
"docid": "45c680911d97163839dda69d374399b7",
"text": "The process of identifying radio transmitters by examining their unique transient characteristics at the beginning of transmission is called RF fingerprinting. The security of wireless networks can be enhanced by challenging a user to prove its identity if the fingerprint of a network device is unidentified or deemed to be a threat. This paper addresses the problem of identifying an individual node in a wireless network by means of its RF fingerprint. A complete identification system is presented, including data acquisition, transient detection, RF fingerprint extraction, and classification subsystems. The classification performance of the proposed system has been evaluated from experimental data. It is demonstrated that the RF fingerprinting technique can be used as an additional tool to enhance the security of wireless networks.",
"title": ""
},
{
"docid": "9dacfccbbaa75947e4f4c09f6d54ed9e",
"text": "In New Light commercial vehicle development, Engine is mounted at rear to have low Engine Noise and Vibration inside cabin. At the same time there is a need of high load carrying rear suspension to suit market requirement. In this paper complete design of leaf spring rear suspension for rear engine is discussed.This is nontraditional type of suspension with leaf spring application for rear engine vehicle. Traditionally, for light commercial vehicles, Engine is placed at front/middle giving huge space for traditional rear axle with differential inside. Design of rear suspension is verified and validated successfully for durability and handling by doing finite element analysis and testing.",
"title": ""
},
{
"docid": "8af844944f6edee4c271d73a552dc073",
"text": "Many important email-related tasks, such as email classification or search, highly rely on building quality document representations (e.g., bag-of-words or key phrases) to assist matching and understanding. Despite prior success on representing textual messages, creating quality user representations from emails was overlooked. In this paper, we propose to represent users using embeddings that are trained to reflect the email communication network. Our experiments on Enron dataset suggest that the resulting embeddings capture the semantic distance between users. To assess the quality of embeddings in a real-world application, we carry out auto-foldering task where the lexical representation of an email is enriched with user embedding features. Our results show that folder prediction accuracy is improved when embedding features are present across multiple settings.",
"title": ""
},
{
"docid": "2a6c7baa220e0c4267bebe4ea03a241b",
"text": "Android app repackaging threatens the health of application markets, as repackaged apps, besides stealing revenue for honest developers, are also a source of malware distribution. Techniques that rely on visual similarity of Android apps recently emerged as a way to tackle the repackaging detection problem, as code-based detection techniques often fail in terms of efficiency, and effectiveness when obfuscation is applied [19,21]. Among such techniques, the resource-based repackaging detection approach that compares sets of files included in apks has arguably the best performance [20,17,10]. Yet, this approach has not been previously validated on a dataset of repackaged apps. In this paper we report on our evaluation of the approach, and present substantial improvements to it. Our experiments show that the stateof-art tools applying this technique rely on too restrictive thresholds. Indeed, we demonstrate that a very low proportion of identical resource files in two apps is a reliable evidence for repackaging. Furthermore, we have shown that the Overlap similarity score performs better than the Jaccard similarity coefficient used in previous works. By applying machine learning techniques, we give evidence that considering separately the included resource file types significantly improves the detection accuracy of the method. Experimenting with a balanced dataset of more than 2700 app pairs, we show that with our enhancements it is possible to achieve the F-measure of 0.9919.",
"title": ""
},
{
"docid": "9296a908902929efb31e030e9bc771f7",
"text": "Available online 15 October 2011 This research examines and measures the outcomes of electronic customer relationship management (e-CRM) system implementation in the Thai banking industry from customers' perspectives. Because most e-CRM implementations cannot be directly seen or recognised by customers, a literature review and interviews with experts in the Thai banking industry were used to develop a new construct called ‘customer-based service attributes’ to measure e-CRM outcomes from customers' perspectives. A full-scale field survey of 684 customers of Thai commercial banks was then conducted. A service attribute model and a model that combined relationship quality and outcome were constructed, and their validity and reliability was confirmed. Analysis of the results by using structural equation modelling (SEM) illustrated that e-CRM implementation has a statistically significant positive relationship with customer-based service attributes and with the quality and outcome of customer–bank relationships as well as an indirect effect on relationship quality and outcome through customer-based service attributes. © 2011 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "7e1438d99cf737335fbdc871ecaa1486",
"text": "Based on LDA(Latent Dirichlet Allocation) topic model, a generative model for multi-document summarization, namely Titled-LDA that simultaneously models the content of documents and the titles of document is proposed. This generative model represents each document with a mixture of topics, and extends these approaches to title modeling by allowing the mixture weights for topics to be determined by the titles of the document. In the mixing stage, the algorithm can learn the weight in an adaptive asymmetric learning way based on two kinds of information entropies. In this way, the final model incorporated the title information and the content information appropriately, which helped the performance of summarization. The experiments showed that the proposed algorithm achieved better performance compared the other state-of-the-art algorithms on DUC2002 corpus.",
"title": ""
},
{
"docid": "5f4d10a1a180f6af3d35ca117cd4ee19",
"text": "This work addresses the task of instance-aware semantic segmentation. Our key motivation is to design a simple method with a new modelling-paradigm, which therefore has a different trade-off between advantages and disadvantages compared to known approaches. Our approach, we term InstanceCut, represents the problem by two output modalities: (i) an instance-agnostic semantic segmentation and (ii) all instance-boundaries. The former is computed from a standard convolutional neural network for semantic segmentation, and the latter is derived from a new instance-aware edge detection model. To reason globally about the optimal partitioning of an image into instances, we combine these two modalities into a novel MultiCut formulation. We evaluate our approach on the challenging CityScapes dataset. Despite the conceptual simplicity of our approach, we achieve the best result among all published methods, and perform particularly well for rare object classes.",
"title": ""
}
] |
scidocsrr
|
9ae3c226bbce6301188afb3b0065f06e
|
The Blockchain Consensus Layer and BFT
|
[
{
"docid": "9f6e103a331ab52b303a12779d0d5ef6",
"text": "Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin’s blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem. In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network.",
"title": ""
},
{
"docid": "8574612823cccbb5f8bcc80532dae74e",
"text": "The decentralized cryptocurrency Bitcoin has experienced great success but also encountered many challenges. One of the challenges has been the long confirmation time and low transaction throughput. Another challenge is the lack of incentives at certain steps of the protocol, raising concerns for transaction withholding, selfish mining, etc. To address these challenges, we propose Solidus, a decentralized cryptocurrency based on permissionless Byzantine consensus. A core technique in Solidus is to use proof of work for leader election to adapt the Practical Byzantine Fault Tolerance (PBFT) protocol to a permissionless setting. We also design Solidus to be incentive compatible and to mitigate selfish mining. Solidus improves on Bitcoin in confirmation time, and provides safety and liveness assuming Byzantine players and the largest coalition of rational players collectively control less than one-third of the computation power.",
"title": ""
}
] |
[
{
"docid": "a388a599ba23b20d865e7f3c986124ff",
"text": "EFFECTS OF MILD DEHYDRATION ON THERMOREGULATION, PERFORMANCE AND MENTAL FATIGUE DURING AN ICE HOCKEY SCRIMMAGE Mark Edward Linseman Advisor: University of Guelph, 2011 Professor L.L. Spriet This study investigated the effects of progressive dehydration by 1.5-2.0% body mass (BM) (NF) on core temperature (Tc), heart rate (HR), on-ice performance, and mental fatigue during a 70-min scrimmage, compared to maintaining BM with a carbohydrate-electrolyte solution (CES). Compared to CES, Tc was significantly higher throughout the scrimmage in NF. Players in NF had reduced mean skating speed and time at high effort between 30-50 min of the scrimmage. Players in NF committed more puck turnovers and completed a lower percentage of passes in the last 20 min of play. Post-scrimmage shuttle skating time was higher in NF. Hockey fatigue questionnaire total score and Profile of Mood States fatigue score was higher in NF. The results indicate that mild dehydration compared to maintaining BM with a CES resulted in increased Tc, decreased skating and puck handling performance, and increased mental fatigue during an ice hockey scrimmage.",
"title": ""
},
{
"docid": "fe529aab49b0c985e40bab3ab0e0582c",
"text": "A very deep convolutional neural network (CNN) has recently achieved great success for image super-resolution (SR) and offered hierarchical features as well. However, most deep CNN based SR models do not make full use of the hierarchical features from the original low-resolution (LR) images, thereby achieving relatively-low performance. In this paper, we propose a novel residual dense network (RDN) to address this problem in image SR. We fully exploit the hierarchical features from all the convolutional layers. Specifically, we propose residual dense block (RDB) to extract abundant local features via dense connected convolutional layers. RDB further allows direct connections from the state of preceding RDB to all the layers of current RDB, leading to a contiguous memory (CM) mechanism. Local feature fusion in RDB is then used to adaptively learn more effective features from preceding and current local features and stabilizes the training of wider network. After fully obtaining dense local features, we use global feature fusion to jointly and adaptively learn global hierarchical features in a holistic way. Experiments on benchmark datasets with different degradation models show that our RDN achieves favorable performance against state-of-the-art methods.",
"title": ""
},
{
"docid": "bb6192715b3f890acae147962594288b",
"text": "A continuous-time common-mode feedback (CMFB) network consisting of two unity-gain buffers and two passive resistors is introduced in this paper. According to the comparison with a previous implementation, the common-mode control structure presented shows an improved linearity performance as well as a higher immunity to device mismatches. The CMFB circuit has been included in the design of a fully differential voltage buffer, based on a fully differential difference amplifier. Simulated results, obtained in a 0.35-mum standard CMOS technology, are provided in order to show the performance of the proposed approach.",
"title": ""
},
{
"docid": "c81e420ce3c6d215cdd0da0213cda47d",
"text": "We show inapproximability results concerning minimization of nondeterministic finite automata (nfa’s) as well as regular expressions relative to given nfa’s, regular expressions or deterministic finite automata (dfa’s). We show that it is impossible to efficiently minimize a given nfa or regular expression with n states, transitions, resp. symbols within the factor o(n), unless P = PSPACE. Our inapproximability results for a given dfa with n states are based on cryptographic assumptions and we show that any efficient algorithm will have an approximation factor of at least n poly(log n) . Our setup also allows us to analyze the minimum consistent dfa problem. Classification: Automata and Formal Languages, Computational Complexity, Approximability",
"title": ""
},
{
"docid": "fc522482dbbcdeaa06e3af9a2f82b377",
"text": "Background/Objectives:As rates of obesity have increased throughout much of the world, so too have bias and prejudice toward people with higher body weight (that is, weight bias). Despite considerable evidence of weight bias in the United States, little work has examined its extent and antecedents across different nations. The present study conducted a multinational examination of weight bias in four Western countries with comparable prevalence rates of adult overweight and obesity.Methods:Using comprehensive self-report measures with 2866 individuals in Canada, the United States, Iceland and Australia, the authors assessed (1) levels of explicit weight bias (using the Fat Phobia Scale and the Universal Measure of Bias) and multiple sociodemographic predictors (for example, sex, age, race/ethnicity and educational attainment) of weight-biased attitudes and (2) the extent to which weight-related variables, including participants’ own body weight, personal experiences with weight bias and causal attributions of obesity, play a role in expressions of weight bias in different countries.Results:The extent of weight bias was consistent across countries, and in each nation attributions of behavioral causes of obesity predicted stronger weight bias, as did beliefs that obesity is attributable to lack of willpower and personal responsibility. In addition, across all countries the magnitude of weight bias was stronger among men and among individuals without family or friends who had experienced this form of bias.Conclusions:These findings offer new insights and important implications regarding sociocultural factors that may fuel weight bias across different cultural contexts, and for targets of stigma-reduction efforts in different countries.",
"title": ""
},
{
"docid": "cf8adc9a1b4b95d9b3d02355d523baf6",
"text": "Autoerotic asphyxia is presented in literature review form. Etiology, prevalence statistics, and a profile of AEA participants is provided. The author identifies autoerotic asphyxia as a form of sub-intentional suicide. Warning signs of AEA are presented. Possible sources of misinformation are given. Prevention and education recommendations for administrators, faculty, and parents are provided. A suggested reading list is provided.",
"title": ""
},
{
"docid": "9a29bcb5ca21c33140a199763ab4bc5f",
"text": "The Stadtpilot project aims at autonomous driving on Braunschweig's inner city ring road. For this purpose, an autonomous vehicle called “Leonie” has been developed. In October 2010, after two years of research, “Leonie's” abilities were presented in a public demonstration. This vehicle is one of the first worldwide to show the ability of driving autonomously in real urban traffic scenarios. This paper describes the legal issues and the homologation process for driving autonomously in public traffic in Braunschweig, Germany. It also dwells on the Safety Concept, the system architecture and current research activities.",
"title": ""
},
{
"docid": "eae04aa2942bfd3752fb596f645e2c2e",
"text": "PURPOSE\nHigh fasting blood glucose (FBG) can lead to chronic diseases such as diabetes mellitus, cardiovascular and kidney diseases. Consuming probiotics or synbiotics may improve FBG. A systematic review and meta-analysis of controlled trials was conducted to clarify the effect of probiotic and synbiotic consumption on FBG levels.\n\n\nMETHODS\nPubMed, Scopus, Cochrane Library, and Cumulative Index to Nursing and Allied Health Literature databases were searched for relevant studies based on eligibility criteria. Randomized or non-randomized controlled trials which investigated the efficacy of probiotics or synbiotics on the FBG of adults were included. Studies were excluded if they were review articles and study protocols, or if the supplement dosage was not clearly mentioned.\n\n\nRESULTS\nA total of fourteen studies (eighteen trials) were included in the analysis. Random-effects meta-analyses were conducted for the mean difference in FBG. Overall reduction in FBG observed from consumption of probiotics and synbiotics was borderline statistically significant (-0.18 mmol/L 95 % CI -0.37, 0.00; p = 0.05). Neither probiotic nor synbiotic subgroup analysis revealed a significant reduction in FBG. The result of subgroup analysis for baseline FBG level ≥7 mmol/L showed a reduction in FBG of 0.68 mmol/L (-1.07, -0.29; ρ < 0.01), while trials with multiple species of probiotics showed a more pronounced reduction of 0.31 mmol/L (-0.58, -0.03; ρ = 0.03) compared to single species trials.\n\n\nCONCLUSION\nThis meta-analysis suggests that probiotic and synbiotic supplementation may be beneficial in lowering FBG in adults with high baseline FBG (≥7 mmol/L) and that multispecies probiotics may have more impact on FBG than single species.",
"title": ""
},
{
"docid": "ea3b6ec7e56d8924c24e001383c330c5",
"text": "Leveraging class semantic descriptions and examples of known objects, zero-shot learning makes it possible to train a recognition model for an object class whose examples are not available. In this paper, we propose a novel zero-shot learning model that takes advantage of clustering structures in the semantic embedding space. The key idea is to impose the structural constraint that semantic representations must be predictive of the locations of their corresponding visual exemplars. To this end, this reduces to training multiple kernel-based regressors from semantic representation-exemplar pairs from labeled data of the seen object categories. Despite its simplicity, our approach significantly outperforms existing zero-shot learning methods on standard benchmark datasets, including the ImageNet dataset with more than 20,000 unseen categories.",
"title": ""
},
{
"docid": "191f8f2e1bee4f21319a82f7d4acd59f",
"text": "Money laundering is a critical step in the cyber crime process which is experiencing some changes as hackers and their criminal colleagues continually alter and optimize payment mechanisms. Conducting quantitative research on underground laundering activity poses an inherent challenge: Bad guys and their banks don’t share information on criminal pursuits. However, by analyzing forums, we have identified two growth areas in money laundering:",
"title": ""
},
{
"docid": "41cf61f3ab9d8fcad961c5d4c8578946",
"text": "A neural network can learn color constancy, defined here as the ability to estimate the chromaticity of a scene's overall illumination. We describe a multilayer neural network that is able to recover the illumination chromaticity given only an image of the scene. The network is previously trained by being presented with a set of images of scenes and the chromaticities of the corresponding scene illuminants. Experiments with real images show that the network performs better than previous color constancy methods. In particular, the performance is better for images with a relatively small number of distinct colors. The method has application to machine vision problems such as object recognition, where illumination-independent color descriptors are required, and in digital photography, where uncontrolled scene illumination can create an unwanted color cast in a photograph.",
"title": ""
},
{
"docid": "1a42fad92b263286f2360eff7990e5c8",
"text": "Schottky-barrier diodes (SBD's) fabricated in CMOS without process modification are shown to be suitable for active THz imaging applications. Using a compact passive-pixel array architecture, a fully-integrated 280-GHz 4 × 4 imager is demonstrated. At 1-MHz input modulation frequency, the measured peak responsivity is 5.1 kV/W with ±20% variation among the pixels. The measured minimum NEP is 29 pW/Hz1/2. Additionally, an 860-GHz SBD detector is implemented by reducing the number of unit cells in the diode, and by exploiting the efficiency improvement of patch antenna with frequency. The measured NEP is 42 pW/Hz1/2 at 1-MHz modulation frequency. This is competitive to the best reported performance of MOSFET-based pixel measured without attaching an external silicon lens (66 pW/Hz1/2 at 1 THz and 40 pW/Hz1/2 at 650 GHz). Given that incorporating the 280-GHz detector into an array increased the NEP by ~ 20%, the 860-GHz imager array should also have the similar NEP as that for an individual detector. The circuits were utilized in a setup that requires neither mirrors nor lenses to form THz images. These suggest that an affordable and portable fully-integrated CMOS THz imager is possible.",
"title": ""
},
{
"docid": "ffe6edef11daef1db0c4aac77bed7a23",
"text": "MPI is a well-established technology that is used widely in high-performance computing environment. However, setting up an MPI cluster can be challenging and time-consuming. This paper tackles this challenge by using modern containerization technology, which is Docker, and container orchestration technology, which is Docker Swarm mode, to automate the MPI cluster setup and deployment. We created a ready-to-use solution for developing and deploying MPI programs in a cluster of Docker containers running on multiple machines, orchestrated with Docker Swarm mode, to perform high computation tasks. We explain the considerations when creating Docker image that will be instantiated as MPI nodes, and we describe the steps needed to set up a fully connected MPI cluster as Docker containers running in a Docker Swarm mode. Our goal is to give the rationale behind our solution so that others can adapt to different system requirements. All pre-built Docker images, source code, documentation, and screencasts are publicly available.",
"title": ""
},
{
"docid": "b889b863e0344361be7d8eeafca872c5",
"text": "This paper presents a singular-value-based semi-fragile watermarking scheme for image content authentication. The proposed scheme generates secure watermark by performing a logical operation on content-dependent watermark generated by a singular-value-based sequence and contentindependent watermark generated by a private-key-based sequence. It next employs the adaptive quantization method to embed secure watermark in approximation subband of each 4 4 block to generate the watermarked image. The watermark extraction process then extracts watermark using the parity of quantization results from the probe image. The authentication process starts with regenerating secure watermark following the same process. It then constructs error maps to compute five authentication measures and performs a three-level process to authenticate image content and localize tampered areas. Extensive experimental results show that the proposed scheme outperforms five peer schemes and its two variant systems and is capable of identifying intentional tampering, incidental modification, and localizing tampered regions under mild to severe content-preserving modifications. 2015 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "5de07054546347e150aeabe675234966",
"text": "Smart farming is seen to be the future of agriculture as it produces higher quality of crops by making farms more intelligent in sensing its controlling parameters. Analyzing massive amount of data can be done by accessing and connecting various devices with the help of Internet of Things (IoT). However, it is not enough to have an Internet support and self-updating readings from the sensors but also to have a self-sustainable agricultural production with the use of analytics for the data to be useful. This study developed a smart hydroponics system that is used in automating the growing process of the crops using exact inference in Bayesian Network (BN). Sensors and actuators are installed in order to monitor and control the physical events such as light intensity, pH, electrical conductivity, water temperature, and relative humidity. The sensor values gathered were used to build the Bayesian Network in order to infer the optimum value for each parameter. A web interface is developed wherein the user can monitor and control the farm remotely via the Internet. Results have shown that the fluctuations in terms of the sensor values were minimized in the automatic control using BN as compared to the manual control. The yielded crop on the automatic control was 66.67% higher than the manual control which implies that the use of exact inference in BN aids in producing high-quality crops. In the future, the system can use higher data analytics and longer data gathering to improve the accuracy of inference.",
"title": ""
},
{
"docid": "990fb61d1135b05f88ae02eb71a6983f",
"text": "Previous efforts in recommendation of candidates for talent search followed the general pattern of receiving an initial search criteria and generating a set of candidates utilizing a pre-trained model. Traditionally, the generated recommendations are final, that is, the list of potential candidates is not modified unless the user explicitly changes his/her search criteria. In this paper, we are proposing a candidate recommendation model which takes into account the immediate feedback of the user, and updates the candidate recommendations at each step. This setting also allows for very uninformative initial search queries, since we pinpoint the user's intent due to the feedback during the search session. To achieve our goal, we employ an intent clustering method based on topic modeling which separates the candidate space into meaningful, possibly overlapping, subsets (which we call intent clusters) for each position. On top of the candidate segments, we apply a multi-armed bandit approach to choose which intent cluster is more appropriate for the current session. We also present an online learning scheme which updates the intent clusters within the session, due to user feedback, to achieve further personalization. Our offline experiments as well as the results from the online deployment of our solution demonstrate the benefits of our proposed methodology.",
"title": ""
},
{
"docid": "8b7f03d6bcea796e0d5b0154e28dc632",
"text": "This study intends to investigate factors affecting business employees’ behavioral intentions to use the elearning system. Combining the innovation diffusion theory (IDT) with the technology acceptance model (TAM), the present study proposes an extended technology acceptance model. The proposed model was tested with data collected from 552 business employees using the e-learning system in Taiwan. The results show that five perceptions of innovation characteristics significantly influenced employees’ e-learning system behavioral intention. The effects of the compatibility, complexity, relative advantage, and trialability on the perceived usefulness are significant. In addition, the effective of the complexity, relative advantage, trialability, and complexity on the perceived ease of use have a significant influence. Empirical results also provide strong support for the integrative approach. The findings suggest an extended model of TAM for the acceptance of the e-learning system, which can help organization decision makers in planning, evaluating and executing the use of e-learning systems.",
"title": ""
},
{
"docid": "c3d25395aff2ec6039b21bd2415bcf1f",
"text": "A growing trend for information technology is to not just react to changes, but anticipate them as much as possible. This paradigm made modern solutions, such as recommendation systems, a ubiquitous presence in today’s digital transactions. Anticipatory networking extends the idea to communication technologies by studying patterns and periodicity in human behavior and network dynamics to optimize network performance. This survey collects and analyzes recent papers leveraging context information to forecast the evolution of network conditions and, in turn, to improve network performance. In particular, we identify the main prediction and optimization tools adopted in this body of work and link them with objectives and constraints of the typical applications and scenarios. Finally, we consider open challenges and research directions to make anticipatory networking part of next generation networks.",
"title": ""
},
{
"docid": "ebc342aaa0bba197e8e7f944c2ba7a23",
"text": "During the last decades huge amounts of data have been collected in clinical databases representing patients' health states (e.g., as laboratory results, treatment plans, medical reports). Hence, digital information available for patient-oriented decision making has increased drastically but is often scattered across different sites. As as solution, personal health record systems (PHRS) are meant to centralize an individual's health data and to allow access for the owner as well as for authorized health professionals. Yet, expert-oriented language, complex interrelations of medical facts and information overload in general pose major obstacles for patients to understand their own record and to draw adequate conclusions. In this context, recommender systems may supply patients with additional laymen-friendly information helping to better comprehend their health status as represented by their record. However, such systems must be adapted to cope with the specific requirements in the health domain in order to deliver highly relevant information for patients. They are referred to as health recommender systems (HRS). In this article we give an introduction to health recommender systems and explain why they are a useful enhancement to PHR solutions. Basic concepts and scenarios are discussed and a first implementation is presented. In addition, we outline an evaluation approach for such a system, which is supported by medical experts. The construction of a test collection for case-related recommendations is described. Finally, challenges and open issues are discussed.",
"title": ""
},
{
"docid": "05b785b92bd1fa66fa71c51065cff16f",
"text": "In this paper, we illustrate the possibility of developing strategies to carry out matrix computations on heterogeneous platforms which achieve native GPU performance on very large data sizes up to the capacity of the CPU memory. More specifically, we present a dense matrix multiplication strategy on a heterogeneous platform, specifically tailored for the case when the input is too large to fit on the device memory, which achieves near peak GPU performance. Our strategy involves the development of CUDA stream based software pipelines that effectively overlap PCIe data transfers with kernel executions. As a result, we are able to achieve over 1 and 2 TFLOPS performance on a single node using 1 and 2 GPUs respectively.",
"title": ""
}
] |
scidocsrr
|
e47fd0b898b48d517afb853c8106bdb7
|
Detection of NPK nutrients of soil using Fiber Optic Sensor
|
[
{
"docid": "87343436b0ea16f9683360fd84506331",
"text": "Accurate measurements of soil macronutrients (i.e., nitrogen, phosphorus, and potassium) are needed for efficient agricultural production, including site-specific crop management (SSCM), where fertilizer nutrient application rates are adjusted spatially based on local requirements. Rapid, non-destructive quantification of soil properties, including nutrient levels, has been possible with optical diffuse reflectance sensing. Another approach, electrochemical sensing based on ion-selective electrodes or ion-selective field effect transistors, has been recognized as useful in real-time analysis because of its simplicity, portability, rapid response, and ability to directly measure the analyte with a wide range of sensitivity. Current sensor developments and related technologies that are applicable to the measurement of soil macronutrients for SSCM are comprehensively reviewed. Examples of optical and electrochemical sensors applied in soil analyses are given, while advantages and obstacles to their adoption are discussed. It is proposed that on-the-go vehicle-based sensing systems have potential for efficiently and rapidly characterizing variability of soil macronutrients within a field.",
"title": ""
}
] |
[
{
"docid": "e694d8429af455984a0ebde5ae10794a",
"text": "Huntington’s disease (HD) is a heredodegenerative neurological disorder with chorea and other hyperkinetic movement disorders being part of the disease spectrum. These along with cognitive and neurobehavioral manifestations contribute significantly to patient’s disability. Several classes of drugs have been used to treat the various symptoms of HD. These include typical and atypical neuroleptics along with dopamine depletors for treatment of chorea and antidepressants, GABA agonists, antiepileptic medications, cholinesterase inhibitors, antiglutamatergic drugs and botulinum toxin for treatment of other manifestations. Tetrabenazine (TBZ), a dopamine depleting medication was recently approved by the US FDA for treatment of chorea in HD. The purpose of this article is to briefly review information regarding HD and current treatments for chorea and specifically focus on TBZ and review the literature related to its use in HD chorea.",
"title": ""
},
{
"docid": "315b8a8c3942c05ef8c25d8f6b4f91b2",
"text": "This paper proposes a reinforcing method that refines the output layers of existing Recurrent Neural Network (RNN) language models. We refer to our proposed method as Input-to-Output Gate (IOG)1. IOG has an extremely simple structure, and thus, can be easily combined with any RNN language models. Our experiments on the Penn Treebank and WikiText-2 datasets demonstrate that IOG consistently boosts the performance of several different types of current topline RNN language models.",
"title": ""
},
{
"docid": "73940354496e29cbdeea127fb6a9da6b",
"text": "Available online 3 November 2011",
"title": ""
},
{
"docid": "6727eb68064f73c0dc97c15b8c6e0bf9",
"text": "With a focus on presenting information at the right time, the ubicomp community can benefit greatly from learning the most salient human measures of cognitive load. Cognitive load can be used as a metric to determine when or whether to interrupt a user. In this paper, we collected data from multiple sensors and compared their ability to assess cognitive load. Our focus is on visual perception and cognitive speed-focused tasks that leverage cognitive abilities common in ubicomp applications. We found that across all participants, the electrocardiogram median absolute deviation and median heat flux measurements were the most accurate at distinguishing between low and high levels of cognitive load, providing a classification accuracy of over 80% when used together. Our contribution is a real-time, objective, and generalizable method for assessing cognitive load in cognitive tasks commonly found in ubicomp systems and situations of divided attention.",
"title": ""
},
{
"docid": "7d5bcd40c0d5ac30b51c3747e41a4fa6",
"text": "We consider the following fundamental communication problem - there is data that is distributed among servers, and the servers want to compute the intersection of their data sets, e.g., the common records in a relational database. They want to do this with as little communication and as few messages (rounds) as possible. They are willing to use randomization, and fail with a tiny probability. Given a protocol for computing the intersection, it can also be used to compute the exact Jaccard similarity, the rarity, the number of distinct elements, and joins between databases. Computing the intersection is at least as hard as the set disjointness problem, which asks whether the intersection is empty. Formally, in the two-server setting, the players hold subsets S, T ⊆ [n]. In many realistic scenarios, the sizes of S and T are significantly smaller than n, so we impose the constraint that |S|, |T| ≤ k. We study the minimum number of bits the parties need to communicate in order to compute the intersection set S ∩ T, given a certain number r of messages that are allowed to be exchanged. While O(k log (n/k)) bits is achieved trivially and deterministically with a single message, we ask what is possible with more than one message and with randomization. We give a smooth communication/round tradeoff which shows that with O(log* k) rounds, O(k) bits of communication is possible, which improves upon the trivial protocol by an order of magnitude. This is in contrast to other basic problems such as computing the union or symmetric difference, for which Ω(k log(n/k)) bits of communication is required for any number of rounds. For two players, known lower bounds for the easier problem of set disjointness imply our algorithms are optimal up to constant factors in communication and number of rounds. We extend our protocols to $m$-player protocols, obtaining an optimal O(mk) bits of communication with a similarly small number of rounds.",
"title": ""
},
{
"docid": "772d1e7115f6b8570e07b7f9ade527a9",
"text": "We consider the control of interacting subsystems whose dynamics and constraints are decoupled, but whose state vectors are coupled non-separably in a single cost function of a finite horizon optimal control problem. For a given cost structure, we generate distributed optimal control problems for each subsystem and establish that a distributed receding horizon control implementation is stabilizing to a neighborhood of the objective state. The implementation requires synchronous updates and the exchange of the most recent optimal control trajectory between coupled subsystems prior to each update. The key requirements for stability are that each subsystem not deviate too far from the previous open-loop state trajectory, and that the receding horizon updates happen sufficiently fast. The venue of multi-vehicle formation stabilization is used to demonstrate the distributed implementation.",
"title": ""
},
{
"docid": "3529e60736ef94de53f5f8e604509fc7",
"text": "Surgical workflow recognition has numerous potential medical applications, such as the automatic indexing of surgical video databases and the optimization of real-time operating room scheduling, among others. As a result, surgical phase recognition has been studied in the context of several kinds of surgeries, such as cataract, neurological, and laparoscopic surgeries. In the literature, two types of features are typically used to perform this task: visual features and tool usage signals. However, the used visual features are mostly handcrafted. Furthermore, the tool usage signals are usually collected via a manual annotation process or by using additional equipment. In this paper, we propose a novel method for phase recognition that uses a convolutional neural network (CNN) to automatically learn features from cholecystectomy videos and that relies uniquely on visual information. In previous studies, it has been shown that the tool usage signals can provide valuable information in performing the phase recognition task. Thus, we present a novel CNN architecture, called EndoNet, that is designed to carry out the phase recognition and tool presence detection tasks in a multi-task manner. To the best of our knowledge, this is the first work proposing to use a CNN for multiple recognition tasks on laparoscopic videos. Experimental comparisons to other methods show that EndoNet yields state-of-the-art results for both tasks.",
"title": ""
},
{
"docid": "56a35139eefd215fe83811281e4e2279",
"text": "Querying graph data is a fundamental problem that witnesses an increasing interest especially for massive graph databases which come as a promising alternative to relational databases for big data modeling. In this paper, we study the problem of subgraph isomorphism search which consists to enumerate the embedding of a query graph in a data graph. The most known solutions of this NPcomplete problem are backtracking-based and result in a high computational cost when we deal with massive graph databases. We address this problem and its challenges via graph compression with modular decomposition. In our approach, subgraph isomorphism search is performed on compressed graphs without decompressing them yielding substantial reduction of the search space and consequently a significant saving in processing time as well as in storage space for the graphs. We evaluated our algorithms on nine real-word datasets. The experimental results show that our approach is efficient and scalable. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "fc1adf6f1efdb168bbc5febd29aa09c1",
"text": "Biomedical named entity recognition (NER) is a fundamental task in text mining of medical documents and has many applications. Deep learning based approaches to this task have been gaining increasing attention in recent years as their parameters can be learned endto-end without the need for hand-engineered features. However, these approaches rely on high-quality labeled data, which is expensive to obtain. To address this issue, we investigate how to use unlabeled text data to improve the performance of NER models. Specifically, we train a bidirectional language model (BiLM) on unlabeled data and transfer its weights to “pretrain” an NER model with the same architecture as the BiLM, which results in a better parameter initialization of the NER model. We evaluate our approach on four benchmark datasets for biomedical NER and show that it leads to a substantial improvement in the F1 scores compared with the state-of-the-art approaches. We also show that BiLM weight transfer leads to a faster model training and the pretrained model requires fewer training examples to achieve a particular F1 score.",
"title": ""
},
{
"docid": "e50c07aa28cafffc43dd7eb29892f10f",
"text": "Recent approaches to the Automatic Postediting (APE) of Machine Translation (MT) have shown that best results are obtained by neural multi-source models that correct the raw MT output by also considering information from the corresponding source sentence. To this aim, we present for the first time a neural multi-source APE model based on the Transformer architecture. Moreover, we employ sequence-level loss functions in order to avoid exposure bias during training and to be consistent with the automatic evaluation metrics used for the task. These are the main features of our submissions to the WMT 2018 APE shared task, where we participated both in the PBSMT subtask (i.e. the correction of MT outputs from a phrase-based system) and in the NMT subtask (i.e. the correction of neural outputs). In the first subtask, our system improves over the baseline up to -5.3 TER and +8.23 BLEU points ranking second out of 11 submitted runs. In the second one, characterized by the higher quality of the initial translations, we report lower but statistically significant gains (up to -0.38 TER and +0.8 BLEU), ranking first out of 10 submissions.",
"title": ""
},
{
"docid": "2d8f76cef3d0c11441bbc8f5487588cb",
"text": "Abstract. It seems natural to assume that the more It seems natural to assume that the more closely robots come to resemble people, the more likely they are to elicit the kinds of responses people direct toward each other. However, subtle flaws in appearance and movement only seem eerie in very humanlike robots. This uncanny phenomenon may be symptomatic of entities that elicit a model of a human other but do not measure up to it. If so, a very humanlike robot may provide the best means of finding out what kinds of behavior are perceived as human, since deviations from a human other are more obvious. In pursuing this line of inquiry, it is essential to identify the mechanisms involved in evaluations of human likeness. One hypothesis is that an uncanny robot elicits an innate fear of death and culturally-supported defenses for coping with death’s inevitability. An experiment, which borrows from the methods of terror management research, was performed to test this hypothesis. Across all questions subjects who were exposed to a still image of an uncanny humanlike robot had on average a heightened preference for worldview supporters and a diminished preference for worldview threats relative to the control group.",
"title": ""
},
{
"docid": "bb0364b6c8e0f8a9c41b30b03c308841",
"text": "BACKGROUND\nFinding duplicates is an important phase of systematic review. However, no consensus regarding the methods to find duplicates has been provided. This study aims to describe a pragmatic strategy of combining auto- and hand-searching duplicates in systematic review and to evaluate the prevalence and characteristics of duplicates.\n\n\nMETHODS AND FINDINGS\nLiteratures regarding portal vein thrombosis (PVT) and Budd-Chiari syndrome (BCS) were searched by the PubMed, EMBASE, and Cochrane library databases. Duplicates included one index paper and one or more redundant papers. They were divided into type-I (duplicates among different databases) and type-II (duplicate publications in different journals/issues) duplicates. For type-I duplicates, reference items were further compared between index and redundant papers. Of 10936 papers regarding PVT, 2399 and 1307 were identified as auto- and hand-searched duplicates, respectively. The prevalence of auto- and hand-searched redundant papers was 11.0% (1201/10936) and 6.1% (665/10936), respectively. They included 3431 type-I and 275 type-II duplicates. Of 11403 papers regarding BCS, 3275 and 2064 were identified as auto- and hand-searched duplicates, respectively. The prevalence of auto- and hand-searched redundant papers was 14.4% (1640/11403) and 9.1% (1039/11403), respectively. They included 5053 type-I and 286 type-II duplicates. Most of type-I duplicates were identified by auto-searching method (69.5%, 2385/3431 in PVT literatures; 64.6%, 3263/5053 in BCS literatures). Nearly all type-II duplicates were identified by hand-searching method (94.9%, 261/275 in PVT literatures; 95.8%, 274/286 in BCS literatures). Compared with those identified by auto-searching method, type-I duplicates identified by hand-searching method had a significantly higher prevalence of wrong items (47/2385 versus 498/1046, p<0.0001 in PVT literatures; 30/3263 versus 778/1790, p<0.0001 in BCS literatures). Most of wrong items originated from EMBASE database.\n\n\nCONCLUSION\nGiven the inadequacy of a single strategy of auto-searching method, a combined strategy of auto- and hand-searching methods should be employed to find duplicates in systematic review.",
"title": ""
},
{
"docid": "b454900556cc392edd39b888de746298",
"text": "As developers of a highly multilingual named entity recognition (NER) system, we face an evaluation resource bottleneck problem: we need evaluation data in many languages, the annotation should not be too time-consuming, and the evaluation results across languages should be comparable. We solve the problem by automatically annotating the English version of a multi-parallel corpus and by projecting the annotations into all the other language versions. For the translation of English entities, we use a phrase-based statistical machine translation system as well as a lookup of known names from a multilingual name database. For the projection, we incrementally apply different methods: perfect string matching, perfect consonant signature matching and edit distance similarity. The resulting annotated parallel corpus will be made available for reuse.",
"title": ""
},
{
"docid": "4cb540a7d4e95db595d7cc17b3616d00",
"text": "The design tradeoffs of the class-D amplifier (CDA) for driving piezoelectric (PZ) speakers are presented, including efficiency, linearity, and electromagnetic interference. An implementation is proposed to achieve high efficiency in the CDA architecture for PZ speakers to extend battery life in mobile devices. A self-oscillating closed-loop architecture is used to obviate the need for a carrier signal generator to achieve low power consumption. The use of stacked-cascode CMOS transistors at the H-bridge output stage provides low-input capacitance to allow high-switching frequency to improve linearity with high efficiency. Moreover, the CDA monolithic implementation achieves 18 VPP output voltage swing in a low-voltage CMOS technology without requiring expensive high-voltage semiconductor devices. The prototype experimental results achieved a minimum THD + N of 0.025%, and a maximum efficiency of 96%. Compared to available CDA for PZ speakers, the proposed CDA achieved higher linearity, lower power consumption, and higher efficiency.",
"title": ""
},
{
"docid": "8f01d2e70ec5da655418a6864e94b932",
"text": "Cloud storage services allow users to outsource their data to cloud servers to save on local data storage costs. However, unlike using local storage devices, users don't physically own the data stored on cloud servers and can't be certain about the integrity of the cloud-stored data. Many public verification schemes have been proposed to allow a third-party auditor to verify the integrity of outsourced data. However, most of these schemes assume that the auditors are honest and reliable, so are vulnerable to malicious auditors. Moreover, in most of these schemes, an external adversary could modify the outsourced data and tamper with the interaction messages between the cloud server and the auditor, thus invalidating the outsourced data integrity verification. This article proposes an efficient and secure public verification of data integrity scheme that protects against external adversaries and malicious auditors. The proposed scheme adopts a random masking technique to protect against external adversaries, and requires users to audit auditors' behaviors to prevent malicious auditors from fabricating verification results. It uses Bitcoin to construct unbiased challenge messages to thwart collusion between malicious auditors and cloud servers. A performance analysis demonstrates that the proposed scheme is efficient in terms of the user's auditing overhead.",
"title": ""
},
{
"docid": "6b2ef609c474b015b21e903e953efdb9",
"text": "This paper reviews applications of the lattice-Boltzmann method to simulations of particle-fluid suspensions. We first summarize the available simulation methods for colloidal suspensions together with some of the important applications of these methods, and then describe results from lattice-gas and latticeBoltzmann simulations in more detail. The remainder of the paper is an update of previously published work, (69, 70) taking into account recent research by ourselves and other groups. We describe a lattice-Boltzmann model that can take proper account of density fluctuations in the fluid, which may be important in describing the short-time dynamics of colloidal particles. We then derive macrodynamical equations for a collision operator with separate shear and bulk viscosities, via the usual multi-time-scale expansion. A careful examination of the second-order equations shows that inclusion of an external force, such as a pressure gradient, requires terms that depend on the eigenvalues of the collision operator. Alternatively, the momentum density must be redefined to include a contribution from the external force. Next, we summarize recent innovations and give a few numerical examples to illustrate critical issues. Finally, we derive the equations for a lattice-Boltzmann model that includes transverse and longitudinal fluctuations in momentum. The model leads to a discrete version of the Green–Kubo relations for the shear and bulk viscosity, which agree with the viscosities obtained from the macro-dynamical analysis. We believe that inclusion of longitudinal fluctuations will improve the equipartition of energy in lattice-Boltzmann simulations of colloidal suspensions.",
"title": ""
},
{
"docid": "226d474f5d0278f81bcaf7203706486b",
"text": "Human pose estimation is a well-known computer vision problem that receives intensive research interest. The reason for such interest is the wide range of applications that the successful estimation of human pose offers. Articulated pose estimation includes real time acquisition, analysis, processing and understanding of high dimensional visual information. Ensemble learning methods operating on hand-engineered features have been commonly used for addressing this task. Deep learning exploits representation learning methods to learn multiple levels of representations from raw input data, alleviating the need to hand-crafted features. Deep convolutional neural networks are achieving the state-of-the-art in visual object recognition, localization, detection. In this paper, the pose estimation task is formulated as an offset joint regression problem. The 3D joints positions are accurately detected from a single raw depth image using a deep convolutional neural networks model. The presented method relies on the utilization of the state-of-the-art data generation pipeline to generate large, realistic, and highly varied synthetic set of training images. Analysis and experimental results demonstrate the generalization performance and the real time successful application of the proposed method.",
"title": ""
},
{
"docid": "6b693af5ed67feab686a9a92e4329c94",
"text": "Physicians and nurses express their judgments and observations towards a patient’s health status in clinical narratives. Thus, their judgments are explicitly or implicitly included in patient records. To get impressions on the current health situation of a patient or on changes in the status, analysis and retrieval of this subjective content is crucial. In this paper, we approach this question as sentiment analysis problem and analyze the feasibility of assessing these judgments in clinical text by means of general sentiment analysis methods. Specifically, the word usage in clinical narratives and in a general text corpus is compared. The linguistic characteristics of judgments in clinical narratives are collected. Besides, the requirements for sentiment analysis and retrieval from clinical narratives are derived.",
"title": ""
},
{
"docid": "8255146164ff42f8755d8e74fd24cfa1",
"text": "We present a named-entity recognition (NER) system for parallel multilingual text. Our system handles three languages (i.e., English, French, and Spanish) and is tailored to the biomedical domain. For each language, we design a supervised knowledge-based CRF model with rich biomedical and general domain information. We use the sentence alignment of the parallel corpora, the word alignment generated by the GIZA++[8] tool, and Wikipedia-based word alignment in order to transfer system predictions made by individual language models to the remaining parallel languages. We re-train each individual language system using the transferred predictions and generate a final enriched NER model for each language. The enriched system performs better than the initial system based on the predictions transferred from the other language systems. Each language model benefits from the external knowledge extracted from biomedical and general domain resources.",
"title": ""
},
{
"docid": "750e7bd1b23da324a0a51d0b589acbfb",
"text": "Various powerful people detection methods exist. Surprisingly, most approaches rely on static image features only despite the obvious potential of motion information for people detection. This paper systematically evaluates different features and classifiers in a sliding-window framework. First, our experiments indicate that incorporating motion information improves detection performance significantly. Second, the combination of multiple and complementary feature types can also help improve performance. And third, the choice of the classifier-feature combination and several implementation details are crucial to reach best performance. In contrast to many recent papers experimental results are reported for four different datasets rather than using a single one. Three of them are taken from the literature allowing for direct comparison. The fourth dataset is newly recorded using an onboard camera driving through urban environment. Consequently this dataset is more realistic and more challenging than any currently available dataset.",
"title": ""
}
] |
scidocsrr
|
d6f6ef29d39924604fb09596eb6aeb37
|
An extension of the technology acceptance model in an ERP implementation environment
|
[
{
"docid": "a4197ab8a70142ac331599c506996bc9",
"text": "This paper presents the findings of two studies that replicate previous work by Fred Davis on the subject of perceived usefulness, ease of use, and usage of information technology. The two studies focus on evaluating the psychometric properties of the ease of use and usefulness scales, while examining the relationship between ease of use, usefulness, and system usage. Study 1 provides a strong assessment of the convergent validity of the two scales by examining heterogeneous user groups dealing with heterogeneous implementations of messaging technology. In addition, because one might expect users to share similar perspectives about voice and electronic mail, the study also represents a strong test of discriminant validity. In this study a total of 118 respondents from 10 different organizations were surveyed for their attitudes toward two messaging technologies: voice and electronic mail. Study 2 complements the approach taken in Study 1 by focusing on the ability to demonstrate discriminant validity. Three popular software applications (WordPerfect, Lotus 1-2-3, and Harvard Graphics) were examined based on the expectation that they would all be rated highly on both scales. In this study a total of 73 users rated the three packages in terms of ease of use and usefulness. The results of the studies demonstrate reliable and valid scales for measurement of perceived ease of use and usefulness. In addition, the paper tests the relationships between ease of use, usefulness, and usage using structural equation modelling. The results of this model are consistent with previous research for Study 1, suggesting that usefulness is an important determinant of system use. For Study 2 the results are somewhat mixed, but indicate the importance of both ease of use and usefulness. Differences in conditions of usage are explored to explain these findings.",
"title": ""
},
{
"docid": "bd13f54cd08fe2626fe8de4edce49197",
"text": "Ease of use and usefulness are believed to be fundamental in determining the acceptance and use of various, corporate ITs. These beliefs, however, may not explain the user's behavior toward newly emerging ITs, such as the World-Wide-Web (WWW). In this study, we introduce playfulness as a new factor that re ̄ects the user's intrinsic belief in WWW acceptance. Using it as an intrinsic motivation factor, we extend and empirically validate the Technology Acceptance Model (TAM) for the WWW context. # 2001 Elsevier Science B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "37b97f66230fb292f585d0413af48986",
"text": "In this paper, we notice that sparse and low-rank structures arise in the context of many collaborative filtering applications where the underlying graphs have block-diagonal adjacency matrices. Therefore, we propose a novel Sparse and Low-Rank Linear Method (Lor SLIM) to capture such structures and apply this model to improve the accuracy of the Top-N recommendation. Precisely, a sparse and low-rank aggregation coefficient matrix W is learned from Lor SLIM by solving an l1-norm and nuclear norm regularized optimization problem. We also develop an efficient alternating augmented Lagrangian method (ADMM) to solve the optimization problem. A comprehensive set of experiments is conducted to evaluate the performance of Lor SLIM. The experimental results demonstrate the superior recommendation quality of the proposed algorithm in comparison with current state-of-the-art methods.",
"title": ""
},
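The LorSLIM abstract learns an aggregation matrix W under both an l1-norm (sparsity) and a nuclear-norm (low-rank) penalty, solved with an ADMM-style method. The snippet below only illustrates the two proximal operators such a solver would alternate between — entrywise soft-thresholding and singular-value thresholding — rather than the full Lor SLIM optimisation; the thresholds and the demo matrix are arbitrary assumptions.

```python
import numpy as np

def soft_threshold(X: np.ndarray, tau: float) -> np.ndarray:
    """Proximal operator of tau * ||X||_1 (entrywise shrinkage)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X: np.ndarray, tau: float) -> np.ndarray:
    """Proximal operator of tau * ||X||_* (shrink the singular values)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
# A noisy, roughly block-diagonal item-item matrix as a stand-in for W.
W = np.kron(np.eye(3), np.ones((4, 4))) + 0.1 * rng.standard_normal((12, 12))

W_sparse = soft_threshold(W, 0.2)   # encourages sparsity
W_lowrank = svd_threshold(W, 1.0)   # encourages low rank
print(np.count_nonzero(W_sparse), np.linalg.matrix_rank(W_lowrank))
```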
{
"docid": "25f0871346c370db4b26aecd08a9d75e",
"text": "This review presents a comprehensive discussion of the key technical issues in woody biomass pretreatment: barriers to efficient cellulose saccharification, pretreatment energy consumption, in particular energy consumed for wood-size reduction, and criteria to evaluate the performance of a pretreatment. A post-chemical pretreatment size-reduction approach is proposed to significantly reduce mechanical energy consumption. Because the ultimate goal of biofuel production is net energy output, a concept of pretreatment energy efficiency (kg/MJ) based on the total sugar recovery (kg/kg wood) divided by the energy consumption in pretreatment (MJ/kg wood) is defined. It is then used to evaluate the performances of three of the most promising pretreatment technologies: steam explosion, organosolv, and sulfite pretreatment to overcome lignocelluloses recalcitrance (SPORL) for softwood pretreatment. The present study found that SPORL is the most efficient process and produced highest sugar yield. Other important issues, such as the effects of lignin on substrate saccharification and the effects of pretreatment on high-value lignin utilization in woody biomass pretreatment, are also discussed.",
"title": ""
},
{
"docid": "aeed0f9595c9b40bb03c95d4624dd21c",
"text": "Most research in primary and secondary computing education has focused on understanding learners within formal classroom communities, leaving aside the growing number of promising informal online programming communities where young learners contribute, comment, and collaborate on programs. In this paper, we examined trends in computational participation in Scratch, an online community with over 1 million registered youth designers primarily 11-18 years of age. Drawing on a random sample of 5,000 youth programmers and their activities over three months in early 2012, we examined the quantity of programming concepts used in projects in relation to level of participation, gender, and account age of Scratch programmers. Latent class analyses revealed four unique groups of programmers. While there was no significant link between level of online participation, ranging from low to high, and level of programming sophistication, the exception was a small group of highly engaged users who were most likely to use more complex programming concepts. Groups who only used few of the more sophisticated programming concepts, such as Booleans, variables and operators, were identified as Scratch users new to the site and girls. In the discussion we address the challenges of analyzing young learners' programming in informal online communities and opportunities for designing more equitable computational participation.",
"title": ""
},
{
"docid": "9f21af3bc0955dcd9a05898f943f54ad",
"text": "Compressed sensing is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for reconstruction. In this paper we introduce a new theory for distributed compressed sensing (DCS) that enables new distributed coding algorithms for multi-signal ensembles that exploit both intraand inter-signal correlation structures. The DCS theory rests on a new concept that we term the joint sparsity of a signal ensemble. We study in detail three simple models for jointly sparse signals, propose algorithms for joint recovery of multiple signals from incoherent projections, and characterize theoretically and empirically the number of measurements per sensor required for accurate reconstruction. We establish a parallel with the Slepian-Wolf theorem from information theory and establish upper and lower bounds on the measurement rates required for encoding jointly sparse signals. In two of our three models, the results are asymptotically best-possible, meaning that both the upper and lower bounds match the performance of our practical algorithms. Moreover, simulations indicate that the asymptotics take effect with just a moderate number of signals. In some sense DCS is a framework for distributed compression of sources with memory, which has remained a challenging problem for some time. DCS is immediately applicable to a range of problems in sensor networks and arrays.",
"title": ""
},
{
"docid": "981634bc9b96eba12fd07e8960d02c2d",
"text": "This paper presents the existing legal frameworks, professional guidelines and other documents related to the conditions and extent of the disclosure of genetic information by physicians to at-risk family members. Although the duty of a physician regarding disclosure of genetic information to a patient’s relatives has only been addressed by few legal cases, courts have found such a duty under some circumstances. Generally, disclosure should not be permitted without the patient’s consent. Yet, due to the nature of genetic information, exceptions are foreseen, where treatment and prevention are available. This duty to warn a patient’s relative is also supported by some professional and policy organizations that have addressed the issue. Practice guidelines with a communication and intervention plan are emerging, providing physicians with tools that allow them to assist patients in their communication with relatives without jeopardizing their professional liability. Since guidelines aim to improve the appropriateness of medical practice and consequently to better serve the interests of patients, it is important to determine to what degree they document the ‘best practice’ standards. Such an analysis is an essential step to evaluate the different approaches permitting the disclosure of genetic information to family members.",
"title": ""
},
{
"docid": "6897b2842b041e75278aec7bc03ec870",
"text": "PURPOSE\nThe optimal treatment of systemic sclerosis (SSc) is a challenge because the pathogenesis of SSc is unclear and it is an uncommon and clinically heterogeneous disease affecting multiple organ systems. The aim of the European League Against Rheumatism (EULAR) Scleroderma Trials and Research group (EUSTAR) was to develop evidence-based, consensus-derived recommendations for the treatment of SSc.\n\n\nMETHODS\nTo obtain and maintain a high level of intrinsic quality and comparability of this approach, EULAR standard operating procedures were followed. The task force comprised 18 SSc experts from Europe, the USA and Japan, two SSc patients and three fellows for literature research. The preliminary set of research questions concerning SSc treatment was provided by 74 EUSTAR centres.\n\n\nRESULTS\nBased on discussion of the clinical research evidence from published literature, and combining this with current expert opinion and clinical experience, 14 recommendations for the treatment of SSc were formulated. The final set includes the following recommendations: three on SSc-related digital vasculopathy (Raynaud's phenomenon and ulcers); four on SSc-related pulmonary arterial hypertension; three on SSc-related gastrointestinal involvement; two on scleroderma renal crisis; one on SSc-related interstitial lung disease and one on skin involvement. Experts also formulated several questions for a future research agenda.\n\n\nCONCLUSIONS\nEvidence-based, consensus-derived recommendations are useful for rheumatologists to help guide treatment for patients with SSc. These recommendations may also help to define directions for future clinical research in SSc.",
"title": ""
},
{
"docid": "2c2574e1eb29ad45bedf346417c85e2d",
"text": "Technology has shown great promise in providing access to textual information for visually impaired people. Optical Braille Recognition (OBR) allows people with visual impairments to read volumes of typewritten documents with the help of flatbed scanners and OBR software. This project looks at developing a system to recognize an image of embossed Arabic Braille and then convert it to text. It particularly aims to build fully functional Optical Arabic Braille Recognition system. It has two main tasks, first is to recognize printed Braille cells, and second is to convert them to regular text. Converting Braille to text is not simply a one to one mapping, because one cell may represent one symbol (alphabet letter, digit, or special character), two or more symbols, or part of a symbol. Moreover, multiple cells may represent a single symbol.",
"title": ""
},
{
"docid": "557694b6db3f20adc700876d75ad7720",
"text": "Unseen Action Recognition (UAR) aims to recognise novel action categories without training examples. While previous methods focus on inner-dataset seen/unseen splits, this paper proposes a pipeline using a large-scale training source to achieve a Universal Representation (UR) that can generalise to a more realistic Cross-Dataset UAR (CDUAR) scenario. We first address UAR as a Generalised Multiple-Instance Learning (GMIL) problem and discover 'building-blocks' from the large-scale ActivityNet dataset using distribution kernels. Essential visual and semantic components are preserved in a shared space to achieve the UR that can efficiently generalise to new datasets. Predicted UR exemplars can be improved by a simple semantic adaptation, and then an unseen action can be directly recognised using UR during the test. Without further training, extensive experiments manifest significant improvements over the UCF101 and HMDB51 benchmarks.",
"title": ""
},
{
"docid": "3d401d8d3e6968d847795ccff4646b43",
"text": "In spite of growing frequency and sophistication of attacks two factor authentication schemes have seen very limited adoption in the US, and passwords remain the single factor of authentication for most bank and brokerage accounts. Clearly the cost benefit analysis is not as strongly in favor of two factor as we might imagine. Upgrading from passwords to a two factor authentication system usually involves a large engineering effort, a discontinuity of user experience and a hard key management problem. In this paper we describe a system to convert a legacy password authentication server into a two factor system. The existing password system is untouched, but is cascaded with a new server that verifies possession of a smartphone device. No alteration, patching or updates to the legacy system is necessary. There are now two alternative authentication paths: one using passwords alone, and a second using passwords and possession of the trusted device. The bank can leave the password authentication path available while users migrate to the two factor scheme. Once migration is complete the password-only path can be severed. We have implemented the system and carried out two factor authentication against real accounts at several major banks.",
"title": ""
},
{
"docid": "ca509048385b8cf28bd7b89c685f21b2",
"text": "Recent studies on knowledge base completion, the task of recovering missing relationships based on recorded relations, demonstrate the importance of learning embeddings from multi-step relations. However, due to the size of knowledge bases, learning multi-step relations directly on top of observed instances could be costly. In this paper, we propose Implicit ReasoNets (IRNs), which is designed to perform large-scale inference implicitly through a search controller and shared memory. Unlike previous work, IRNs use training data to learn to perform multi-step inference through the shared memory, which is also jointly updated during training. While the inference procedure is not operating on top of observed instances for IRNs, our proposed model outperforms all previous approaches on the popular FB15k benchmark by more than 5.7%.",
"title": ""
},
{
"docid": "16a0329d2b7a6995a48bdef0e845658a",
"text": "Digital market has never been so unstable due to more and more demanding users and new disruptive competitors. CEOs from most of industries investigate digitalization opportunities. Through a Systematic Literature Review, we found that digital transformation is more than just a technological shift. According to this study, these transformations have had an impact on the business models, the operational processes and the end-users experience. Considering the richness of this topic, we had proposed a research agenda of digital transformation in a managerial perspective.",
"title": ""
},
{
"docid": "05a5e3849c9fca4d788aa0210d8f7294",
"text": "The growth of mobile phone users has lead to a dramatic increasing of SMS spam messages. Recent reports clearly indicate that the volume of mobile phone spam is dramatically increasing year by year. In practice, fighting such plague is difficult by several factors, including the lower rate of SMS that has allowed many users and service providers to ignore the issue, and the limited availability of mobile phone spam-filtering software. Probably, one of the major concerns in academic settings is the scarcity of public SMS spam datasets, that are sorely needed for validation and comparison of different classifiers. Moreover, traditional content-based filters may have their performance seriously degraded since SMS messages are fairly short and their text is generally rife with idioms and abbreviations. In this paper, we present details about a new real, public and non-encoded SMS spam collection that is the largest one as far as we know. Moreover, we offer a comprehensive analysis of such dataset in order to ensure that there are no duplicated messages coming from previously existing datasets, since it may ease the task of learning SMS spam classifiers and could compromise the evaluation of methods. Additionally, we compare the performance achieved by several established machine learning techniques. In summary, the results indicate that the procedure followed to build the collection does not lead to near-duplicates and, regarding the classifiers, the Support Vector Machines outperforms other evaluated techniques and, hence, it can be used as a good baseline for further comparison. Keywords—Mobile phone spam; SMS spam; spam filtering; text categorization; classification.",
"title": ""
},
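Since the abstract above reports SVMs as the strongest baseline on the SMS spam collection, a minimal reproduction sketch using scikit-learn is shown below. The tab-separated file path and its column layout (label, then message text) are assumptions about the corpus format, not guaranteed by the paper.

```python
import csv
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

labels, texts = [], []
with open("SMSSpamCollection.tsv", encoding="utf-8") as fh:   # assumed path/format
    for label, text in csv.reader(fh, delimiter="\t"):
        labels.append(label)          # "ham" or "spam"
        texts.append(text)

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.3, stratify=labels, random_state=42)

# Character n-grams cope reasonably with the idioms/abbreviations typical of SMS.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC())
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```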
{
"docid": "bc2bc8b2d9db3eb14e126c627248a66a",
"text": "With the growing complexity of today's software applications injunction with the increasing competitive pressure has pushed the quality assurance of developed software towards new heights. Software testing is an inevitable part of the Software Development Lifecycle, and keeping in line with its criticality in the pre and post development process makes it something that should be catered with enhanced and efficient methodologies and techniques. This paper aims to discuss the existing as well as improved testing techniques for the better quality assurance purposes.",
"title": ""
},
{
"docid": "11806624e22ec2b72cd692755e8b2764",
"text": "The improvement of file access performance is a great challenge in real-time cloud services. In this paper, we analyze preconditions of dealing with this problem considering the aspects of requirements, hardware, software, and network environments in the cloud. Then we describe the design and implementation of a novel distributed layered cache system built on the top of the Hadoop Distributed File System which is named HDFS-based Distributed Cache System (HDCache). The cache system consists of a client library and multiple cache services. The cache services are designed with three access layers an in-memory cache, a snapshot of the local disk, and the actual disk view as provided by HDFS. The files loading from HDFS are cached in the shared memory which can be directly accessed by a client library. Multiple applications integrated with a client library can access a cache service simultaneously. Cache services are organized in the P2P style using a distributed hash table. Every file cached has three replicas in different cache service nodes in order to improve robustness and alleviates the workload. Experimental results show that the novel cache system can store files with a wide range in their sizes and has the access performance in a millisecond level in highly concurrent environments.",
"title": ""
},
{
"docid": "09baf9c55e7ae35bdcf88742ecdc01d5",
"text": "This paper presents the experimental evaluation of a Bluetooth-based positioning system. The method has been implemented in a Bluetooth-capable handheld device. Empirical tests of the developed considered positioning system have been realized in different indoor scenarios. The range estimation of the positioning system is based on an approximation of the relation between the RSSI (Radio Signal Strength Indicator) and the associated distance between sender and receiver. The actual location estimation is carried out by using the triangulation method. The implementation of the positioning system in a PDA (Personal Digital Assistant) has been realized by using the Software Microsoft eMbedded Visual C++ Version 3.0.",
"title": ""
},
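The Bluetooth positioning abstract relies on an RSSI-to-distance approximation followed by triangulation. A common way to sketch this (not necessarily the exact model used in the paper) is a log-distance path-loss formula plus linear least-squares trilateration over three or more anchors; the path-loss exponent, reference power, and RSSI readings below are assumed values.

```python
import numpy as np

def rssi_to_distance(rssi_dbm: float, tx_power_dbm: float = -59.0, n: float = 2.0) -> float:
    """Log-distance path-loss model: rssi = tx_power - 10*n*log10(d)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

def trilaterate(anchors: np.ndarray, distances: np.ndarray) -> np.ndarray:
    """Least-squares position from anchor coordinates (k x 2) and ranges (k,).

    Subtracting the first anchor's circle equation from the others yields a
    linear system A p = b that is solved with lstsq.
    """
    x0, y0 = anchors[0]
    d0 = distances[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return p

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
rssi = np.array([-65.0, -72.0, -70.0])                 # made-up readings
est = trilaterate(anchors, np.array([rssi_to_distance(r) for r in rssi]))
print(est)
```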
{
"docid": "6c829f1d93b0b943065bafab433e61b9",
"text": "recognition by using the Mel-Scale Frequency Cepstral Coefficients (MFCC) extracted from speech signal of spoken words. Principal Component Analysis is employed as the supplement in feature dimensional reduction state, prior to training and testing speech samples via Maximum Likelihood Classifier (ML) and Support Vector Machine (SVM). Based on experimental database of total 40 times of speaking words collected under acoustically controlled room, the sixteen-ordered MFCC extracts have shown the improvement in recognition rates significantly when training the SVM with more MFCC samples by randomly selected from database, compared with the ML.",
"title": ""
},
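The preceding abstract combines 16 MFCCs, PCA for dimensionality reduction, and ML/SVM classifiers for spoken-word recognition. A compact sketch of that pipeline using librosa and scikit-learn is given below; the file paths, the pooling of frame-level MFCCs into one vector per utterance, and all hyper-parameters are illustrative assumptions rather than the paper's exact setup, and `dataset` stands in for a real list of many labelled recordings.

```python
import numpy as np
import librosa
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def utterance_features(path: str, n_mfcc: int = 16) -> np.ndarray:
    """Mean and std of frame-level MFCCs -> one fixed-length vector per file."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, n_frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical (wav_path, word_label) pairs; a real run needs many such files.
dataset = [("data/zero_01.wav", "zero"), ("data/one_01.wav", "one")]
X = np.vstack([utterance_features(p) for p, _ in dataset])
y = np.array([label for _, label in dataset])

# Keep 95% of the variance in the PCA stage, then classify with an RBF SVM.
model = make_pipeline(StandardScaler(), PCA(n_components=0.95), SVC(kernel="rbf"))
model.fit(X, y)
print(model.predict(X[:1]))
```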
{
"docid": "bfe62c8e438ff5ec697203295e658450",
"text": "Using the qualitative participatory action methodology, collective memory work, this study explored how transgender, queer, and questioning (TQQ) youth make meaning of their sexual orientation and gender identity through high school experiences. Researchers identified three major conceptual but overlapping themes from the data generated in the transgender, queer, and questioning youth focus group: a need for resilience, you should be able to be safe, and this is what action looks like! The researchers discuss how as a research product, a documentary can effectively \"capture voices\" of participants, making research accessible and attractive to parents, practitioners, policy makers, and participants.",
"title": ""
},
{
"docid": "cbaff0ba24a648e8228a7663e3d32e97",
"text": "Microservice architecture has started a new trend for application development/deployment in cloud due to its flexibility, scalability, manageability and performance. Various microservice platforms have emerged to facilitate the whole software engineering cycle for cloud applications from design, development, test, deployment to maintenance. In this paper, we propose a performance analytical model and validate it by experiments to study the provisioning performance of microservice platforms. We design and develop a microservice platform on Amazon EC2 cloud using Docker technology family to identify important elements contributing to the performance of microservice platforms. We leverage the results and insights from experiments to build a tractable analytical performance model that can be used to perform what-if analysis and capacity planning in a systematic manner for large scale microservices with minimum amount of time and cost.",
"title": ""
},
{
"docid": "241a1589619c2db686675327cab1e8da",
"text": "This paper describes a simple computational model of joint torque and impedance in human arm movements that can be used to simulate three-dimensional movements of the (redundant) arm or leg and to design the control of robots and human-machine interfaces. This model, based on recent physiological findings, assumes that (1) the central nervous system learns the force and impedance to perform a task successfully in a given stable or unstable dynamic environment and (2) stiffness is linearly related to the magnitude of the joint torque and increased to compensate for environment instability. Comparison with existing data shows that this simple model is able to predict impedance geometry well.",
"title": ""
},
{
"docid": "8390fd7e559832eea895fabeb48c3549",
"text": "An algorithm is presented to perform connected component labeling of images of arbitrary dimension that are represented by a linear bintree. The bintree is a generalization of the quadtree data structure that enables dealing with images of arbitrary dimension. The linear bintree is a pointerless representation. The algorithm uses an active border which is represented by linked lists instead of arrays. This results in a significant reduction in the space requirements, thereby making it feasible to process threeand higher dimensional images. Analysis of the execution time of the algorithm shows almost linear behavior with respect to the number of leaf nodes in the image, and empirical tests are in agreement. The algorithm can be modified easily to compute a ( d 1)-dimensional boundary measure (e.g., perimeter in two dimensions and surface area in three dimensions) with linear",
"title": ""
}
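For contrast with the bintree-based, pointerless algorithm described above, the snippet below computes the same end result — connected component labeling of a binary image — with scipy's off-the-shelf array-based routine; it does not implement the paper's linear-bintree/active-border method, and the toy image and component-size measure are illustrative only.

```python
import numpy as np
from scipy import ndimage

image = np.array([
    [1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 0, 0, 1, 0],
    [0, 1, 0, 0, 0],
], dtype=int)

# Default 4-connectivity: touching foreground pixels receive the same label.
labels, n_components = ndimage.label(image)
print(n_components)          # 3 components for this toy image
print(labels)

# Per-component pixel counts, a crude stand-in for per-component measures.
sizes = ndimage.sum(image, labels, index=range(1, n_components + 1))
print(sizes)
```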
] |
scidocsrr
|
ed6bcbd5060a4b49d926a86cd3aa31b3
|
Failure mechanisms and closed reduction of a constrained tripolar acetabular liner.
|
[
{
"docid": "13642d5d73a58a1336790f74a3f0eac7",
"text": "Fifty-eight patients received an Osteonics constrained acetabular implant for recurrent instability (46), girdlestone reimplant (8), correction of leg lengthening (3), and periprosthetic fracture (1). The constrained liner was inserted into a cementless shell (49), cemented into a pre-existing cementless shell (6), cemented into a cage (2), and cemented directly into the acetabular bone (1). Eight patients (13.8%) required reoperation for failure of the constrained implant. Type I failure (bone-prosthesis interface) occurred in 3 cases. Two cementless shells became loose, and in 1 patient, the constrained liner was cemented into an acetabular cage, which then failed by pivoting laterally about the superior fixation screws. Type II failure (liner locking mechanism) occurred in 2 cases. Type III failure (femoral head locking mechanism) occurred in 3 patients. Seven of the 8 failures occurred in patients with recurrent instability. Constrained liners are an effective method for treatment during revision total hip arthroplasty but should be used in select cases only.",
"title": ""
}
] |
[
{
"docid": "86aa04b01d2db65abd5ddd5d62b91645",
"text": "Asthma is a serious health problem throughout the world. During the past two decades, many scientific advances have improved our understanding of asthma and ability to manage and control it effectively. However, recommendations for asthma care need to be adapted to local conditions, resources and services. Since it was formed in 1993, the Global Initiative for Asthma, a network of individuals, organisations and public health officials, has played a leading role in disseminating information about the care of patients with asthma based on a process of continuous review of published scientific investigations. A comprehensive workshop report entitled \"A Global Strategy for Asthma Management and Prevention\", first published in 1995, has been widely adopted, translated and reproduced, and forms the basis for many national guidelines. The 2006 report contains important new themes. First, it asserts that \"it is reasonable to expect that in most patients with asthma, control of the disease can and should be achieved and maintained,\" and recommends a change in approach to asthma management, with asthma control, rather than asthma severity, being the focus of treatment decisions. The importance of the patient-care giver partnership and guided self-management, along with setting goals for treatment, are also emphasised.",
"title": ""
},
{
"docid": "51e78c504a3977ea7e706da7e3a06c25",
"text": "This work introduces an affordance characterization employing mechanical wrenches as a metric for predicting and planning with workspace affordances. Although affordances are a commonly used high-level paradigm for robotic task-level planning and learning, the literature has been sparse regarding how to characterize the agent in this object-agent-environment framework. In this work, we propose decomposing a behavior into a vocabulary of characteristic requirements and capabilities that are suitable to predict the affordances of various parts of the workspace. Specifically, we investigate mechanical wrenches as a viable representation of these affordance requirements and capabilities. We then use this vocabulary in a planning system to compose complex motions from simple behavior types in continuous space. The utility of the framework for complex planning is demonstrated on example scenarios both in simulation and with real-world industrial manipulators.",
"title": ""
},
{
"docid": "99b25b7187aa4e3ea85a6ce60173c7f8",
"text": "Modern advanced analytics applications make use of machine learning techniques and contain multiple steps of domain-specific and general-purpose processing with high resource requirements. We present KeystoneML, a system that captures and optimizes the end-to-end large-scale machine learning applications for high-throughput training in a distributed environment with a high-level API. This approach offers increased ease of use and higher performance over existing systems for large scale learning. We demonstrate the effectiveness of KeystoneML in achieving high quality statistical accuracy and scalable training using real world datasets in several domains.",
"title": ""
},
{
"docid": "fc9eae18a5a44ee7df22d6c7bdb5a164",
"text": "In this paper, methods are shown how to adapt invertible two-dimensional chaotic maps on a torus or on a square to create new symmetric block encryption schemes. A chaotic map is first generalized by introducing parameters and then discretized to a finite square lattice of points which represent pixels or some other data items. Although the discretized map is a permutation and thus cannot be chaotic, it shares certain properties with its continuous counterpart as long as the number of iterations remains small. The discretized map is further extended to three dimensions and composed with a simple diffusion mechanism. As a result, a symmetric block product encryption scheme is obtained. To encrypt an N × N image, the ciphering map is iteratively applied to the image. The construction of the cipher and its security is explained with the two-dimensional Baker map. It is shown that the permutations induced by the Baker map behave as typical random permutations. Computer simulations indicate that the cipher has good diffusion properties with respect to the plain-text and the key. A nontraditional pseudo-random number generator based on the encryption scheme is described and studied. Examples of some other two-dimensional chaotic maps are given and their suitability for secure encryption is discussed. The paper closes with a brief discussion of a possible relationship between discretized chaos and cryptosystems.",
"title": ""
},
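The paper builds its cipher from a discretized, parameterised Baker map iterated over an N x N lattice of pixels. As a simplified stand-in (explicitly not the Baker map construction itself), the sketch below permutes an image with another invertible chaotic torus map, Arnold's cat map, to show how repeated application of a discretized chaotic map scrambles pixel positions; the iteration count plays the role of a toy key, and the random test image is arbitrary.

```python
import numpy as np

def cat_map(image: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Apply Arnold's cat map (x, y) -> (x + y, x + 2y) mod N to pixel positions.

    The map is a bijection on the N x N lattice, so it is invertible and can
    serve as the permutation stage of a toy block cipher.
    """
    n = image.shape[0]
    assert image.shape[0] == image.shape[1], "square image expected"
    xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = image
    for _ in range(iterations):
        new = np.empty_like(out)
        new[(xs + ys) % n, (xs + 2 * ys) % n] = out[xs, ys]
        out = new
    return out

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
scrambled = cat_map(img, iterations=7)   # "7" acts as a toy key
assert not np.array_equal(img, scrambled)
# The map is periodic on the discrete lattice, so enough further iterations recover img.
```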
{
"docid": "613f0bf05fb9467facd2e58b70d2b09e",
"text": "The gold standard for improving sensory, motor and or cognitive abilities is long-term training and practicing. Recent work, however, suggests that intensive training may not be necessary. Improved performance can be effectively acquired by a complementary approach in which the learning occurs in response to mere exposure to repetitive sensory stimulation. Such training-independent sensory learning (TISL), which has been intensively studied in the somatosensory system, induces in humans lasting changes in perception and neural processing, without any explicit task training. It has been suggested that the effectiveness of this form of learning stems from the fact that the stimulation protocols used are optimized to alter synaptic transmission and efficacy. TISL provides novel ways to investigate in humans the relation between learning processes and underlying cellular and molecular mechanisms, and to explore alternative strategies for intervention and therapy.",
"title": ""
},
{
"docid": "fea51b89ab1946dd7e4441009a4ea106",
"text": "The cost of acquiring, managing, and maintaining ICT infrastructure is one of the main factors that hinder educational institutions in Sub-Saharan countries to adopt and implement eLearning. Recently, cloud computing has emerged as a new computing paradigm for delivering cost effective computing services that can be used to harness eLearning. However, the adoption of cloud computing in higher education in Sub-Saharan countries is very low. Although there are many factors that may influence educational institutions to adopt cloud services, cost effectiveness is often a key factor. Far too little is known on how much the use of cloud computing can be cost effective in delivering eLearning services. This paper compares the cost of hosting eLearning services between on-premise and cloud-hosted approaches in higher education, taking Tanzania as a case study. The study found that institutions can significantly reduce the cost of eLearning implementation by adopting a cloud-hosted approach. The findings of this study serve as a base for educational institutions seeking cost effective alternatives to implement eLearning in developing countries.",
"title": ""
},
{
"docid": "21ac4a0a2fdd37d302997033fd85867c",
"text": "This paper discusses our initial work in developing metrics for software adaptability. In this paper we have developed several metrics for software adaptability. One of the advantages of the metrics that we have developed is that they are applicable at the architectural level. Since architecture development is the first stage of the design process, the extent to which the architecture is adaptable will determine the adaptability of the final software. Hence the metrics in this paper will help determine the extent to which the final software will be adaptable as well.",
"title": ""
},
{
"docid": "e59f3f8e0deea8b4caa32b54049ad76b",
"text": "We present AD, a new algorithm for approximate maximum a posteriori (MAP) inference on factor graphs, based on the alternating directions method of multipliers. Like other dual decomposition algorithms, AD has a modular architecture, where local subproblems are solved independently, and their solutions are gathered to compute a global update. The key characteristic of AD is that each local subproblem has a quadratic regularizer, leading to faster convergence, both theoretically and in practice. We provide closed-form solutions for these AD subproblems for binary pairwise factors and factors imposing first-order logic constraints. For arbitrary factors (large or combinatorial), we introduce an active set method which requires only an oracle for computing a local MAP configuration, making AD applicable to a wide range of problems. Experiments on synthetic and real-world problems show that AD compares favorably with the state-of-the-art.",
"title": ""
},
{
"docid": "7319ef7763ac2e79e946d29e7dba623a",
"text": "Computer system security is one of the most popular and the fastest evolving Information Technology (IT) areas. Protection of information access, availability and data integrity represents the basic security characteristics desired on information sources. Any disruption of these properties would result in system intrusion and the related security risk. Advanced decoy based technology called Honeypot has a huge potential for the security community and can achieve several goals of other security technologies, which makes it almost universal. Paper is devoted to sophisticated hybrid Honeypot with autonomous feature that allows to, based on the collected system parameters, adapt to the system of deployment. By its presence Honeypot attracts attacker by simulating vulnerabilities and poor security. After initiation of interaction Honeypot will record all attacker activities and after data analysis allows improving security in computer systems.",
"title": ""
},
{
"docid": "40ead715dc2c6f2bdf0920d0bf3a227d",
"text": "Abundant data link hypercholesterolaemia to atherogenesis. However, only recently have we appreciated that inflammatory mechanisms couple dyslipidaemia to atheroma formation. Leukocyte recruitment and expression of pro-inflammatory cytokines characterize early atherogenesis, and malfunction of inflammatory mediators mutes atheroma formation in mice. Moreover, inflammatory pathways promote thrombosis, a late and dreaded complication of atherosclerosis responsible for myocardial infarctions and most strokes. The new appreciation of the role of inflammation in atherosclerosis provides a mechanistic framework for understanding the clinical benefits of lipid-lowering therapies. Identifying the triggers for inflammation and unravelling the details of inflammatory pathways may eventually furnish new therapeutic targets.",
"title": ""
},
{
"docid": "9593712906aa8272716a7fe5b482b91d",
"text": "User stories are a widely used notation for formulating requirements in agile development projects. Despite their popularity in industry, little to no academic work is available on assessing their quality. The few existing approaches are too generic or employ highly qualitative metrics. We propose the Quality User Story Framework, consisting of 14 quality criteria that user story writers should strive to conform to. Additionally, we introduce the conceptual model of a user story, which we rely on to design the AQUSA software tool. AQUSA aids requirements engineers in turning raw user stories into higher-quality ones by exposing defects and deviations from good practice in user stories. We evaluate our work by applying the framework and a prototype implementation to three user story sets from industry.",
"title": ""
},
{
"docid": "9738485d5c61ac43e3a1e101b063dfd5",
"text": "Sentiment analysis is one of the most popular natural language processing techniques. It aims to identify the sentiment polarity (positive, negative, neutral or mixed) within a given text. The proper lexicon knowledge is very important for the lexicon-based sentiment analysis methods since they hinge on using the polarity of the lexical item to determine a text's sentiment polarity. However, it is quite common that some lexical items appear positive in the text of one domain but appear negative in another. In this paper, we propose an innovative knowledge building algorithm to extract sentiment lexicon knowledge through computing their polarity value based on their polarity distribution in text dataset, such as in a set of domain specific reviews. The proposed algorithm was tested by a set of domain microblogs. The results demonstrate the effectiveness of the proposed method. The proposed lexicon knowledge extraction method can enhance the performance of knowledge based sentiment analysis.",
"title": ""
},
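The abstract's idea — computing a lexical item's polarity from how it is distributed across positive and negative domain documents — can be sketched with a smoothed log-odds score, shown below. This is a generic illustration, not the paper's exact polarity formula, and the toy review snippets are invented.

```python
import math
from collections import Counter

def polarity_scores(pos_docs, neg_docs, smoothing: float = 1.0):
    """Score each word by its smoothed log-odds of appearing in positive
    versus negative documents; > 0 leans positive, < 0 leans negative."""
    pos_counts = Counter(w for doc in pos_docs for w in doc.lower().split())
    neg_counts = Counter(w for doc in neg_docs for w in doc.lower().split())
    pos_total = sum(pos_counts.values())
    neg_total = sum(neg_counts.values())
    vocab = set(pos_counts) | set(neg_counts)
    return {
        w: math.log((pos_counts[w] + smoothing) / (pos_total + smoothing * len(vocab)))
           - math.log((neg_counts[w] + smoothing) / (neg_total + smoothing * len(vocab)))
        for w in vocab
    }

pos = ["battery life is long and the screen is great", "great value great camera"]
neg = ["battery died fast", "screen cracked and the camera is awful"]
scores = polarity_scores(pos, neg)
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:3])  # most positive words
```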
{
"docid": "d60c51cf9ca05e5b1b176494572baaf3",
"text": "Graph visualizations encode relationships between objects. Abstracting the objects into group structures provides an overview of the data. Groups can be disjoint or overlapping, and might be organized hierarchically. However, the underlying graph still needs to be represented for analyzing the data in more depth. This work surveys research in visualizing group structures as part of graph diagrams. A particular focus is the explicit visual encoding of groups, rather than only using graph layout to indicate groups implicitly. We introduce a taxonomy of visualization techniques structuring the field into four main categories: visual node attributes vary properties of the node representation to encode the grouping, juxtaposed approaches use two separate visualizations, superimposed techniques work with two aligned visual layers, and embedded visualizations tightly integrate group and graph representation. The derived taxonomies for group structure and visualization types are also applied to group visualizations of edges. We survey group-only, group–node, group–link, and group–network tasks that are described in the literature as use cases of group visualizations. We discuss results from evaluations of existing visualization techniques as well as main areas of application. Finally, we report future challenges based on interviews we conducted with leading researchers of the field.",
"title": ""
},
{
"docid": "782eaf93618c0e6b066519459bcdbdad",
"text": "A model based on strikingly different philosophical as. sumptions from those currently popular is proposed for the design of online subject catalog access. Three design principles are presented and discussed: uncertainty (subject indexing is indeterminate and probabilis-tic beyond a certain point), variety (by Ashby's law of requisite variety, variety of searcher query must equal variety of document indexing), and complexity (the search process, particularly during the entry and orientation phases, is subtler and more complex, on several grounds, than current models assume). Design features presented are an access phase, including entry and orientation , a hunting phase, and a selection phase. An end-user thesaurus and a front-end system mind are presented as examples of online catalog system components to improve searcher success during entry and orientation. The proposed model is \" wrapped around \" existing Library of Congress subject-heading indexing in such a way as to enhance access greatly without requiring reindexing. It is argued that both for cost reasons and in principle this is a superior approach to other design philosophies .",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "e29c44032fd3c6bbf1859c055e4a2bae",
"text": "BACKGROUND\nAutism and Williams syndrome (WS) are neuro-developmental disorders associated with distinct social phenotypes. While individuals with autism show a lack of interest in socially important cues, individuals with WS often show increased interest in socially relevant information.\n\n\nMETHODS\nThe current eye-tracking study explores how individuals with WS and autism preferentially attend to social scenes and movie extracts containing human actors and cartoon characters. The proportion of gaze time spent fixating on faces, bodies and the scene background was investigated.\n\n\nRESULTS\nWhile individuals with autism preferentially attended to characters' faces for less time than was typical, individuals with WS attended to the same regions for longer than typical. For individuals with autism atypical gaze behaviours extended across human actor and cartoon images or movies but for WS atypicalities were restricted to human actors.\n\n\nCONCLUSIONS\nThe reported gaze behaviours provide experimental evidence of the divergent social interests associated with autism and WS.",
"title": ""
},
{
"docid": "2a1d77e0c5fe71c3c5eab995828ef113",
"text": "Local modular control (LMC) is an approach to the supervisory control theory (SCT) of discrete-event systems that exploits the modularity of plant and specifications. Recently, distinguishers and approximations have been associated with SCT to simplify modeling and reduce synthesis effort. This paper shows how advantages from LMC, distinguishers, and approximations can be combined. Sufficient conditions are presented to guarantee that local supervisors computed by our approach lead to the same global closed-loop behavior as the solution obtained with the original LMC, in which the modeling is entirely handled without distinguishers. A further contribution presents a modular way to design distinguishers and a straightforward way to construct approximations to be used in local synthesis. An example of manufacturing system illustrates our approach. Note to Practitioners—Distinguishers and approximations are alternatives to simplify modeling and reduce synthesis cost in SCT, grounded on the idea of event-refinements. However, this approach may entangle the modular structure of a plant, so that LMC does not keep the same efficiency. This paper shows how distinguishers and approximations can be locally combined such that synthesis cost is reduced and LMC advantages are preserved.",
"title": ""
},
{
"docid": "a66dd42b9d9b8912726e278e4f2da411",
"text": "A significant amount of marine debris has accumulated in the North Pacific Central Gyre (NPCG). The effects on larger marine organisms have been documented through cases of entanglement and ingestion; however, little is known about the effects on lower trophic level marine organisms. This study is the first to document ingestion and quantify the amount of plastic found in the gut of common planktivorous fish in the NPCG. From February 11 to 14, 2008, 11 neuston samples were collected by manta trawl in the NPCG. Plastic from each trawl and fish stomach was counted and weighed and categorized by type, size class and color. Approximately 35% of the fish studied had ingested plastic, averaging 2.1 pieces per fish. Additional studies are needed to determine the residence time of ingested plastics and their effects on fish health and the food chain implications.",
"title": ""
},
{
"docid": "3f8e6ebe83ba2d4bf3a1b4ab5044b6e4",
"text": "-This paper discusses the ways in which automation of industrial processes may expand rather than eliminate problems with the human operator. Some comments will be made on methods of alleviating these problems within the \"classic' approach of leaving the operator with responsibility for abnormal conditions, and on the potential for continued use of the human operator for on-line decision-making within human-computer collaboration. Irony: combination of circumstances, the result of which is the direct opposite of what might be expected. Paradox: seemingly absurd though perhaps really well-founded",
"title": ""
},
{
"docid": "5e360af9f3fa234afe9d2f71d04cc64c",
"text": "Personality is an important psychological construct accounting for individual differences in people. To reliably, validly, and efficiently recognize an individual’s personality is a worthwhile goal; however, the traditional ways of personality assessment through self-report inventories or interviews conducted by psychologists are costly and less practical in social media domains, since they need the subjects to take active actions to cooperate. This paper proposes a method of big five personality recognition (PR) from microblog in Chinese language environments with a new machine learning paradigm named label distribution learning (LDL), which has never been previously reported to be used in PR. One hundred and thirteen features are extracted from 994 active Sina Weibo users’ profiles and micro-blogs. Eight LDL algorithms and nine non-trivial conventional machine learning algorithms are adopted to train the big five personality traits prediction models. Experimental results show that two of the proposed LDL approaches outperform the others in predictive ability, and the most predictive one also achieves relatively higher running efficiency among all the algorithms.",
"title": ""
}
] |
scidocsrr
|
376deb0e020708dd1827198245fd0900
|
ACTIVITY RECOGNITION ON SMART DEVICES: Dealing with diversity in the wild
|
[
{
"docid": "046837c87b6d6c789cc060c1dfa273c0",
"text": "The last 20 years have seen ever-increasing research activity in the field of human activity recognition. With activity recognition having considerably matured, so has the number of challenges in designing, implementing, and evaluating activity recognition systems. This tutorial aims to provide a comprehensive hands-on introduction for newcomers to the field of human activity recognition. It specifically focuses on activity recognition using on-body inertial sensors. We first discuss the key research challenges that human activity recognition shares with general pattern recognition and identify those challenges that are specific to human activity recognition. We then describe the concept of an Activity Recognition Chain (ARC) as a general-purpose framework for designing and evaluating activity recognition systems. We detail each component of the framework, provide references to related research, and introduce the best practice methods developed by the activity recognition research community. We conclude with the educational example problem of recognizing different hand gestures from inertial sensors attached to the upper and lower arm. We illustrate how each component of this framework can be implemented for this specific activity recognition problem and demonstrate how different implementations compare and how they impact overall recognition performance.",
"title": ""
}
] |
[
{
"docid": "22b008f0bafcbfd8fae82fb76b4a4568",
"text": "Mining sequential rules requires specifying parameters that are often difficult to set (the minimal confidence and minimal support). Depending on the choice of these parameters, current algorithms can become very slow and generate an extremely large amount of results or generate too few results, omitting valuable information. This is a serious problem because in practice users have limited resources for analyzing the results and thus are often only interested in discovering a certain amount of results, and fine-tuning the parameters can be very time-consuming. In this paper, we address this problem by proposing TopSeqRules, an efficient algorithm for mining the top-k sequential rules from sequence databases, where k is the number of sequential rules to be found and is set by the user. Experimental results on real-life datasets show that the algorithm has excellent performance and scalability.",
"title": ""
},
{
"docid": "8db733045dd0689e21f35035f4545eff",
"text": "An important research area of Spectrum-Based Fault Localization (SBFL) is the effectiveness of risk evaluation formulas. Most previous studies have adopted an empirical approach, which can hardly be considered as sufficiently comprehensive because of the huge number of combinations of various factors in SBFL. Though some studies aimed at overcoming the limitations of the empirical approach, none of them has provided a completely satisfactory solution. Therefore, we provide a theoretical investigation on the effectiveness of risk evaluation formulas. We define two types of relations between formulas, namely, equivalent and better. To identify the relations between formulas, we develop an innovative framework for the theoretical investigation. Our framework is based on the concept that the determinant for the effectiveness of a formula is the number of statements with risk values higher than the risk value of the faulty statement. We group all program statements into three disjoint sets with risk values higher than, equal to, and lower than the risk value of the faulty statement, respectively. For different formulas, the sizes of their sets are compared using the notion of subset. We use this framework to identify the maximal formulas which should be the only formulas to be used in SBFL.",
"title": ""
},
{
"docid": "809b5194b8f842a6e3f7e5b8748fefc3",
"text": "Failure modes and mechanisms of AlGaN/GaN high-electron-mobility transistors are reviewed. Data from three de-accelerated tests are presented, which demonstrate a close correlation between failure modes and bias point. Maximum degradation was found in \"semi-on\" conditions, close to the maximum of hot-electron generation which was detected with the aid of electroluminescence (EL) measurements. This suggests a contribution of hot-electron effects to device degradation, at least at moderate drain bias (VDS<30 V). A procedure for the characterization of hot carrier phenomena based on EL microscopy and spectroscopy is described. At high drain bias (VDS>30-50 V), new failure mechanisms are triggered, which induce an increase of gate leakage current. The latter is possibly related with the inverse piezoelectric effect leading to defect generation due to strain relaxation, and/or to localized permanent breakdown of the AlGaN barrier layer. Results are compared with literature data throughout the text.",
"title": ""
},
{
"docid": "0b40ed36adf91476da945ca9becc0c40",
"text": "The popularity of social-networking sites, blogging and other content-sharing sites has exploded, resulting in more personal information and opinions being available with less access control than ever before [5]. Many content-sharing sites provide only the most rudimentary access control: a document can be either completely private or completely public. Other sites offer the slightly more flexible private/friends/public access-control model, but this still fails to support natural distinctions users need, such as separating real-world friends from online friends. The traditional response to these privacy concerns is to post anonymously or pseudonymously, but recent psychological research shows that some Internet users do not establish separate, online personae, but instead consider their online identity as an extension of their real-life self [3]. And although privacy expectations that users desire are easy to state, there is a large gap between the users’ mental models and the policy languages of traditional access-control systems [2]. The consequences of poor access control are welldocumented in the news media. Bloggers have lost their jobs when their employer discovered the employee’s personal blog [9]. Sexual predators use social-networking sites to find victims [7]. Bloggers have been stalked based on the opinions and personal information placed on their blog [8]. Universities have disciplined students using photographs published on social-networking sites [1]. For all these reasons, we advocate that blogs and social networks need a policy mechanism that supports high-level policies that can be expressed succinctly, applied automatically, and updated easily. Current access-control systems fail to meet these goals. Users manually enforce and manage their policies, users, groups, and roles of the system. Furthermore, these systems lack intuitive tools and interfaces for policy generation. We propose to solve all these problems by specifying access-control policies in terms of the content being mediated, e.g. “Blog posts about my home-town are visible to my high school friends.” The system will then automatically infer the posts that are subject to policy rules based on the posts’ contents. Similarly, the system can infer relationships and interests of the users based on the content of objects they create (see Section 3). Such policies will be intuitive and easy to specify, greatly enhancing usability for non-technical users. We first discuss the current state of access control on content-driven sites and analyze approaches proposed in literature for implementing access control for the web. We then describe our proposed method of access control for content-sharing sites.",
"title": ""
},
{
"docid": "36d0358eb3c668817fa33e13e197678a",
"text": "Please note that gray areas reflect artwork that has been intentionally removed. The substantive content of the article appears as originally published.",
"title": ""
},
{
"docid": "d1741f908ea854331c8c40f2d3334882",
"text": "We train a generator by maximum likelihood and we also train the same generator architecture by Wasserstein GAN. We then compare the generated samples, exact log-probability densities and approximate Wasserstein distances. We show that an independent critic trained to approximate Wasserstein distance between the validation set and the generator distribution helps detect overfitting. Finally, we use ideas from the one-shot learning literature to develop a novel fast learning critic.",
"title": ""
},
{
"docid": "eb0672f019c82dfe0614b39d3e89be2e",
"text": "The support of medical decisions comes from several sources. These include individual physician experience, pathophysiological constructs, pivotal clinical trials, qualitative reviews of the literature, and, increasingly, meta-analyses. Historically, the first of these four sources of knowledge largely informed medical and dental decision makers. Meta-analysis came on the scene around the 1970s and has received much attention. What is meta-analysis? It is the process of combining the quantitative results of separate (but similar) studies by means of formal statistical methods. Statistically, the purpose is to increase the precision with which the treatment effect of an intervention can be estimated. Stated in another way, one can say that meta-analysis combines the results of several studies with the purpose of addressing a set of related research hypotheses. The underlying studies can come in the form of published literature, raw data from individual clinical studies, or summary statistics in reports or abstracts. More broadly, a meta-analysis arises from a systematic review. There are three major components to a systematic review and meta-analysis. The systematic review starts with the formulation of the research question and hypotheses. Clinical or substantive insight about the particular domain of research often identifies not only the unmet investigative needs, but helps prepare for the systematic review by defining the necessary initial parameters. These include the hypotheses, endpoints, important covariates, and exposures or treatments of interest. Like any basic or clinical research endeavor, a prospectively defined and clear study plan enhances the expected utility and applicability of the final results for ultimately influencing practice or policy. After this foundational preparation, the second component, a systematic review, commences. The systematic review proceeds with an explicit and reproducible protocol to locate and evaluate the available data. The collection, abstraction, and compilation of the data follow a more rigorous and prospectively defined objective process. The definitions, structure, and methodologies of the underlying studies must be critically appraised. Hence, both “the content” and “the infrastructure” of the underlying data are analyzed, evaluated, and systematically recorded. Unlike an informal review of the literature, this systematic disciplined approach is intended to reduce the potential for subjectivity or bias in the subsequent findings. Typically, a literature search of an online database is the starting point for gathering the data. The most common sources are MEDLINE (United States Library of Overview, Strengths, and Limitations of Systematic Reviews and Meta-Analyses",
"title": ""
},
{
"docid": "8d6ebefca528255bc14561e1106522af",
"text": "Constant power loads may yield instability due to the well-known negative impedance characteristic. This paper analyzes the factors that cause instability of a dc microgrid with multiple dc–dc converters. Two stabilization methods are presented for two operation modes: 1) constant voltage source mode; and 2) droop mode, and sufficient conditions for the stability of the dc microgrid are obtained by identifying the eigenvalues of the Jacobian matrix. The key is to transform the eigenvalue problem to a quadratic eigenvalue problem. When applying the methods in practical engineering, the salient feature is that the stability parameter domains can be estimated by the available constraints, such as the values of capacities, inductances, maximum load power, and distances of the cables. Compared with some classical methods, the proposed methods have wider stability region. The simulation results based on MATLAB/simulink platform verify the feasibility of the methods.",
"title": ""
},
{
"docid": "eff45b92173acbc2f6462c3802d19c39",
"text": "There are shortcomings in traditional theorizing about effective ways of coping with bereavement, most notably, with respect to the so-called \"grief work hypothesis.\" Criticisms include imprecise definition, failure to represent dynamic processing that is characteristic of grieving, lack of empirical evidence and validation across cultures and historical periods, and a limited focus on intrapersonal processes and on health outcomes. Therefore, a revised model of coping with bereavement, the dual process model, is proposed. This model identifies two types of stressors, loss- and restoration-oriented, and a dynamic, regulatory coping process of oscillation, whereby the grieving individual at times confronts, at other times avoids, the different tasks of grieving. This model proposes that adaptive coping is composed of confrontation--avoidance of loss and restoration stressors. It also argues the need for dosage of grieving, that is, the need to take respite from dealing with either of these stressors, as an integral part of adaptive coping. Empirical research to support this conceptualization is discussed, and the model's relevance to the examination of complicated grief, analysis of subgroup phenomena, as well as interpersonal coping processes, is described.",
"title": ""
},
{
"docid": "a3fdbc08bd9b73474319f9bc5c510f85",
"text": "With the rapid increase of mobile devices, the computing load of roadside cloudlets is fast growing. When the computation tasks of the roadside cloudlet reach the limit, the overload may generate heat radiation problem and unacceptable delay to mobile users. In this paper, we leverage the characteristics of buses and propose a scalable fog computing paradigm with servicing offloading in bus networks. The bus fog servers not only provide fog computing services for the mobile users on bus, but also are motivated to accomplish the computation tasks offloaded by roadside cloudlets. By this way, the computing capability of roadside cloudlets is significantly extended. We consider an allocation strategy using genetic algorithm (GA). With this strategy, the roadside cloudlets spend the least cost to offload their computation tasks. Meanwhile, the user experience of mobile users are maintained. The simulations validate the advantage of the propose scheme.",
"title": ""
},
{
"docid": "54657c37ded0d3bd55eee298866e4154",
"text": "The Genia Event Extraction task is organized for the third time, in BioNLP Shared Task 2013. Toward knowledge based construction, the task is modified in a number of points. As the final results, it received 12 submissions, among which 2 were withdrawn from the final report. This paper presents the task setting, data sets, and the final results with discussion for possible future directions.",
"title": ""
},
{
"docid": "2b91abf4b2a12c852fc78eb40b0b22ba",
"text": "Interdisciplinary research broadens the view of particular problems yielding fresh and possibly unexpected insights. This is the case of neuromorphic engineering where technology and neuroscience cross-fertilize each other. For example, consider on one side the recently discovered memristor, postulated in 1971, thanks to research in nanotechnology electronics. On the other side, consider the mechanism known as Spike-TimeDependent-Plasticity (STDP) which describes a neuronal synaptic learning mechanism that outperforms the traditional Hebbian synaptic plasticity proposed in 1949. STDP was originally postulated as a computer learning algorithm, and is being used by the machine intelligence and computational neuroscience community. At the same time its biological and physiological foundations have been reasonably well established during the past decade. If memristance and STDP can be related, then (a) recent discoveries in nanophysics and nanoelectronic principles may shed new lights into understanding the intricate molecular and physiological mechanisms behind STDP in neuroscience, and (b) new neuromorphic-like computers built out of nanotechnology memristive devices could incorporate the biological STDP mechanisms yielding a new generation of self-adaptive ultrahigh-dense intelligent machines. Here we show that by combining memristance models with the electrical wave signals of neural impulses (spikes) converging from preand post-synaptic neurons into a synaptic junction, STDP behavior emerges naturally. This result serves to understand how neural and memristance parameters modulate STDP, which might bring new insights to neurophysiologists in searching for the ultimate physiological mechanisms responsible for STDP in biological synapses. At the same time, this result also provides a direct mean to incorporate STDP learning mechanisms into a new generation of nanotechnology computers employing memristors. Memristance was postulated in 1971 by Chua based on circuit theoretical reasonings and has been recently demonstrated in nanoscale two-terminal devices, such as certain titanium-dioxide and amorphous Silicon cross-point switches. Memristance arises naturally in nanoscale devices because small voltages can yield enormous electric fields that produce the motion of charged atomic or molecular species changing structural properties of a device (such as its conductance) while it operates. By definition a memristor obeys equations of the form",
"title": ""
},
{
"docid": "651e4362136a5700a9beaa7242dae654",
"text": "This thesis makes several contributions to the field of data compression. Lossless data compression algorithms shorten the description of input objects, such as sequences of text, in a way that allows perfect recovery of the original object. Such algorithms exploit the fact that input objects are not uniformly distributed: by allocating shorter descriptions to more probable objects and longer descriptions to less probable objects, the expected length of the compressed output can be made shorter than the object’s original description. Compression algorithms can be designed to match almost any given probability distribution over input objects. This thesis employs probabilistic modelling, Bayesian inference, and arithmetic coding to derive compression algorithms for a variety of applications, making the underlying probability distributions explicit throughout. A general compression toolbox is described, consisting of practical algorithms for compressing data distributed by various fundamental probability distributions, and mechanisms for combining these algorithms in a principled way. Building on the compression toolbox, new mathematical theory is introduced for compressing objects with an underlying combinatorial structure, such as permutations, combinations, and multisets. An example application is given that compresses unordered collections of strings, even if the strings in the collection are individually incompressible. For text compression, a novel unifying construction is developed for a family of contextsensitive compression algorithms. Special cases of this family include the PPM algorithm and the Sequence Memoizer, an unbounded depth hierarchical Pitman–Yor process model. It is shown how these algorithms are related, what their probabilistic models are, and how they produce fundamentally similar results. The work concludes with experimental results, example applications, and a brief discussion on cost-sensitive compression and adversarial sequences.",
"title": ""
},
{
"docid": "6f0f034cc0add413fca2f08c229cff09",
"text": "This paper describes the main features of the Italian Metaphor Database, buing built at the University of Perugia (Italy). The database is being developed as a resource to be used both as a knowledge base on conceptual metaphors in Italian and their lexical expressions, and to enrich general lexical resources. The reason to develop such a database is that most NLP systems have to deal with metaphorical expressions sooner or later but, as previous research has shown, existing lexical resources for Italian do not contain complete and consistent data on metaphors, empirically derived but theoretically motivated. Thus, by referring to the Cognitive Theory of metaphor, conceptual metaphors instantiated in Italian are being represented in the resource, together with data on the way they are expressed in the language (i.e., through lexical units or multiword expressions), examples of them found within a corpus, and data on metaphorical linguistic expressions encoded/missing within ItalWordNet.",
"title": ""
},
{
"docid": "c9e2d6922436a70e4ab0f7d4f3133f55",
"text": "The inverse kinematics problem of robot manipulators is solved analytically in order to have complete and simple solutions to the problem. This approach is also called as a closed form solution of robot inverse kinematics problem. In this paper, the inverse kinematics of sixteen industrial robot manipulators classified by Huang and Milenkovic were solved in closed form. Each robot manipulator has an Euler wrist whose three axes intersect at a common point. Basically, five trigonometric equations were used to solve the inverse kinematics problems. Robot manipulators can be mainly divided into four different group based on the joint structure. In this work, the inverse kinematics solutions of SN (cylindrical robot with dome), CS (cylindrical robot), NR (articulated robot) and CC (selectively compliant assembly robot arm-SCARA, Type 2) robot manipulator belonging to each group mentioned above are given as an example. The number of the inverse kinematics solutions for the other robot manipulator was also summarized in a table.",
"title": ""
},
{
"docid": "7d687eb0a853c2faed5d4109f3cdb023",
"text": "This paper presents a new method for vehicle logo detection and recognition from images of front and back views of vehicle. The proposed method is a two-stage scheme which combines Convolutional Neural Network (CNN) and Pyramid of Histogram of Gradient (PHOG) features. CNN is applied as the first stage for candidate region detection and recognition of the vehicle logos. Then, PHOG with Support Vector Machine (SVM) classifier is employed in the second stage to verify the results from the first stage. Experiments are performed with dataset of vehicle images collected from internet. The results show that the proposed method can accurately locate and recognize the vehicle logos with higher robustness in comparison with the other conventional schemes. The proposed methods can provide up to 100% in recall, 96.96% in precision and 99.99% in recognition rate in dataset of 20 classes of the vehicle logo.",
"title": ""
},
{
"docid": "c515c780d32f051f75de8a06aedc7d1a",
"text": "Science and technologies based on terahertz frequency electromagnetic radiation (100 GHz–30 THz) have developed rapidly over the last 30 years. For most of the 20th Century, terahertz radiation, then referred to as sub-millimeter wave or far-infrared radiation, was mainly utilized by astronomers and some spectroscopists. Following the development of laser based terahertz time-domain spectroscopy in the 1980s and 1990s the field of THz science and technology expanded rapidly, to the extent that it now touches many areas from fundamental science to ‘real world’ applications. For example THz radiation is being used to optimize materials for new solar cells, and may also be a key technology for the next generation of airport security scanners. While the field was emerging it was possible to keep track of all new developments, however now the field has grown so much that it is increasingly difficult to follow the diverse range of new discoveries and applications that are appearing. At this point in time, when the field of THz science and technology is moving from an emerging to a more established and interdisciplinary field, it is apt to present a roadmap to help identify the breadth and future directions of the field. The aim of this roadmap is to present a snapshot of the present state of THz science and technology in 2017, and provide an opinion on the challenges and opportunities that the future holds. To be able to achieve this aim, we have invited a group of international experts to write 18 sections that cover most of the key areas of THz science and technology. We hope that The 2017 Roadmap on THz science and technology will prove to be a useful resource by providing a wide ranging introduction to the capabilities of THz radiation for those outside or just entering the field as well as providing perspective and breadth for those who are well established. We also feel that this review should serve as a useful guide for government and funding agencies.",
"title": ""
},
{
"docid": "3e8fc64f5e9983a6c0092fa22be0bca9",
"text": "We describe ANGELINA3, a system that can automatically develop games along a defined theme, by selecting appropriate multimedia content from a variety of sources and incorporating it into a game’s design. We discuss these capabilities in the context of the FACE model for assessing progress in the building of creative systems, and discuss how ANGELINA3 can be improved through further work. The design of videogames is both a technical and an aesthetic task, and a holistic approach is necessary when constructing systems which aim to automate the process. Systems previously demonstrated as automated game designers have been shown to tackle, in a basic way, many of the technical tasks associated with game design including level creation and ruleset design, for both simple arcade-style games (Cook and Colton 2011a) and platform games (Cook and Colton 2012). However, in such systems the art, sound and theme are chosen by a human. This weakens the claim that these systems automate the process of game design. Today, people play videogames for many reasons beyond simply the challenge they offer. Dan Pinchbeck’s experiment in narrative technique Dear Esther1 enjoyed 50,000 sales in its first week2, while Jenova Chen’s Flower3 has been used in a church in the UK as part of a service of worship, with one attendee describing the game as ‘spiritual’4. Automating the design of games that carry emotional weight or attempt to convey a complex meaning is a compelling research problem that lies at the intersection of game design theory and Computational Creativity, and is almost entirely unexplored. ANGELINA, A Novel Game-Evolving Labrat I’ve Named ANGELINA, is a system for investigating the automation of simple videogame design. We describe here a first step for the latest version of the software, ANGELINA3, towards producing a system that not only takes on the technical task of game and level design, but also independently selects and arranges visual and aural media as part of the deCopyright c 2012, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. The Chinese Room, 2012 Dear Esther surpasses 50,000 sales http://bit.ly/esthsale http://thatgamecompany.com/games/flower, 2012 Cathedral uses game in church service http://bit.ly/flowcat sign process, to achieve a creative and artistic goal in the finished game. Our long term goal is to develop a fully automated creative videogame design system. This paper reports our progress towards this goal, in which we describe the third iteration of the ANGELINA3 system and employ the FACE model (Colton, Charnley, and Pease 2011) of evaluation from Computational Creativity to argue that ANGELINA3 is more creative than an earlier version of the software. We make the following contributions: 1. We describe an automated videogame design system, ANGELINA3, which is able to generate conceptual information gleaned from news articles, form aesthetic evaluations of a particular concept, invent example videogames which express these concepts, and generate its own framing information about its products and processes. 2. We demonstrate the use of evaluation criteria from Computational Creativity to game design systems, and use it to argue that our system has progressed in terms of creativity since a previously described version of the software. 
The remainder of this paper is organised as follows: in the section titled Background we describe the structure of ANGELINA2 and extensions made in ANGELINA3; we then describe the modules that provide the system’s creative abilities; in the Example Games section we give examples of games produced by the system; we then evaluate ANGELINA3 as the system currently stands; in Related Work we outline some existing work in the area and its relation to ANGELINA3; finally we discuss future directions for the project to improve ANGELINA3’s creative abilities and independence as a designer.",
"title": ""
},
{
"docid": "0f42fa1c48b74963cd72f19b508d8b98",
"text": "We present an optimally modified log-spectral amplitude estimator, which minimizes the mean-square error of the log-spectra for speech signals under signal presence uncertainty. We propose an estimator for the a priori signal-to-noise ratio (SNR), and introduce an efficient estimator for the a priori speech absence probability. Speech presence probability is estimated for each frequency bin and each frame by a soft-decision approach, which exploits the strong correlation of speech presence in neighboring frequency bins of consecutive frames. Objective and subjective evaluation confirm superiority in noise suppression and quality of the enhanced speech.",
"title": ""
}
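The record above names the optimally modified log-spectral amplitude (OM-LSA) estimator but gives no formula. The fragment below is a minimal, hedged sketch of the gain rule commonly associated with OM-LSA-style estimators: the classical log-spectral amplitude gain raised to the speech presence probability, blended with a floor gain. The decision-directed a priori SNR estimation and the speech-presence estimation described in the abstract are omitted, and the floor value g_min is an illustrative assumption, not a value from the paper.

```python
import numpy as np
from scipy.special import exp1  # exponential integral E1

def om_lsa_gain(xi, gamma, p, g_min=0.1):
    """Spectral gain for one time-frequency bin.
    xi: a priori SNR, gamma: a posteriori SNR, p: speech presence probability."""
    v = gamma * xi / (1.0 + xi)
    g_lsa = (xi / (1.0 + xi)) * np.exp(0.5 * exp1(v))  # log-spectral amplitude gain
    return (g_lsa ** p) * (g_min ** (1.0 - p))         # attenuate harder when speech is unlikely

# example: a bin with 3 dB a priori SNR, 6 dB a posteriori SNR, presence probability 0.8
g = om_lsa_gain(xi=10 ** 0.3, gamma=10 ** 0.6, p=0.8)
```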
] |
scidocsrr
|
3770571c1c2367eb8dfd087594ff127a
|
An exact algorithm for team orienteering problems
|
[
{
"docid": "47bfe9238083f0948c16d7beeac75155",
"text": "In this paper, we propose a solution procedure for the Elementary Shortest Path Problem with Resource Constraints (ESPPRC). A relaxed version of this problem in which the path does not have to be elementary has been the backbone of a number of solution procedures based on column generation for several important problems, such as vehicle routing and crew-pairing. In many cases relaxing the restriction of an elementary path resulted in optimal solutions in a reasonable computation time. However, for a number of other problems, the elementary path restriction has too much impact on the solution to be relaxed or might even be necessary. We propose an exact solution procedure for the ESPPRC which extends the classical label correcting algorithm originally developed for the relaxed (non-elementary) path version of this problem. We present computational experiments of this algorithm for our specific problem and embedded in a column generation scheme for the classical Vehicle Routing Problem with Time Windows.",
"title": ""
}
] |
[
{
"docid": "df2070a04f13c444e9aa466eaa3d45eb",
"text": "0020-0255/$ see front matter 2012 Elsevier Inc http://dx.doi.org/10.1016/j.ins.2012.08.023 ⇑ Address: Islamic Azad University, Khoy Branch, E-mail addresses: hatamlou@iaukhoy.ac.ir, hatam Nature has always been a source of inspiration. Over the last few decades, it has stimulated many successful algorithms and computational tools for dealing with complex and optimization problems. This paper proposes a new heuristic algorithm that is inspired by the black hole phenomenon. Similar to other population-based algorithms, the black hole algorithm (BH) starts with an initial population of candidate solutions to an optimization problem and an objective function that is calculated for them. At each iteration of the black hole algorithm, the best candidate is selected to be the black hole, which then starts pulling other candidates around it, called stars. If a star gets too close to the black hole, it will be swallowed by the black hole and is gone forever. In such a case, a new star (candidate solution) is randomly generated and placed in the search space and starts a new search. To evaluate the performance of the black hole algorithm, it is applied to solve the clustering problem, which is a NP-hard problem. The experimental results show that the proposed black hole algorithm outperforms other traditional heuristic algorithms for several benchmark datasets. 2012 Elsevier Inc. All rights reserved.",
"title": ""
},
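The record above describes the black hole heuristic only in prose. Below is a minimal, hypothetical NumPy sketch of that update loop as described: the best candidate acts as the black hole, the remaining stars drift toward it, and any star crossing the event horizon is re-initialized at random. The bounds, population size, iteration count, and the horizon radius formula (the black hole's share of total fitness, assuming positive fitness values) are generic assumptions, not details taken from the paper.

```python
import numpy as np

def black_hole_optimize(objective, dim, n_stars=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal sketch of the black hole heuristic for minimization."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    stars = rng.uniform(lo, hi, size=(n_stars, dim))       # candidate solutions ("stars")
    for _ in range(iters):
        fitness = np.apply_along_axis(objective, 1, stars)
        bh_idx = int(np.argmin(fitness))
        bh = stars[bh_idx].copy()                          # best candidate becomes the black hole
        stars += rng.random((n_stars, 1)) * (bh - stars)   # stars drift toward the black hole
        # event horizon radius: the black hole's share of total fitness (assumes positive fitness)
        radius = fitness[bh_idx] / (fitness.sum() + 1e-12)
        absorbed = np.linalg.norm(stars - bh, axis=1) < radius
        absorbed[bh_idx] = False                           # the black hole itself is never re-seeded
        stars[absorbed] = rng.uniform(lo, hi, size=(int(absorbed.sum()), dim))
    fitness = np.apply_along_axis(objective, 1, stars)
    return stars[np.argmin(fitness)]

# usage: minimize the 5-dimensional sphere function
best = black_hole_optimize(lambda x: float(np.sum(x ** 2)), dim=5)
```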
{
"docid": "25eee8be0a4e4e5dd29fe31ccc902b77",
"text": "3D printing technology can produce complex objects directly from computer aided digital designs. The technology has traditionally been used by large companies to produce fit and form concept prototypes ('rapid prototyping') before production. In recent years however there has been a move to adopt the technology as full-scale manufacturing solution. The advent of low-cost, desktop 3D printers such as the RepRap and Fab@Home has meant a wider user base are now able to have access to desktop manufacturing platforms enabling them to produce highly customised products for personal use and sale. This uptake in usage has been coupled with a demand for printing technology and materials able to print functional elements such as electronic sensors. Here we present formulation of a simple conductive thermoplastic composite we term 'carbomorph' and demonstrate how it can be used in an unmodified low-cost 3D printer to print electronic sensors able to sense mechanical flexing and capacitance changes. We show how this capability can be used to produce custom sensing devices and user interface devices along with printed objects with embedded sensing capability. This advance in low-cost 3D printing with offer a new paradigm in the 3D printing field with printed sensors and electronics embedded inside 3D printed objects in a single build process without requiring complex or expensive materials incorporating additives such as carbon nanotubes.",
"title": ""
},
{
"docid": "bda892eb6cdcc818284f56b74c932072",
"text": "In this paper, a low power and low jitter 12-bit CMOS digitally controlled oscillator (DCO) design is presented. The CMOS DCO is designed based on a ring oscillator implemented with Schmitt trigger based inverters. Simulations of the proposed DCO using 32 nm CMOS predictive transistor model (PTM) achieves controllable frequency range of 570 MHz~850 MHz with a wide linearity. Monte Carlo simulation demonstrates that the time-period jitter due to random power supply fluctuation is under 75 ps and the power consumption is 2.3 mW at 800 MHz with 0.9 V power supply.",
"title": ""
},
{
"docid": "635ef4eb79aeea85f58676334c16be71",
"text": "We propose a deep learning framework for modeling complex high-dimensional densities via Nonlinear Independent Component Estimation (NICE). It is based on the idea that a good representation is one in which the data has a distribution that is easy to model. For this purpose, a non-linear deterministic transformation of the data is learned that maps it to a latent space so as to make the transformed data conform to a factorized distribution, i.e., resulting in independent latent variables. We parametrize this transformation so that computing the determinant of the Jacobian and inverse Jacobian is trivial, yet we maintain the ability to learn complex non-linear transformations, via a composition of simple building blocks, each based on a deep neural network. The training criterion is simply the exact log-likelihood, which is tractable, and unbiased ancestral sampling is also easy. We show that this approach yields good generative models on four image datasets and can be used for inpainting.",
"title": ""
},
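The NICE abstract above relies on transformations whose Jacobian determinant is trivial to compute. A minimal sketch of one such building block, an additive coupling layer, is shown below: the input is split in two, one half passes through unchanged, and the other half is shifted by an arbitrary function of the first, so the Jacobian is triangular with unit determinant and the inverse is exact. The small two-layer MLP used as the coupling function is an illustrative assumption, not the architecture from the paper.

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    # arbitrary coupling function m(.); its form does not affect invertibility
    return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

def coupling_forward(x, params):
    # split the input in two: y1 = x1, y2 = x2 + m(x1)
    d = x.shape[1] // 2
    x1, x2 = x[:, :d], x[:, d:]
    return np.concatenate([x1, x2 + mlp(x1, *params)], axis=1)

def coupling_inverse(y, params):
    # exact inverse: x1 = y1, x2 = y2 - m(y1); log|det J| = 0
    d = y.shape[1] // 2
    y1, y2 = y[:, :d], y[:, d:]
    return np.concatenate([y1, y2 - mlp(y1, *params)], axis=1)

# quick check that the layer is exactly invertible
rng = np.random.default_rng(0)
params = (rng.normal(size=(2, 8)), np.zeros(8), rng.normal(size=(8, 2)), np.zeros(2))
x = rng.normal(size=(4, 4))
assert np.allclose(coupling_inverse(coupling_forward(x, params), params), x)
```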
{
"docid": "55a37995369fe4f8ddb446d83ac0cecf",
"text": "With the continued proliferation of smart mobile devices, Quick Response (QR) code has become one of the most-used types of two-dimensional code in the world. Aiming at beautifying the visual-unpleasant appearance of QR codes, existing works have developed a series of techniques. However, these works still leave much to be desired, such as personalization, artistry, and robustness. To address these issues, in this paper, we propose a novel type of aesthetic QR codes, SEE (Stylize aEsthEtic) QR code, and a three-stage approach to automatically produce such robust style-oriented codes. Specifically, in the first stage, we propose a method to generate an optimized baseline aesthetic QR code, which reduces the visual contrast between the noise-like black/white modules and the blended image. In the second stage, to obtain art style QR code, we tailor an appropriate neural style transformation network to endow the baseline aesthetic QR code with artistic elements. In the third stage, we design an error-correction mechanism by balancing two competing terms, visual quality and readability, to ensure the performance robust. Extensive experiments demonstrate that SEE QR code has high quality in terms of both visual appearance and robustness, and also offers a greater variety of personalized choices to users.",
"title": ""
},
{
"docid": "8726e80818f0619f5157ad2295dee7df",
"text": "The OptaSense® Distributed Acoustic Sensing (DAS) system is an acoustic and seismic sensing capability that uses simple fibre optic communications cables as the sensor. Using existing or new cables, it can provide low-cost and high-reliability surface crossing and tunnel construction detection, with power and communications services needed only every 80-100 km. The technology has been proven in worldwide security operations at over one hundred locations in a variety of industries including oil and gas pipelines, railways, and high-value facility perimeters - a total of 100,000,000 kilometre-hours of linear asset protection. The system reliably detects a variety of border threats with very few nuisance alarms. It can work in concert with existing border surveillance technologies to provide security personnel a new value proposition for fighting trans-border crime. Its ability to detect, classify and locate activity over hundreds of kilometres and provide information in an accurate and actionable way has proven OptaSense to be a cost-effective solution for monitoring long borders. It has been scaled to cover 1500 km controlled by a single central monitoring station in pipeline applications.",
"title": ""
},
{
"docid": "931c75847fdfec787ad6a31a6568d9e3",
"text": "This paper introduces concepts and algorithms of feature selection, surveys existing feature selection algorithms for classification and clustering, groups and compares different algorithms with a categorizing framework based on search strategies, evaluation criteria, and data mining tasks, reveals unattempted combinations, and provides guidelines in selecting feature selection algorithms. With the categorizing framework, we continue our efforts toward-building an integrated system for intelligent feature selection. A unifying platform is proposed as an intermediate step. An illustrative example is presented to show how existing feature selection algorithms can be integrated into a meta algorithm that can take advantage of individual algorithms. An added advantage of doing so is to help a user employ a suitable algorithm without knowing details of each algorithm. Some real-world applications are included to demonstrate the use of feature selection in data mining. We conclude this work by identifying trends and challenges of feature selection research and development.",
"title": ""
},
{
"docid": "5a9209f792ddd738d44f17b1175afe64",
"text": "PURPOSE\nIncrease in muscle force, endurance, and flexibility is desired in elite athletes to improve performance and to avoid injuries, but it is often hindered by the occurrence of myofascial trigger points. Dry needling (DN) has been shown effective in eliminating myofascial trigger points.\n\n\nMETHODS\nThis randomized controlled study in 30 elite youth soccer players of a professional soccer Bundesliga Club investigated the effects of four weekly sessions of DN plus water pressure massage on thigh muscle force and range of motion of hip flexion. A group receiving placebo laser plus water pressure massage and a group with no intervention served as controls. Data were collected at baseline (M1), treatment end (M2), and 4 wk follow-up (M3). Furthermore, a 5-month muscle injury follow-up was performed.\n\n\nRESULTS\nDN showed significant improvement of muscular endurance of knee extensors at M2 (P = 0.039) and M3 (P = 0.008) compared with M1 (M1:294.6 ± 15.4 N·m·s, M2:311 ± 25 N·m·s; M3:316.0 ± 28.6 N·m·s) and knee flexors at M2 compared with M1 (M1:163.5 ± 10.9 N·m·s, M2:188.5 ± 16.3 N·m·s) as well as hip flexion (M1: 81.5° ± 3.3°, M2:89.8° ± 2.8°; M3:91.8° ± 3.8°). Compared with placebo (3.8° ± 3.8°) and control (1.4° ± 2.9°), DN (10.3° ± 3.5°) showed a significant (P = 0.01 and P = 0.0002) effect at M3 compared with M1 on hip flexion; compared with nontreatment control (-10 ± 11.9 N·m), DN (5.2 ± 10.2 N·m) also significantly (P = 0.049) improved maximum force of knee extensors at M3 compared with M1. During the rest of the season, muscle injuries were less frequent in the DN group compared with the control group.\n\n\nCONCLUSION\nDN showed a significant effect on muscular endurance and hip flexion range of motion that persisted 4 wk posttreatment. Compared with placebo, it showed a significant effect on hip flexion that persisted 4 wk posttreatment, and compared with nonintervention control, it showed a significant effect on maximum force of knee extensors 4 wk posttreatment in elite soccer players.",
"title": ""
},
{
"docid": "0eb98d2e5d7e3c46e1ae830c73008fd4",
"text": "Twitter, the most famous micro-blogging service and online social network, collects millions of tweets every day. Due to the length limitation, users usually need to explore other ways to enrich the content of their tweets. Some studies have provided findings to suggest that users can benefit from added hyperlinks in tweets. In this paper, we focus on the hyperlinks in Twitter and propose a new application, called hyperlink recommendation in Twitter. We expect that the recommended hyperlinks can be used to enrich the information of user tweets. A three-way tensor is used to model the user-tweet-hyperlink collaborative relations. Two tensor-based clustering approaches, tensor decomposition-based clustering (TDC) and tensor approximation-based clustering (TAC) are developed to group the users, tweets and hyperlinks with similar interests, or similar contexts. Recommendation is then made based on the reconstructed tensor using cluster information. The evaluation results in terms of Mean Absolute Error (MAE) shows the advantages of both the TDC and TAC approaches over a baseline recommendation approach, i.e., memory-based collaborative filtering. Comparatively, the TAC approach achieves better performance than the TDC approach.",
"title": ""
},
{
"docid": "f0505768d42cd9da66520ae380447ab3",
"text": "This article demonstrates that convolutional operation can be converted to matrix multiplication, which has the same calculation way with fully connected layer. The article is helpful for the beginners of the neural network to understand how fully connected layer and the convolutional layer work in the backend. To be concise and to make the article more readable, we only consider the linear case. It can be extended to the non-linear case easily through plugging in a non-linear encapsulation to the values like this σ(x) denoted as x′.",
"title": ""
},
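The abstract above states the convolution-to-matrix-multiplication conversion only in prose. The sketch below shows the standard im2col construction for the single-channel, stride-1, no-padding case: every sliding window is flattened into a row, so the whole (cross-correlation style) convolution becomes one matrix product, which is exactly the operation a fully connected layer performs. The helper names and shapes are illustrative assumptions.

```python
import numpy as np

def im2col(x, kh, kw):
    """Flatten every kh x kw window of a 2-D input into one row."""
    H, W = x.shape
    out_h, out_w = H - kh + 1, W - kw + 1
    cols = np.empty((out_h * out_w, kh * kw))
    for i in range(out_h):
        for j in range(out_w):
            cols[i * out_w + j] = x[i:i + kh, j:j + kw].ravel()
    return cols

def conv2d_as_matmul(x, kernel):
    kh, kw = kernel.shape
    out_h, out_w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    # convolution (as used in CNNs) = matrix product of im2col rows with the flattened kernel
    return (im2col(x, kh, kw) @ kernel.ravel()).reshape(out_h, out_w)

# sanity check against a direct sliding-window computation on a 5x5 input, 3x3 kernel
rng = np.random.default_rng(0)
x, k = rng.normal(size=(5, 5)), rng.normal(size=(3, 3))
direct = np.array([[np.sum(x[i:i + 3, j:j + 3] * k) for j in range(3)] for i in range(3)])
assert np.allclose(conv2d_as_matmul(x, k), direct)
```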
{
"docid": "342bcd2509b632480c4f4e8059cfa6a1",
"text": "This paper introduces the design and development of a novel axial-flux permanent magnet generator (PMG) using a printed circuit board (PCB) stator winding. This design has the mechanical rigidity, high efficiency and zero cogging torque required for a low speed water current turbine. The PCB stator has simplified the design and construction and avoids any slip rings. The flexible PCB winding represents an ultra thin electromagnetic exciting source where coils are wound in a wedge shape. The proposed multi-poles generator can be used for various low speed applications especially in small marine current energy conversion systems.",
"title": ""
},
{
"docid": "68b7c94a2efb0fefd6ad3d74a08edf87",
"text": "Innovations like domain-specific hardware, enhanced security, open instruction sets, and agile chip development will lead the way.",
"title": ""
},
{
"docid": "07fc203735e9da22e0dc49c4a1153db0",
"text": "The implementation, diffusion and adoption of e-government in the public sector has been a topic that has been debated by the research community for some time. In particular, the limited adoption of e-government services is attributed to factors such as the heterogeneity of users, lack of user-orientation, the limited transformation of public sector and the mismatch between expectations and supply. In this editorial, we review theories and factors impacting implementation, diffusion and adoption of e-government. Most theories used in prior research follow mainstream information systems concepts, which can be criticized for not taking into account e-government specific characteristics. The authors argue that there is a need for e-government specific theories and methodologies that address the idiosyncratic nature of e-government as the well-known information systems concepts that are primarily developed for business contexts are not equipped to encapsulate the complexities surrounding e-government. Aspects like accountability, digital divide, legislation, public governance, institutional complexity and citizens' needs are challenging issues that have to be taken into account in e-government theory and practices. As such, in this editorial we argue that e-government should develop as an own strand of research, while information systems theories and concepts should not be neglected.",
"title": ""
},
{
"docid": "ef92244350e267d3b5b9251d496e0ee2",
"text": "A review of recent advances in power wafer level electronic packaging is presented based on the development of power device integration. The paper covers in more detail how advances in both semiconductor content and power advanced wafer level package design and materials have co-enabled significant advances in power device capability during recent years. Extrapolating the same trends in representative areas for the remainder of the decade serves to highlight where further improvement in materials and techniques can drive continued enhancements in usability, efficiency, reliability and overall cost of power semiconductor solutions. Along with next generation wafer level power packaging development, the role of modeling is a key to assure successful package design. An overview of the power package modeling is presented. Challenges of wafer level power semiconductor packaging and modeling in both next generation design and assembly processes are presented and discussed. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fac9465df30dd5d9ba5bc415b2be8172",
"text": "In the Railway System, Railway Signalling System is the vital control equipment responsible for the safe operation of trains. In Railways, the system of communication from railway stations and running trains is by the means of signals through wired medium. Once the train leaves station, there is no communication between the running train and the station or controller. Hence, in case of failures or in emergencies in between stations, immediate information cannot be given and a particular problem will escalate with valuable time lost. Because of this problem only a single train can run in between two nearest stations. Now a days, Railway all over the world is using Optical Fiber cable for communication between stations and to send signals to trains. The usage of optical fibre cables does not lend itself for providing trackside communication as in the case of copper cable. Hence, another transmission medium is necessary for communication outside the station limits with drivers, guards, maintenance gangs, gateman etc. Obviously the medium of choice for such communication is wireless. With increasing speed and train density, adoption of train control methods such as Automatic warning system, (AWS) or, Automatic train stop (ATS), or Positive train separation (PTS) is a must. Even though, these methods traditionally pick up their signals from track based beacons, Wireless Sensor Network based systems will suit the Railways much more. In this paper, we described a new and innovative medium for railways that is Wireless Sensor Network (WSN) based Railway Signalling System and conclude that Introduction of WSN in Railways will not only achieve economy but will also improve the level of safety and efficiency of train operations.",
"title": ""
},
{
"docid": "f10996698f2596de3ca7436a82e8c326",
"text": "Hybrid multiple-antenna transceivers, which combine large-dimensional analog pre/postprocessing with lower-dimensional digital processing, are the most promising approach for reducing the hardware cost and training overhead in massive MIMO systems. This article provides a comprehensive survey of the various incarnations of such structures that have been proposed in the literature. We provide a taxonomy in terms of the required channel state information, that is, whether the processing adapts to the instantaneous or average (second-order) channel state information; while the former provides somewhat better signal- to-noise and interference ratio, the latter has much lower overhead for CSI acquisition. We furthermore distinguish hardware structures of different complexities. Finally, we point out the special design aspects for operation at millimeter-wave frequencies.",
"title": ""
},
{
"docid": "a6bc752bd6a4fc070fa01a5322fb30a1",
"text": "The formulation of a generalized area-based confusion matrix for exploring the accuracy of area estimates is presented. The generalized confusion matrix is appropriate for both traditional classi cation algorithms and sub-pixel area estimation models. An error matrix, derived from the generalized confusion matrix, allows the accuracy of maps generated using area estimation models to be assessed quantitatively and compared to the accuracies obtained from traditional classi cation techniques. The application of this approach is demonstrated for an area estimation model applied to Landsat data of an urban area of the United Kingdom.",
"title": ""
},
{
"docid": "4d9312d22dcc37933d0108fbfacd1c38",
"text": "This study focuses on the use of different types of shear reinforcement in the reinforced concrete beams. Four different types of shear reinforcement are investigated; traditional stirrups, welded swimmer bars, bolted swimmer bars, and u-link bolted swimmer bars. Beam shear strength as well as beam deflection are the main two factors considered in this study. Shear failure in reinforced concrete beams is one of the most undesirable modes of failure due to its rapid progression. This sudden type of failure made it necessary to explore more effective ways to design these beams for shear. The reinforced concrete beams show different behavior at the failure stage in shear compare to the bending, which is considered to be unsafe mode of failure. The diagonal cracks that develop due to excess shear forces are considerably wider than the flexural cracks. The cost and safety of shear reinforcement in reinforced concrete beams led to the study of other alternatives. Swimmer bar system is a new type of shear reinforcement. It is a small inclined bars, with its both ends bent horizontally for a short distance and welded or bolted to both top and bottom flexural steel reinforcement. Regardless of the number of swimmer bars used in each inclined plane, the swimmer bars form plane-crack interceptor system instead of bar-crack interceptor system when stirrups are used. Several reinforced concrete beams were carefully prepared and tested in the lab. The results of these tests will be presented and discussed. The deflection of each beam is also measured at incrementally increased applied load.",
"title": ""
},
{
"docid": "034f6044eda34a00c64db60fb4144eb6",
"text": "Motivation\nDiffusion-based network models are widely used for protein function prediction using protein network data and have been shown to outperform neighborhood-based and module-based methods. Recent studies have shown that integrating the hierarchical structure of the Gene Ontology (GO) data dramatically improves prediction accuracy. However, previous methods usually either used the GO hierarchy to refine the prediction results of multiple classifiers, or flattened the hierarchy into a function-function similarity kernel. No study has taken the GO hierarchy into account together with the protein network as a two-layer network model.\n\n\nResults\nWe first construct a Bi-relational graph (Birg) model comprised of both protein-protein association and function-function hierarchical networks. We then propose two diffusion-based methods, BirgRank and AptRank, both of which use PageRank to diffuse information on this two-layer graph model. BirgRank is a direct application of traditional PageRank with fixed decay parameters. In contrast, AptRank utilizes an adaptive diffusion mechanism to improve the performance of BirgRank. We evaluate the ability of both methods to predict protein function on yeast, fly and human protein datasets, and compare with four previous methods: GeneMANIA, TMC, ProteinRank and clusDCA. We design four different validation strategies: missing function prediction, de novo function prediction, guided function prediction and newly discovered function prediction to comprehensively evaluate predictability of all six methods. We find that both BirgRank and AptRank outperform the previous methods, especially in missing function prediction when using only 10% of the data for training.\n\n\nAvailability and Implementation\nThe MATLAB code is available at https://github.rcac.purdue.edu/mgribsko/aptrank .\n\n\nContact\ngribskov@purdue.edu.\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online.",
"title": ""
}
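BirgRank, as summarized above, diffuses information with PageRank over a two-layer graph built from a protein network, a GO hierarchy, and protein-function annotations. The fragment below is only a hedged sketch of that general idea: it stacks the three matrices into one block adjacency matrix and runs personalized PageRank with restart on it. The block layout, column normalization, and decay value alpha are generic assumptions and are not taken from the paper's actual formulation.

```python
import numpy as np

def birg_style_pagerank(G, H, R, seed_vec, alpha=0.85, iters=100):
    """Personalized PageRank on a two-layer (bi-relational) graph.
    G: protein-protein adjacency (n x n), H: function-function hierarchy (m x m),
    R: protein-function annotations (n x m), seed_vec: restart distribution (n + m,)."""
    A = np.block([[G, R], [R.T, H]]).astype(float)  # combined two-layer adjacency
    col_sums = A.sum(axis=0)
    col_sums[col_sums == 0] = 1.0                   # leave dangling columns as all-zero
    P = A / col_sums                                # column-normalized transition matrix
    s = seed_vec / seed_vec.sum()
    r = s.copy()
    for _ in range(iters):
        r = alpha * (P @ r) + (1 - alpha) * s       # diffusion with restart
    return r                                        # scores over proteins and functions

# toy usage: 3 proteins, 2 functions, restart on protein 0
G = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
H = np.array([[0, 1], [1, 0]])
R = np.array([[1, 0], [0, 1], [0, 0]])
seed = np.zeros(5)
seed[0] = 1.0
scores = birg_style_pagerank(G, H, R, seed)
```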
] |
scidocsrr
|
71e92fac5500ae6f83cd1b7e18112be6
|
Design of a Cascoded Operational Amplifier with High Gain
|
[
{
"docid": "1884e92beb10bb653af5b8efa967e92d",
"text": "Presents an overview of current design techniques for operational amplifiers implemented in CMOS and NMOS technology at a tutorial level. Primary emphasis is placed on CMOS amplifiers because of their more widespread use. Factors affecting voltage gain, input noise, offsets, common mode and power supply rejection, power dissipation, and transient response are considered for the traditional bipolar-derived two-stage architecture. Alternative circuit approaches for optimization of particular performance aspects are summarized, and examples are given.",
"title": ""
}
] |
[
{
"docid": "c6741791b8685beb3eee1c721dcc255b",
"text": "In on-line search and display advertising, the click-trough rate (CTR) has been traditionally a key measure of ad/campaign effectiveness. More recently, the market has gained interest in more direct measures of profitability, one early alternative is the conversion rate (CVR). CVRs measure the proportion of certain users who take a predefined, desirable action, such as a purchase, registration, download, etc.; as compared to simply page browsing. We provide a detailed analysis of conversion rates in the context of non-guaranteed delivery targeted advertising. In particular we focus on the post-click conversion (PCC) problem or the analysis of conversions after a user click on a referring ad. The key elements we study are the probability of a conversion given a click in a user/page context, P(conversion | click, context). We provide various fundamental properties of this process based on contextual information, formalize the problem of predicting PCC, and propose an approach for measuring attribute relevance when the underlying attribute distribution is non-stationary. We provide experimental analyses based on logged events from a large-scale advertising platform.",
"title": ""
},
{
"docid": "52cb98b269597ca840b74215116f4e45",
"text": "The ubiquity of mobile devices has drawn new attention to the field of electronic government. Literature studies report on the significance of m-government, including its motivation, success, and failure in developed and developing countries. However, research on the design of m-government applications is still scarce. Design approaches in the literature lack a comprehensive way of addressing m-government challenges. This paper aims to (1) identify challenges of m-government in developed and developing countries and (2) investigate approaches used for designing m-government applications. The challenges are categorised based on the factors of PESTELMO and are further examined to identify requirements for suitable m-government design. Design approaches are analysed by the Content, Context and Process (CCP) framework and are examined to identify requirements, methods and guidelines addressed. The paper finally outlines research needs for a comprehensive design framework for m-government solutions and presents initial requirements for the framework.",
"title": ""
},
{
"docid": "2e088ce4f7e5b3633fa904eab7563875",
"text": "Large numbers of websites have started to markup their content using standards such as Microdata, Microformats, and RDFa. The marked-up content elements comprise descriptions of people, organizations, places, events, products, ratings, and reviews. This development has accelerated in last years as major search engines such as Google, Bing and Yahoo! use the markup to improve their search results. Embedding semantic markup facilitates identifying content elements on webpages. However, the markup is mostly not as fine-grained as desirable for applications that aim to integrate data from large numbers of websites. This paper discusses the challenges that arise in the task of integrating descriptions of electronic products from several thousand e-shops that offer Microdata markup. We present a solution for each step of the data integration process including Microdata extraction, product classification, product feature extraction, identity resolution, and data fusion. We evaluate our processing pipeline using 1.9 million product offers from 9240 e-shops which we extracted from the Common Crawl 2012, a large public Web corpus.",
"title": ""
},
{
"docid": "3298ecc4169ceb0bc6352b3689f65642",
"text": "The need to disinfect a patient's skin before subcutaneous or intramuscular injection is a much debated practice. Guidance on this issue varies between NHS organisations that provide primary and secondary care. However, with patients being increasingly concerned with healthcare-associated infections, a general consensus needs to be reached whereby this practice is either rejected or made mandatory.",
"title": ""
},
{
"docid": "d42bdb401ccdd416808bb91e5025f379",
"text": "Blockchain technology has evolved from being an immutable ledger of transactions for cryptocurrencies to a programmable interactive environment for building distributed reliable applications. Although, blockchain technology has been used to address various challenges, to our knowledge none of the previous work focused on using blockchain to develop a secure and immutable scientific data provenance management framework that automatically verifies the provenance records. In this work, we leverage blockchain as a platform to facilitate trustworthy data provenance collection, verification and management. The developed system utilizes smart contracts and open provenance model (OPM) to record immutable data trails. We show that our proposed framework can efficiently and securely capture and validate provenance data, and prevent any malicious modification to the captured data as long as majority of the participants are honest.",
"title": ""
},
{
"docid": "0801ef431c6e4dab6158029262a3bf82",
"text": "A hallmark of human intelligence is the ability to ask rich, creative, and revealing questions. Here we introduce a cognitive model capable of constructing humanlike questions. Our approach treats questions as formal programs that, when executed on the state of the world, output an answer. The model specifies a probability distribution over a complex, compositional space of programs, favoring concise programs that help the agent learn in the current context. We evaluate our approach by modeling the types of open-ended questions generated by humans who were attempting to learn about an ambiguous situation in a game. We find that our model predicts what questions people will ask, and can creatively produce novel questions that were not present in the training set. In addition, we compare a number of model variants, finding that both question informativeness and complexity are important for producing human-like questions.",
"title": ""
},
{
"docid": "423d8264602c19c313c044fcf08c0717",
"text": "Since the last two decades, XML has gained momentum as the standard for web information management and complex data representation. Also, collaboratively built semi-structured information resources, such as Wikipedia, have become prevalent on the Web and can be inherently encoded in XML. Yet most methods for processing XML and semi-structured information handle mainly the syntactic properties of the data, while ignoring the semantics involved. To devise more intelligent applications, one needs to augment syntactic features with machine-readable semantic meaning. This can be achieved through the computational identification of the meaning of data in context, also known as (a.k.a.) automated semantic analysis and disambiguation, which is nowadays one of the main challenges at the core of the Semantic Web. This survey paper provides a concise and comprehensive review of the methods related to XML-based semi-structured semantic analysis and disambiguation. It is made of four logical parts. First, we briefly cover traditional word sense disambiguation methods for processing flat textual data. Second, we describe and categorize disambiguation techniques developed and extended to handle semi-structured and XML data. Third, we describe current and potential application scenarios that can benefit from XML semantic analysis, including: data clustering and semantic-aware indexing, data integration and selective dissemination, semantic-aware and temporal querying, web and mobile services matching and composition, blog and social semantic network analysis, and ontology learning. Fourth, we describe and discuss ongoing challenges and future directions, including: the quantification of semantic ambiguity, expanding XML disambiguation context, combining structure and content, using collaborative/social information sources, integrating explicit and implicit semantic analysis, emphasizing user involvement, and reducing computational complexity.",
"title": ""
},
{
"docid": "3770720cff3a36596df097835f4f10a9",
"text": "As mobile computing technologies have been more powerful and inclusive in people’s daily life, the issue of mobile assisted language learning (MALL) has also been widely explored in CALL research. Many researches on MALL consider the emerging mobile technologies have considerable potentials for the effective language learning. This review study focuses on the investigation of newly emerging mobile technologies and their pedagogical applications for language teachers and learners. Recent research or review on mobile assisted language learning tends to focus on more detailed applications of newly emerging mobile technology, rather than has given a broader point focusing on types of mobile device itself. In this paper, I thus reviewed recent research and conference papers for the last decade, which utilized newly emerging and integrated mobile technology. Its pedagogical benefits and challenges are discussed.",
"title": ""
},
{
"docid": "7a18b4e266cb353e523addfacbdf5bdf",
"text": "The field of image composition is constantly trying to improve the ways in which an image can be altered and enhanced. While this is usually done in the name of aesthetics and practicality, it also provides tools that can be used to maliciously alter images. In this sense, the field of digital image forensics has to be prepared to deal with the influx of new technology, in a constant arms-race. In this paper, the current state of this armsrace is analyzed, surveying the state-of-the-art and providing means to compare both sides. A novel scale to classify image forensics assessments is proposed, and experiments are performed to test composition techniques in regards to different forensics traces. We show that even though research in forensics seems unaware of the advanced forms of image composition, it possesses the basic tools to detect it.",
"title": ""
},
{
"docid": "68a1c316c50258f924d28f1a2906271c",
"text": "Market segmentation is one of the most important area of knowledge-based marketing. In banks, it is really a challenging task as data bases are large and multidimensional. In the paper we consider cluster analysis, which is the methodology, the most often applied in this area. We compare clustering algorithms in cases of high dimensionality with noise. We discuss using three algorithms: density based DBSCAN, k-means and based on it two-phase clustering process. We compare algorithms concerning their effectiveness and scalability. Some experiments with exemplary bank data sets are presented.",
"title": ""
},
{
"docid": "b04ae75e4f444b97976962a397ac413c",
"text": "In this paper the new topology DC/DC Boost power converter-inverter-DC motor that allows bidirectional rotation of the motor shaft is presented. In this direction, the system mathematical model is developed considering its different operation modes. Afterwards, the model validation is performed via numerical simulations by using Matlab-Simulink.",
"title": ""
},
{
"docid": "8f304c738458fa2ccae77b3f222b45ab",
"text": "A vehicular ad hoc network (VANET) serves as an application of the intelligent transportation system that improves traffic safety as well as efficiency. Vehicles in a VANET broadcast traffic and safety-related information used by road safety applications, such as an emergency electronic brake light. The broadcast of these messages in an open-access environment makes security and privacy critical and challenging issues in the VANET. A misuse of this information may lead to a traffic accident and loss of human lives atworse and, therefore, vehicle authentication is a necessary requirement. During authentication, a vehicle’s privacy-related data, such as identity and location information, must be kept private. This paper presents an approach for privacy-preserving authentication in a VANET. Our hybrid approach combines the useful features of both the pseudonym-based approaches and the group signature-based approaches to preclude their respective drawbacks. The proposed approach neither requires a vehicle to manage a certificate revocation list, nor indulges vehicles in any group management. The proposed approach utilizes efficient and lightweight pseudonyms that are not only used for message authentication, but also serve as a trapdoor in order to provide conditional anonymity. We present various attack scenarios that show the resilience of the proposed approach against various security and privacy threats. We also provide analysis of computational and communication overhead to show the efficiency of the proposed technique. In addition, we carry out extensive simulations in order to present a detailed network performance analysis. The results show the feasibility of our proposed approach in terms of end-to-end delay and packet delivery ratio.",
"title": ""
},
{
"docid": "c26a1d7fc8e632e9e7d3ea149bc80ea0",
"text": "Pain associated with integumentary wounds is highly prevalent, yet it remains an area of significant unmet need within health care. Currently, systemically administered opioids are the mainstay of treatment. However, recent publications are casting opioids in a negative light given their high side effect profile, inhibition of wound healing, and association with accidental overdose, incidents that are frequently fatal. Thus, novel analgesic strategies for wound-related pain need to be investigated. The ideal methods of pain relief for wound patients are modalities that are topical, lack systemic side effects, noninvasive, self-administered, and display rapid onset of analgesia. Extracts derived from the cannabis plant have been applied to wounds for thousands of years. The discovery of the human endocannabinoid system and its dominant presence throughout the integumentary system provides a valid and logical scientific platform to consider the use of topical cannabinoids for wounds. We are reporting a prospective case series of three patients with pyoderma gangrenosum that were treated with topical medical cannabis compounded in nongenetically modified organic sunflower oil. Clinically significant analgesia that was associated with reduced opioid utilization was noted in all three cases. Topical medical cannabis has the potential to improve pain management in patients suffering from wounds of all classes.",
"title": ""
},
{
"docid": "6f6cd699a625748522e5e10b6e310e69",
"text": "Research on organizational justice has focused primarily on the receivers of just and unjust treatment. Little is known about why managers adhere to or violate rules of justice in the first place. The authors introduce a model for understanding justice rule adherence and violation. They identify both cognitive motives and affective motives that explain why managers adhere to and violate justice rules. They also draw distinctions among the justice rules by specifying which rules offer managers more or less discretion in their execution. They then describe how motives and discretion interact to influence justice-relevant actions. Finally, the authors incorporate managers' emotional reactions to consider how their actions may change over time. Implications of the model for theory, research, and practice are discussed.",
"title": ""
},
{
"docid": "5ae07e0d3157b62f6d5e0e67d2b7f2ea",
"text": "G. Francis and F. Hermens (2002) used computer simulations to claim that many current models of metacontrast masking can account for the findings of V. Di Lollo, J. T. Enns, and R. A. Rensink (2000). They also claimed that notions of reentrant processing are not necessary because all of V. Di Lollo et al. 's data can be explained by feed-forward models. The authors show that G. Francis and F. Hermens's claims are vitiated by inappropriate modeling of attention and by ignoring important aspects of V. Di Lollo et al. 's results.",
"title": ""
},
{
"docid": "3ee39231fc2fbf3b6295b1b105a33c05",
"text": "We address a text regression problem: given a piece of text, predict a real-world continuous quantity associated with the text’s meaning. In this work, the text is an SEC-mandated financial report published annually by a publiclytraded company, and the quantity to be predicted is volatility of stock returns, an empirical measure of financial risk. We apply wellknown regression techniques to a large corpus of freely available financial reports, constructing regression models of volatility for the period following a report. Our models rival past volatility (a strong baseline) in predicting the target variable, and a single model that uses both can significantly outperform past volatility. Interestingly, our approach is more accurate for reports after the passage of the Sarbanes-Oxley Act of 2002, giving some evidence for the success of that legislation in making financial reports more informative.",
"title": ""
},
{
"docid": "5e806d14356729d7c96dcf2d97ba9c30",
"text": "Recently, a variety of bioactive protein drugs have been available in large quantities as a result of advances in biotechnology. Such availability has prompted development of long-term protein delivery systems. Biodegradable microparticulate systems have been used widely for controlled release of protein drugs for days and months. The most widely used biodegradable polymer has been poly(d,l-lactic-co-glycolic acid) (PLGA). Protein-containing microparticles are usually prepared by the water/oil/water (W/O/W) double emulsion method, and variations of this method, such as solid/oil/water (S/O/W) and water/oil/oil (W/O/O), have also been used. Other methods of preparation include spray drying, ultrasonic atomization, and electrospray methods. The important factors in developing biodegradable microparticles for protein drug delivery are protein release profile (including burst release, duration of release, and extent of release), microparticle size, protein loading, encapsulation efficiency, and bioactivity of the released protein. Many studies used albumin as a model protein, and thus, the bioactivity of the release protein has not been examined. Other studies which utilized enzymes, insulin, erythropoietin, and growth factors have suggested that the right formulation to preserve bioactivity of the loaded protein drug during the processing and storage steps is important. The protein release profiles from various microparticle formulations can be classified into four distinct categories (Types A, B, C, and D). The categories are based on the magnitude of burst release, the extent of protein release, and the protein release kinetics followed by the burst release. The protein loading (i.e., the total amount of protein loaded divided by the total weight of microparticles) in various microparticles is 6.7+/-4.6%, and it ranges from 0.5% to 20.0%. Development of clinically successful long-term protein delivery systems based on biodegradable microparticles requires improvement in the drug loading efficiency, control of the initial burst release, and the ability to control the protein release kinetics.",
"title": ""
},
{
"docid": "3d84f5f8322737bf8c6f440180e07660",
"text": "Incremental Dialog Processing (IDP) enables Spoken Dialog Systems to gradually process minimal units of user speech in order to give the user an early system response. In this paper, we present an application of IDP that shows its effectiveness in a task-oriented dialog system. We have implemented an IDP strategy and deployed it for one month on a real-user system. We compared the resulting dialogs with dialogs produced over the previous month without IDP. Results show that the incremental strategy significantly improved system performance by eliminating long and often off-task utterances that generally produce poor speech recognition results. User behavior is also affected; the user tends to shorten utterances after being interrupted by the system.",
"title": ""
},
{
"docid": "90125582272e3f16a34d5d0c885f573a",
"text": "RNAs have been shown to undergo transfer between mammalian cells, although the mechanism behind this phenomenon and its overall importance to cell physiology is not well understood. Numerous publications have suggested that RNAs (microRNAs and incomplete mRNAs) undergo transfer via extracellular vesicles (e.g., exosomes). However, in contrast to a diffusion-based transfer mechanism, we find that full-length mRNAs undergo direct cell-cell transfer via cytoplasmic extensions characteristic of membrane nanotubes (mNTs), which connect donor and acceptor cells. By employing a simple coculture experimental model and using single-molecule imaging, we provide quantitative data showing that mRNAs are transferred between cells in contact. Examples of mRNAs that undergo transfer include those encoding GFP, mouse β-actin, and human Cyclin D1, BRCA1, MT2A, and HER2. We show that intercellular mRNA transfer occurs in all coculture models tested (e.g., between primary cells, immortalized cells, and in cocultures of immortalized human and murine cells). Rapid mRNA transfer is dependent upon actin but is independent of de novo protein synthesis and is modulated by stress conditions and gene-expression levels. Hence, this work supports the hypothesis that full-length mRNAs undergo transfer between cells through a refined structural connection. Importantly, unlike the transfer of miRNA or RNA fragments, this process of communication transfers genetic information that could potentially alter the acceptor cell proteome. This phenomenon may prove important for the proper development and functioning of tissues as well as for host-parasite or symbiotic interactions.",
"title": ""
},
{
"docid": "cd48c6b722f8e88f0dc514fcb6a0d890",
"text": "Multi-tier data-intensive applications are widely deployed in virtualized data centers for high scalability and reliability. As the response time is vital for user satisfaction, this requires achieving good performance at each tier of the applications in order to minimize the overall latency. However, in such virtualized environments, each tier (e.g., application, database, web) is likely to be hosted by different virtual machines (VMs) on multiple physical servers, where a guest VM is unaware of changes outside its domain, and the hypervisor also does not know the configuration and runtime status of a guest VM. As a result, isolated virtualization domains lend themselves to performance unpredictability and variance. In this paper, we propose IOrchestra, a holistic collaborative virtualization framework, which bridges the semantic gaps of I/O stacks and system information across multiple VMs, improves virtual I/O performance through collaboration from guest domains, and increases resource utilization in data centers. We present several case studies to demonstrate that IOrchestra is able to address numerous drawbacks of the current practice and improve the I/O latency of various distributed cloud applications by up to 31%.",
"title": ""
}
] |
scidocsrr
|
fc9c499eef3971b044d2b305bb5624f7
|
Analysis of time-frequency representations for musical onset detection with convolutional neural network
|
[
{
"docid": "01bb8e6af86aa1545958a411653e014c",
"text": "Estimating the tempo of a musical piece is a complex problem, which has received an increasing amount of attention in the past few years. The problem consists of estimating the number of beats per minute (bpm) at which the music is played and identifying exactly when these beats occur. Commercial devices already exist that attempt to extract a musical instrument digital interface (MIDI) clock from an audio signal, indicating both the tempo and the actual location of the beat. Such MIDI clocks can then be used to synchronize other devices (such as drum machines and audio effects) to the audio source, enabling a new range of \" beat-synchronized \" audio processing. Beat detection can also simplify the usually tedious process of manipulating audio material in audio-editing software. Cut and paste operations are made considerably easier if markers are positioned at each beat or at bar boundaries. Looping a drum track over two bars becomes trivial once the location of the beats is known. A third range of applications is the fairly new area of automatic playlist generation, where a computer is given the task to choose a series of audio tracks from a track database in a way similar to what a human deejay would do. The track tempo is a very important selection criterion in this context , as deejays will tend to string tracks with similar tempi back to back. Furthermore, deejays also tend to perform beat-synchronous crossfading between successive tracks manually, slowing down or speeding up one of the tracks so that the beats in the two tracks line up exactly during the crossfade. This can easily be done automatically once the beats are located in the two tracks. The tempo detection systems commercially available appear to be fairly unsophisticated, as they rely mostly on the presence of a strong and regular bass-drum kick at every beat, an assumption that holds mostly with modern musical genres such as techno or drums and bass. For music with a less pronounced tempo such techniques fail miserably and more sophisticated algorithms are needed. This paper describes an off-line tempo detection algorithm , able to estimate a time-varying tempo from an audio track stored, for example, on an audio CD or on a computer hard disk. The technique works in three successive steps: 1) an \" energy flux \" signal is extracted from the track, 2) at each tempo-analysis time, several …",
"title": ""
},
{
"docid": "10c6b59c20f5745104e74eeaa0dfed13",
"text": "In this paper, we evaluate various onset detection algorithms in terms of their online capabilities. Most methods use some kind of normalization over time, which renders them unusable for online tasks. We modified existing methods to enable online application and evaluated their performance on a large dataset consisting of 27,774 annotated onsets. We focus particularly on the incorporated preprocessing and peak detection methods. We show that, with the right choice of parameters, the maximum achievable performance is in the same range as that of offline algorithms, and that preprocessing can improve the results considerably. Furthermore, we propose a new onset detection method based on the common spectral flux and a new peak-picking method which outperforms traditional methods both online and offline and works with audio signals of various volume levels.",
"title": ""
}
] |
[
{
"docid": "b63c963fa69048379011049769b5632f",
"text": "This article revisits the relationship between income per capita and civil conict. We establish that the empirical literature identi
es two di¤erent patterns. First, poor countries have a higher propensity to su¤er from civil war. Second, civil war occurs when countries su¤er negative income shocks. In a formal model we examine an explanation often suggested in the informal literature: civil wars occur in poor countries because the opportunity cost of
ghting is small. We show that while this explanation fails to make sense of the
rst empirical pattern, it provides a coherent theoretical basis for the second. We then enrich the model to allow for private imperfect information about the state of the economy and show that mutual fears exacerbate the problem caused by negative income shocks. Previous versions of this paper were presented at the Political Science Departments of MIT, Harvard, Princeton, Columbia, UC Berkeley and Washington University as well as conict conferences at Northwestern University and NYU. We are thankful for all the comments received and for helpful conversations with Edward Miguel and Robert Powell. All remaining errors are, of course, our own. yWoodrow Wilson School of Public and International A¤airs and Department of Economics. Chassang@princeton.edu zSTICERD and Department of Economics. G.padro@lse.ac.uk",
"title": ""
},
{
"docid": "9001f640ae3340586f809ab801f78ec0",
"text": "A correct perception of road signalizations is required for autonomous cars to follow the traffic codes. Road marking is a signalization present on road surfaces and commonly used to inform the correct lane cars must keep. Cameras have been widely used for road marking detection, however they are sensible to environment illumination. Some LIDAR sensors return infrared reflective intensity information which is insensible to illumination condition. Existing road marking detectors that analyzes reflective intensity data focus only on lane markings and ignores other types of signalization. We propose a road marking detector based on Otsu thresholding method that make possible segment LIDAR point clouds into asphalt and road marking. The results show the possibility of detecting any road marking (crosswalks, continuous lines, dashed lines). The road marking detector has also been integrated with Monte Carlo localization method so that its performance could be validated. According to the results, adding road markings onto curb maps lead to a lateral localization error of 0.3119 m.",
"title": ""
},
{
"docid": "5d446e65933125fb35201aa51ca9530d",
"text": "Semantic segmentation has been a long standing challenging task in computer vision. It aims at assigning a label to each image pixel and needs significant number of pixellevel annotated data, which is often unavailable. To address this lack, in this paper, we leverage, on one hand, massive amount of available unlabeled or weakly labeled data, and on the other hand, non-real images created through Generative Adversarial Networks. In particular, we propose a semi-supervised framework – based on Generative Adversarial Networks (GANs) – which consists of a generator network to provide extra training examples to a multi-class classifier, acting as discriminator in the GAN framework, that assigns sample a label y from theK possible classes or marks it as a fake sample (extra class). The underlying idea is that adding large fake visual data forces real samples to be close in the feature space, enabling a bottom-up clustering process, which, in turn, improves multiclass pixel classification. To ensure higher quality of generated images for GANs with consequent improved pixel classification, we extend the above framework by adding weakly annotated data, i.e., we provide class level information to the generator. We tested our approaches on several challenging benchmarking visual datasets, i.e. PASCAL, SiftFLow, Stanford and CamVid, achieving competitive performance also compared to state-of-the-art semantic segmentation methods.",
"title": ""
},
{
"docid": "ecb4ae6bbb10fb1194ee22d3f893df00",
"text": "The problem of modeling the continuously changing trends in finance markets and generating real-time, meaningful predictions about significant changes in those markets has drawn considerable interest from economists and data scientists alike. In addition to traditional market indicators, growth of varied social media has enabled economists to leverage microand real-time indicators about factors possibly influencing the market, such as public emotion, anticipations and behaviors. We propose several specific market related features that can be mined from varied sources such as news, Google search volumes and Twitter. We further investigate the correlation between these features and financial market fluctuations. In this paper, we present a Delta Naive Bayes (DNB) approach to generate prediction about financial markets. We present a detailed prospective analysis of prediction accuracy generated from multiple, combined sources with those generated from a single source. We find that multi-source predictions consistently outperform single-source predictions, even though with some limitations.",
"title": ""
},
{
"docid": "bdb9f3822ef89276b1aa1d493d1f9379",
"text": "Individual performance is of high relevance for organizations and individuals alike. Showing high performance when accomplishing tasks results in satisfaction, feelings of selfefficacy and mastery (Bandura, 1997; Kanfer et aL, 2005). Moreover, high performing individuals get promoted, awarded and honored. Career opportunities for individuals who perform well are much better than those of moderate or low performing individuals (Van Scotter et aI., 2000). This chapter summarizes research on individual performance and addresses performance as a multi-dimensional and dynamic concept. First, we define the concept of performance, next we discuss antecedents of between-individual variation of performance, and describe intraindividual change and variability in performance, and finally, we present a research agenda for future research.",
"title": ""
},
{
"docid": "18323d509e46b61e881e653d32f722e2",
"text": "This article describes a body area network (BAN) for measuring an electrocardiogram (ECG) signal and transmitting it to a smartphone via Bluetooth for data analysis. The BAN uses a specially designed planar inverted F-antenna (PIFA) with a small form factor, realizable with low-fabricationcost techniques. Furthermore, due to the human body's electrical properties, the antenna was designed to enable surface-wave propagation around the body. The system utilizes the user's own smartphone for data processing, and the built-in communications can be used to raise an alarm if a heart attack is detected. This is managed by an application for Android smartphones that has been developed for this system. The good functionality of the system was confirmed in three real-life user case scenarios.",
"title": ""
},
{
"docid": "06c23ce33938e509e723bfea54b6de3b",
"text": "s of Invited Talks Bottom-Up! Social Knowledge Sharing in DLR Ruediger Suess and Uwe Knodt DLR German Aerospace Centre, Cologne, Germany As a research organisation, the German Aerospace Centre (DLR) relies on creative and disruptive ideas to create new knowledge efficiently. But how can those new ideas be raised when you see that there is a lot of inefficiency in the knowledge flow? Three years ago Mr. Uwe Knodt started to investigate the knowledge management processes of DLR. For this purpose a new internal project “Establishing an integrated knowledge management system (EIWis)” was launched. By conducting surveys on employees’ needs concerning knowledge, he found out that the knowledge processes were not primarily driven by technology but especially by the way people react and interchange information with each other. The information technology is not the key to a successful knowledge management, but the people are. Following that, the improvement of knowledge processes can be done by bringing the right people together whether online or offline – in order to share their knowledge and develop new ideas. Whenever technology is used to enhance these knowledge processes, it has to be in a social way to improve the bottom-up knowledge flow. Mr Ruediger Suess integrated EIWis into his strategic project portfolio. He connected other strategic projects with knowledge management and reported the results directly to the board. He will show how an organized project portfolio can help to reach the strategic goals. Uwe and Ruediger will also show two examples. The first example is the DLR-Wiki, in which each employee can easily share his/her own knowledge with others inside DLR. The second example is the Knowledge Sharing Meeting, a format of collaboration workshops aiming at creating communities of experts by using a bottom-up approach with the acceptance of executive staff. An overview of the other knowledge management activities at DLR will also be given. Mr Ruediger Suess is project portfolio manager at the Corporate Strategy and Alliances division at the headquarters of DLR – the German Aerospace Center, located in Cologne /Germany. In his current function. As project portfolio manager Ruediger Suess coordinates the update of the corporate strategy at DLR and manages the portfolio of strategic projects for the implementation of the corporate strategy. Mr Uwe Knodt is Project Manager for Knowledge Management, DLR. Knowledge Management and Digital Diagrammatisation of Innovative Technical",
"title": ""
},
{
"docid": "b6ef6733f10fd282fb5aefc1f676b51c",
"text": "An electronic business model is an important baseline for the development of e-commerce system applications. Essentially, it provides the design rationale for e-commerce systems from the business point of view. However, how an e-business model must be defined and specified is a largely open issue. Business decision makers tend to use the notion in a highly informal way, and usually there is a big gap between the business view and that of IT developers. Nevertheless, we show that conceptual modelling techniques from IT provide very useful tools for precisely pinning down what e-business models actually are, as well as for their structured specification. We therefore present a (lightweight) ontology of what should be in an e-business model. The key idea we propose and develop is that an e-business model ontology centers around the core concept of value, and expresses how value is created, interpreted and exchanged within a multi-party stakeholder network. Our e-business model ontology is part of a wider methodology for e-business modelling, called e3-valueTM , that is currently under development. It is based on a variety of industrial applications we are involved in, and it is illustrated by discussing a free Internet access service as an example.",
"title": ""
},
{
"docid": "354e3d7034f93ff4e319567ce1508680",
"text": "In this paper, we discuss, from an experimental point of view, the use of different control strategies for the trajectory tracking control of an industrial selective compliance assembly robot arm robot, which is one of the most employed manipulators in industrial environments, especially for assembly tasks. Specifically, we consider decentralized controllers such as proportional–integral–derivative-based and sliding-mode ones and model-based controllers such as the classical computed-torque one and a neural-network-based controller. A simple procedure for the estimation of the dynamic model of the manipulator is given. Experimental results provide a detailed framework about the cost/benefit ratio regarding the use of the different controllers, showing that the performance obtained with decentralized controllers may suffice in a large number of industrial applications, but in order to achieve low tracking errors also for high-speed trajectories, it might be convenient to adopt a neural-network-based control scheme, whose implementation is not particularly demanding.",
"title": ""
},
{
"docid": "6bdb8048915000b2d6c062e0e71b8417",
"text": "Depressive disorders are the most typical disease affecting many different factors of humanity. University students may be at increased risk of depression owing to the pressure and stress they encounter. Therefore, the purpose of this study is comparing the level of depression among male and female athletes and non-athletes undergraduate student of private university in Esfahan, Iran. The participants in this research are composed of 400 male and female athletes as well as no-athletes Iranian undergraduate students. The Beck depression test (BDI) was employed to measure the degree of depression. T-test was used to evaluate the distinction between athletes and non-athletes at P≤0.05. The ANOVA was conducted to examine whether there was a relationship between level of depression among non-athletes and athletes. The result showed that the prevalence rate of depression among non-athlete male undergraduate students is significantly higher than that of athlete male students. The results also presented that level of depression among female students is much more frequent compared to males. This can be due to the fatigue and lack of energy that are more frequent among female in comparison to the male students. Physical activity was negatively related to the level of depression by severity among male and female undergraduate students. However, there is no distinct relationship between physical activity and level of depression according to the age of athlete and nonathlete male and female undergraduate students. This study has essential implications for clinical psychology due to the relationship between physical activity and prevalence of depression.",
"title": ""
},
{
"docid": "74af567f4b0257dc12c3346146c0f46c",
"text": "This paper presents the experimental data of human mechanical impedance properties (HMIPs) of the arms measured in steering operations according to the angle of a steering wheel (limbs posture) and the steering torque (muscle cocontraction). The HMIP data show that human stiffness/viscosity has the minimum/maximum value at the neutral angle of the steering wheel in relax (standard condition) and increases/decreases for the amplitude of the steering angle and the torque, and that the stability of the arms' motion in handling the steering wheel becomes high around the standard condition. Next, a novel methodology for designing an adaptive steering control system based on the HMIPs of the arms is proposed, and the effectiveness was then demonstrated via a set of double-lane-change tests, with several subjects using the originally developed stationary driving simulator and the 4-DOF driving simulator with a movable cockpit.",
"title": ""
},
{
"docid": "e6260a482e1ba33e93c555b7ceddb625",
"text": "OBJECTIVES\nTo investigate the prevalence and correlates of smartphone addiction among university students in Saudi Arabia.\n\n\nMETHODS\nThis cross-sectional study was conducted in King Saud University, Riyadh, Kingdom of Saudi Arabia between September 2014 and March 2015. An electronic self administered questionnaire and the problematic use of mobile phones (PUMP) Scale were used. \n\n\nRESULTS\nOut of 2367 study subjects, 27.2% stated that they spent more than 8 hours per day using their smartphones. Seventy-five percent used at least 4 applications per day, primarily for social networking and watching news. As a consequence of using the smartphones, at least 43% had decrease sleeping hours, and experienced a lack of energy the next day, 30% had a more unhealthy lifestyle (ate more fast food, gained weight, and exercised less), and 25% reported that their academic achievement been adversely affected. There are statistically significant positive relationships among the 4 study variables, consequences of smartphone use (negative lifestyle, poor academic achievement), number of hours per day spent using smartphones, years of study, and number of applications used, and the outcome variable score on the PUMP. The mean values of the PUMP scale were 60.8 with a median of 60. \n\n\nCONCLUSION\nUniversity students in Saudi Arabia are at risk of addiction to smartphones; a phenomenon that is associated with negative effects on sleep, levels of energy, eating habits, weight, exercise, and academic performance.",
"title": ""
},
{
"docid": "f017d6dff147f00fcbb2356e4fd9e06f",
"text": "In this paper, an index based on customer perspective is proposed for evaluating transit service quality. The index, named Heterogeneous Customer Satisfaction Index, is inspired by the traditional Customer Satisfaction Index, but takes into account the heterogeneity among the user judgments about the different service aspects. The index allows service quality to be monitored, the causes generating customer satisfaction/dissatisfaction to be identified, and the strategies for improving the service quality to be defined. The proposed methodologies show some advantages compared to the others adopted for measuring service quality, because it can be easily applied by the transit operators. Introduction Transit service quality is an aspect markedly influencing travel user choices. Customers who have a good experience with transit will probably use transit services again, while customers who experience problems with transit may not use transit services the next time. For this reason, improving service quality is important for customizing habitual travellers and for attracting new users. Moreover, the need for supplying services characterized by high levels of quality guarantees competition among transit agencies, and, consequently, the user takes advantage of Journal of Public Transportation, Vol. 12, No. 3, 2009 22 better services. To achieve these goals, transit agencies must measure their performance. Customer satisfaction represents a measure of company performance according to customer needs (Hill et al. 2003); therefore, the measure of customer satisfaction provides a service quality measure. Customers express their points of view about the services by providing judgments on some service aspects by means of ad hoc experimental sample surveys, known in the literature as “customer satisfaction surveys.” The aspects generally describing transit services can be distinguished into the characteristics that more properly describe the service (e.g., service frequency), and less easily measurable characteristics that depend more on customer tastes (e.g., comfort). In the literature, there are many studies about transit service quality. Examples of the most recent research are reported in TRB (2003a, 2003b), Eboli and Mazzulla (2007), Tyrinopoulos and Antoniou (2008), Iseki and Taylor (2008), and Joewono and Kubota (2007). In these studies, different attributes determining transit service quality are discussed; the main service aspects characterizing a transit service include service scheduling and reliability, service coverage, information, comfort, cleanliness, and safety and security. Service scheduling can be defined by service frequency (number of runs per hour or per day) and service time (time during which the service is available). Service reliability concerns the regularity of runs that are on schedule and on time; an unreliable service does not permit user travel times to be optimized. Service coverage concerns service availability in the space and is expressed through line path characteristics, number of stops, distance between stops, and accessibility of stops. Information consists of indications about departure and arrival scheduled times of the runs, boarding/alighting stop location, ticket costs, and so on. Comfort refers to passenger personal comfort while transit is used, including climate control, seat comfort, ride comfort including the severity of acceleration and braking, odors, and vehicle noise. 
Cleanliness refers to the internal and external cleanliness of vehicles and cleanliness of terminals and stops. Safety concerns the possibility that users can be involved in an accident, and security concerns personal security against crimes. Other service aspects characterizing transit services concern fares, personnel appearance and helpfulness, environmental protection, and customer services such ease of purchasing tickets and administration of complaints. The objective of this research is to provide a tool for measuring the overall transit service quality, taking into account user judgments about different service aspects. A New Customer Satisfaction Index for Evaluating Transit Service Quality 23 A synthetic index of overall satisfaction is proposed, which easily can be used by transit agencies for monitoring service performance. In the next section, a critical review of indexes for measuring service quality from a user perspective is made; observations and remarks emerge from the comparison among the indexes analysed. Because of the disadvantages of the indexes reported in the literature, a new index is proposed. The proposed methodology is applied by using experimental data collected by a customer satisfaction survey of passengers of a suburban transit service. The obtained results are discussed at the end of the paper. Customer Satisfaction Indexes The concept of customer satisfaction as a measure of perceived service quality was introduced in market research. In this field, many customer satisfaction techniques have been developed. The best known and most widely applied technique is the ServQual method, proposed by Parasuraman et al. (1985). The ServQual method introduced the concept of customer satisfaction as a function of customer expectations (what customers expect from the service) and perceptions (what customers receive). The method was developed to assess customer perceptions of service quality in retail and service organizations. In the method, 5 service quality dimensions and 22 items for measuring service quality are defined. Service quality dimensions are tangibles, reliability, responsiveness, assurance, and empathy. The method is in the form of a questionnaire that uses a Likert scale on seven levels of agreement/disagreement (from “strongly disagree” to “strongly agree”). ServQual provides an index calculated through the difference between perception and expectation rates expressed for the items, weighted as a function of the five service quality dimensions embedding the items. Some variations of this method were introduced in subsequent years. For example, Cronin and Taylor (1994) introduced the ServPerf method, and Teas (1993) proposed a model named Normed Quality (NQ). Although ServQual represents the most widely adopted method for measuring service quality, the adopted scale of measurement for capturing customer judgments has some disadvantages in obtaining an overall numerical measure of service quality; in fact, to calculate an index, the analyst is forced to assign a numerical code to each level of judgment. In this way, equidistant numbers are assigned to each qualitative point of the scale; this operation presumes that the distances between two consecutive levels of judgment expressed by the customers have the same size. Journal of Public Transportation, Vol. 12, No. 3, 2009 24 A number of both national and international indexes also based on customer perceptions and expectations have been introduced in the last decade. 
For the most part, these satisfaction indexes are embedded within a system of cause-and-effect relationships or satisfaction models. The models also contain latent or unobservable variables and provide a reliable satisfaction index (Johnson et al. 2001). The Swedish Customer Satisfaction Barometer (SCSB) was established in 1989 and is the first national customer satisfaction index for domestically purchased and consumed products and services (Fornell 1992). The American Customer Satisfaction Index (ACSI) was introduced in the fall of 1994 (Fornell et al. 1996). The Norwegian Customer Satisfaction Barometer (NCSB) was introduced in 1996 (Andreassen and Lervik 1999; Andreassen and Lindestad 1998). The most recent development among these indexes is the European Customer Satisfaction Index (ECSI) (Eklof 2000). The original SCSB model is based on customer perceptions and expectations regarding products or services. All the other models are based on the same concepts, but they differ from the original regarding the variables considered and the cause-and-effect relationships introduced. The models from which these indexes are derived have a very complex structure. In addition, model coefficient estimation needs of large quantities of experimental data and the calibration procedure are not easily workable. For this reason, this method is not very usable by transit agencies, particularly for monitoring service quality. More recently, an index based on discrete choice models and random utility theory has been introduced. The index, named Service Quality Index (SQI), is calculated by the utility function of a choice alternative representing a service (Hensher and Prioni 2002). The user makes a choice between the service habitually used and hypothetical services. Hypothetical services are defined through Stated Preferences (SP) techniques by varying the level of quality of aspects characterizing the service. Habitual service is described by the user by assigning a value to each service aspect. The design of this type of SP experiments is generally very complex; an example of an SP experimental design was introduced by Eboli and Mazzulla (2008a). SQI was firstly calculated by a Multinomial Logit model to evaluate the level of quality of transit services. Hierarchical Logit models were introduced for calculating SQI by Hensher et al. (2003) and Marcucci and Gatta (2007). Mixed Logit models were introduced by Hensher (2001) and Eboli and Mazzulla (2008b). SQI includes, indirectly, the concept of satisfaction as a function of customer expectations and perceptions. The calculation of the indexes following approaches different from SQI presumes the use of customer judgments in terms of rating. To the contrary, SQI is based on choice data; nevertheless, by choosing a service, the user indirectly A New Customer Satisfaction Index for Evaluating Transit Service Quality 25 expresses a judgment of importance on the service aspects defining the services. In addition, the user expres",
"title": ""
},
{
"docid": "9814af3a2c855717806ad7496d21f40e",
"text": "This chapter gives an extended introduction to the lightweight profiles OWL EL, OWL QL, and OWL RL of the Web Ontology Language OWL. The three ontology language standards are sublanguages of OWL DL that are restricted in ways that significantly simplify ontological reasoning. Compared to OWL DL as a whole, reasoning algorithms for the OWL profiles show higher performance, are easier to implement, and can scale to larger amounts of data. Since ontological reasoning is of great importance for designing and deploying OWL ontologies, the profiles are highly attractive for many applications. These advantages come at a price: various modelling features of OWL are not available in all or some of the OWL profiles. Moreover, the profiles are mutually incomparable in the sense that each of them offers a combination of features that is available in none of the others. This chapter provides an overview of these differences and explains why some of them are essential to retain the desired properties. To this end, we recall the relationship between OWL and description logics (DLs), and show how each of the profiles is typically treated in reasoning algorithms.",
"title": ""
},
{
"docid": "c8dc06de68e4706525e98f444e9877e4",
"text": "This study used two field trials with 5 and 34 years of liming histories, respectively, and aimed to elucidate the long-term effect of liming on soil organic C (SOC) in acid soils. It was hypothesized that long-term liming would increase SOC concentration, macro-aggregate stability and SOC concentration within aggregates. Surface soils (0–10 cm) were sampled and separated into four aggregate-size classes: large macro-aggregates (>2 mm), small macro-aggregates (0.25–2 mm), micro-aggregates (0.053–0.25 mm) and silt and clay fraction (<0.053 mm) by wet sieving, and the SOC concentration of each aggregate-size was quantified. Liming decreased SOC in the bulk soil and in aggregates as well as macro-aggregate stability in the low-input and cultivated 34-year-old trial. In contrast, liming did not significantly change the concentration of SOC in the bulk soil or in aggregates but improved macro-aggregate stability in the 5-year-old trial under undisturbed unimproved pastures. Furthermore, the single application of lime to the surface soil increased pH in both topsoil (0–10 cm) and subsurface soil (10–20 cm) and increased K2SO4-extractable C, microbial biomass C (Cmic) and basal respiration (CO2) in both soil layers of both lime trials. Liming increased the percentage of SOC present as microbial biomass C (Cmic/Corg) and decreased the respiration rate per unit biomass (qCO2). The study concludes that despite long-term liming decreased total SOC in the low-input systems, it increased labile C pools and the percentage of SOC present as microbial biomass C.",
"title": ""
},
{
"docid": "176dc97bd2ce3c1fd7d3a8d6913cff70",
"text": "Packet broadcasting is a form of data communications architecture which can combine the features of packet switching with those of broadcast channels for data communication networks. Much of the basic theory of packet broadcasting has been presented as a byproduct in a sequence of papers with a distinctly practical emphasis. In this paper we provide a unified presentation of packet broadcasting theory. In Section I1 we introduce the theory of packet broadcasting data networks. In Section I11 we provide some theoretical results dealing with the performance of a packet broadcasting network when the users of the network have a variety of data rates. In Section IV we deal with packet broadcasting networks distributed in space, and in Section V we derive some properties of power-limited packet broadcasting channels,showing that the throughput of such channels can approach that of equivalent point-to-point channels.",
"title": ""
},
{
"docid": "b5b7bef8ec2d38bb2821dc380a3a49bf",
"text": "Maternal uniparental disomy (UPD) 7 is found in approximately 5% of patients with Silver-Russell syndrome. By a descriptive and comparative clinical analysis of all published cases (more than 60 to date) their phenotype is updated and compared with the clinical findings in patients with Sliver-Russell syndrome (SRS) of either unexplained etiology or epimutations of the imprinting center region 1 (ICR1) on 11p15. The higher frequency of relative macrocephaly and high forehead/frontal bossing makes the face of patients with epimutations of the ICR1 on 11p15 more distinctive than the face of cases with SRS of unexplained etiology or maternal UPD 7. Because of the distinct micrognathia in the latter, their triangular facial gestalt is more pronounced than in the other groups. However, solely by clinical findings patients with maternal UPD 7 cannot be discriminated unambiguously from patients with epimutations of the ICR1 on 11p15 or SRS of unexplained etiology. Therefore, both loss of methylation of the ICR1 on 11p15 and maternal UPD 7 should be investigated for if SRS is suspected.",
"title": ""
},
{
"docid": "129dd084e485da5885e2720a4bddd314",
"text": "In the present day developing houses, the procedures adopted during the development of software using agile methodologies are acknowledged as a better option than the procedures followed during conventional software development due to its innate characteristics such as iterative development, rapid delivery and reduced risk. Hence, it is desirable that the software development industries should have proper planning for estimating the effort required in agile software development. The existing techniques such as expert opinion, analogy and disaggregation are mostly observed to be ad hoc and in this manner inclined to be mistaken in a number of cases. One of the various approaches for calculating effort of agile projects in an empirical way is the story point approach (SPA). This paper presents a study on analysis of prediction accuracy of estimation process executed in order to improve it using SPA. Different machine learning techniques such as decision tree, stochastic gradient boosting and random forest are considered in order to assess prediction more qualitatively. A comparative analysis of these techniques with existing techniques is also presented and analyzed in order to critically examine their performance.",
"title": ""
},
{
"docid": "1962428380a7ccb6e64d0c7669736e9d",
"text": "This target article presents an integrated evolutionary model of the development of attachment and human reproductive strategies. It is argued that sex differences in attachment emerge in middle childhood, have adaptive significance in both children and adults, and are part of sex-specific life history strategies. Early psychosocial stress and insecure attachment act as cues of environmental risk, and tend to switch development towards reproductive strategies favoring current reproduction and higher mating effort. However, due to sex differences in life history trade-offs between mating and parenting, insecure males tend to adopt avoidant strategies, whereas insecure females tend to adopt anxious/ambivalent strategies, which maximize investment from kin and mates. Females are expected to shift to avoidant patterns when environmental risk is more severe. Avoidant and ambivalent attachment patterns also have different adaptive values for boys and girls, in the context of same-sex competition in the peer group: in particular, the competitive and aggressive traits related to avoidant attachment can be favored as a status-seeking strategy for males. Finally, adrenarche is proposed as the endocrine mechanism underlying the reorganization of attachment in middle childhood, and the implications for the relationship between attachment and sexual development are explored. Sex differences in the development of attachment can be fruitfully integrated within the broader framework of adaptive plasticity in life history strategies, thus contributing to a coherent evolutionary theory of human development.",
"title": ""
},
{
"docid": "40f1a09787491fec99280870c98b437d",
"text": "We present a scalable Bayesian multi-label learning model based on learning lowdimensional label embeddings. Our model assumes that each label vector is generated as a weighted combination of a set of topics (each topic being a distribution over labels), where the combination weights (i.e., the embeddings) for each label vector are conditioned on the observed feature vector. This construction, coupled with a Bernoulli-Poisson link function for each label of the binary label vector, leads to a model with a computational cost that scales in the number of positive labels in the label matrix. This makes the model particularly appealing for real-world multi-label learning problems where the label matrix is usually very massive but highly sparse. Using a data-augmentation strategy leads to full local conjugacy in our model, facilitating simple and very efficient Gibbs sampling, as well as an Expectation Maximization algorithm for inference. Also, predicting the label vector at test time does not require doing an inference for the label embeddings and can be done in closed form. We report results on several benchmark data sets, comparing our model with various state-of-the art methods.",
"title": ""
}
] |
scidocsrr
|
492b93c814e35c4f7ac925ca8fdd6985
|
Consensus Protocols for Networks of Dynamic Agents
|
[
{
"docid": "4c290421dc42c3a5a56c7a4b373063e5",
"text": "In this paper, we provide a graph theoretical framework that allows us to formally define formations of multiple vehicles and the issues arising in uniqueness of graph realizations and its connection to stability of formations. The notion of graph rigidity is crucial in identifying the shape variables of a formation and an appropriate potential function associated with the formation. This allows formulation of meaningful optimization or nonlinear control problems for formation stabilization/tacking, in addition to formal representation of split, rejoin, and reconfiguration maneuvers for multi-vehicle formations. We introduce an algebra that consists of performing some basic operations on graphs which allow creation of larger rigidby-construction graphs by combining smaller rigid subgraphs. This is particularly useful in performing and representing rejoin/split maneuvers of multiple formations in a distributed fashion.",
"title": ""
}
] |
[
{
"docid": "057069a06621b879f88c6d09f8867f77",
"text": "Nowadays, the railway industry is in a position where it is able to exploit the opportunities created by the IIoT (Industrial Internet of Things) and enabling communication technologies under the paradigm of Internet of Trains. This review details the evolution of communication technologies since the deployment of GSM-R, describing the main alternatives and how railway requirements, specifications and recommendations have evolved over time. The advantages of the latest generation of broadband communication systems (e.g., LTE, 5G, IEEE 802.11ad) and the emergence of Wireless Sensor Networks (WSNs) for the railway environment are also explained together with the strategic roadmap to ensure a smooth migration from GSM-R. Furthermore, this survey focuses on providing a holistic approach, identifying scenarios and architectures where railways could leverage better commercial IIoT capabilities. After reviewing the main industrial developments, short and medium-term IIoT-enabled services for smart railways are evaluated. Then, it is analyzed the latest research on predictive maintenance, smart infrastructure, advanced monitoring of assets, video surveillance systems, railway operations, Passenger and Freight Information Systems (PIS/FIS), train control systems, safety assurance, signaling systems, cyber security and energy efficiency. Overall, it can be stated that the aim of this article is to provide a detailed examination of the state-of-the-art of different technologies and services that will revolutionize the railway industry and will allow for confronting today challenges.",
"title": ""
},
{
"docid": "37e82a54df827ddcfdb71fef7c12a47b",
"text": "We tackle a task where an agent learns to navigate in a 2D maze-like environment called XWORLD. In each session, the agent perceives a sequence of raw-pixel frames, a natural language command issued by a teacher, and a set of rewards. The agent learns the teacher’s language from scratch in a grounded and compositional manner, such that after training it is able to correctly execute zero-shot commands: 1) the combination of words in the command never appeared before, and/or 2) the command contains new object concepts that are learned from another task but never learned from navigation. Our deep framework for the agent is trained end to end: it learns simultaneously the visual representations of the environment, the syntax and semantics of the language, and the action module that outputs actions. The zero-shot learning capability of our framework results from its compositionality and modularity with parameter tying. We visualize the intermediate outputs of the framework, demonstrating that the agent truly understands how to solve the problem. We believe that our results provide some preliminary insights on how to train an agent with similar abilities in a 3D environment.",
"title": ""
},
{
"docid": "5666b1a6289f4eac05531b8ff78755cb",
"text": "Neural text generation models are often autoregressive language models or seq2seq models. These models generate text by sampling words sequentially, with each word conditioned on the previous word, and are state-of-the-art for several machine translation and summarization benchmarks. These benchmarks are often defined by validation perplexity even though this is not a direct measure of the quality of the generated text. Additionally, these models are typically trained via maximum likelihood and teacher forcing. These methods are well-suited to optimizing perplexity but can result in poor sample quality since generating text requires conditioning on sequences of words that may have never been observed at training time. We propose to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high quality samples and have shown a lot of success in image generation. GANs were originally designed to output differentiable values, so discrete language generation is challenging for them. We claim that validation perplexity alone is not indicative of the quality of text generated by a model. We introduce an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context. We show qualitatively and quantitatively, evidence that this produces more realistic conditional and unconditional text samples compared to a maximum likelihood trained model.",
"title": ""
},
{
"docid": "2e40cdb0416198c1ec986e0d3da47fd1",
"text": "The slotted-page structure is a database page format commonly used for managing variable-length records. In this work, we develop a novel \"failure-atomic slotted page structure\" for persistent memory that leverages byte addressability and durability of persistent memory to minimize redundant write operations used to maintain consistency in traditional database systems. Failure-atomic slotted paging consists of two key elements: (i) in-place commit per page using hardware transactional memory and (ii) slot header logging that logs the commit mark of each page. The proposed scheme is implemented in SQLite and compared against NVWAL, the current state-of-the-art scheme. Our performance study shows that our failure-atomic slotted paging shows optimal performance for database transactions that insert a single record. For transactions that touch more than one database page, our proposed slot-header logging scheme minimizes the logging overhead by avoiding duplicating pages and logging only the metadata of the dirty pages. Overall, we find that our failure-atomic slotted-page management scheme reduces database logging overhead to 1/6 and improves query response time by up to 33% compared to NVWAL.",
"title": ""
},
{
"docid": "6ce28e4fe8724f685453a019f253b252",
"text": "This paper is focused on receivables management and possibilities how to use available information technologies. The use of information technologies should make receivables management easier on one hand and on the other hand it makes the processes more efficient. Finally it decreases additional costs and losses connected with enforcing receivables when defaulting debts occur. The situation of use of information technologies is different if the subject is financial or nonfinancial institution. In the case of financial institution loans providing is core business and the processes and their technical support are more sophisticated than in the case of non-financial institutions whose loan providing as invoices is just a supplement to their core business activities. The paper shows use of information technologies in individual cases but it also emphasizes the use of general results for further decision making process. Results of receivables management are illustrated on the data of the Czech Republic.",
"title": ""
},
{
"docid": "6ac996c20f036308f36c7b667babe876",
"text": "Patents are a very useful source of technical information. The public availability of patents over the Internet, with for some databases (eg. Espacenet) the assurance of a constant format, allows the development of high value added products using this information source and provides an easy way to analyze patent information. This simple and powerful tool facilitates the use of patents in academic research, in SMEs and in developing countries providing a way to use patents as a ideas resource thus improving technological innovation.",
"title": ""
},
{
"docid": "f4aa06f7782a22eeb5f30d0ad27eaff9",
"text": "Friction effects are particularly critical for industrial robots, since they can induce large positioning errors, stick-slip motions, and limit cycles. This paper offers a reasoned overview of the main friction compensation techniques that have been developed in the last years, regrouping them according to the adopted kind of control strategy. Some experimental results are reported, to show how the control performances can be affected not only by the chosen method, but also by the characteristics of the available robotic architecture and of the executed task.",
"title": ""
},
{
"docid": "34382f9716058d727f467716350788a7",
"text": "The structure of the brain and the nature of evolution suggest that, despite its uniqueness, language likely depends on brain systems that also subserve other functions. The declarative/procedural (DP) model claims that the mental lexicon of memorized word-specific knowledge depends on the largely temporal-lobe substrates of declarative memory, which underlies the storage and use of knowledge of facts and events. The mental grammar, which subserves the rule-governed combination of lexical items into complex representations, depends on a distinct neural system. This system, which is composed of a network of specific frontal, basal-ganglia, parietal and cerebellar structures, underlies procedural memory, which supports the learning and execution of motor and cognitive skills, especially those involving sequences. The functions of the two brain systems, together with their anatomical, physiological and biochemical substrates, lead to specific claims and predictions regarding their roles in language. These predictions are compared with those of other neurocognitive models of language. Empirical evidence is presented from neuroimaging studies of normal language processing, and from developmental and adult-onset disorders. It is argued that this evidence supports the DP model. It is additionally proposed that \"language\" disorders, such as specific language impairment and non-fluent and fluent aphasia, may be profitably viewed as impairments primarily affecting one or the other brain system. Overall, the data suggest a new neurocognitive framework for the study of lexicon and grammar.",
"title": ""
},
{
"docid": "d83e03beb3ca6e9b02848fd8ad94591e",
"text": "Smartphones and tablets are becoming less expensive and many students already bring them to classes. The increased availability of smartphones and tablets with Internet connectivity and increasing power computing makes possible the use of augmented reality (AR) applications in these mobile devices. This makes it possible for a teacher to develop educational activities that can take advantage of the augmented reality technologies for improving learning activities. The use of information technology made many changes in the way of teaching and learning. We believe that the use of augmented reality will change significantly the teaching activities by enabling the addition of supplementary information that is seen on a mobile device. In this paper, we present several educational activities created using free augmented reality tools that do not require programming knowledge to be used by any teacher. We cover the marker and marker less based augmented reality technologies to show how we can create learning activities to visualize augmented information like animations and 3D objects that help students understand the educational content. There are currently many augmented reality applications. We looked to the most popular augmented-reality eco-systems. Our purpose was to find AR systems that can be used in daily learning activities. For this reason, they must be user friendly, since they are going to be used by teachers that in general do not have programming knowledge. Additionally, we were interested in using augmented reality applications that are open source or free.",
"title": ""
},
{
"docid": "101ecfb3d6a20393d147cd2061414369",
"text": "In this paper we propose a novel volumetric multi-resolution mapping system for RGB-D images that runs on a standard CPU in real-time. Our approach generates a textured triangle mesh from a signed distance function that it continuously updates as new RGB-D images arrive. We propose to use an octree as the primary data structure which allows us to represent the scene at multiple scales. Furthermore, it allows us to grow the reconstruction volume dynamically. As most space is either free or unknown, we allocate and update only those voxels that are located in a narrow band around the observed surface. In contrast to a regular grid, this approach saves enormous amounts of memory and computation time. The major challenge is to generate and maintain a consistent triangle mesh, as neighboring cells in the octree are more difficult to find and may have different resolutions. To remedy this, we present in this paper a novel algorithm that keeps track of these dependencies, and efficiently updates corresponding parts of the triangle mesh. In our experiments, we demonstrate the real-time capability on a large set of RGB-D sequences. As our approach does not require a GPU, it is well suited for applications on mobile or flying robots with limited computational resources.",
"title": ""
},
{
"docid": "624e78153b58a69917d313989b72e6bf",
"text": "In this article we describe a novel Particle Swarm Optimization (PSO) approach to multi-objective optimization (MOO), called Time Variant Multi-Objective Particle Swarm Optimization (TV-MOPSO). TV-MOPSO is made adaptive in nature by allowing its vital parameters (viz., inertia weight and acceleration coefficients) to change with iterations. This adaptiveness helps the algorithm to explore the search space more efficiently. A new diversity parameter has been used to ensure sufficient diversity amongst the solutions of the non-dominated fronts, while retaining at the same time the convergence to the Pareto-optimal front. TV-MOPSO has been compared with some recently developed multi-objective PSO techniques and evolutionary algorithms for 11 function optimization problems, using different performance measures. 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "bba979cd5d69dac380ba1023441460d3",
"text": "This paper presents a model of a particular class of a convertible MAV with fixed wings. This vehicle can operate as a helicopter as well as a conventional airplane, i.e. the aircraft is able to switch their flight configuration from hover to level flight and vice versa by means of a transition maneuver. The paper focuses on finding a controller capable of performing such transition via the tilting of their four rotors. The altitude should remain on a predefined value throughout the transition stage. For this purpose a nonlinear control strategy based on saturations and Lyapunov design is given. The use of this control law enables to make the transition maneuver while maintaining the aircraft in flight. Numerical results are presented, showing the effectiveness of the proposed methodology to deal with the transition stage.",
"title": ""
},
{
"docid": "1b78fd9e2d90393ee877c49f582d23ee",
"text": "Many “big data” applications need to act on data arriving in real time. However, current programming models for distributed stream processing are relatively low-level, often leaving the user to worry about consistency of state across the system and fault recovery. Furthermore, the models that provide fault recovery do so in an expensive manner, requiring either hot replication or long recovery times. We propose a new programming model, discretized streams (D-Streams), that offers a high-level functional API, strong consistency, and efficient fault recovery. D-Streams support a new recovery mechanism that improves efficiency over the traditional replication and upstream backup schemes in streaming databases— parallel recovery of lost state—and unlike previous systems, also mitigate stragglers. We implement D-Streams as an extension to the Spark cluster computing engine that lets users seamlessly intermix streaming, batch and interactive queries. Our system can process over 60 million records/second at sub-second latency on 100 nodes.",
"title": ""
},
{
"docid": "118738ca4b870e164c7be53e882a9ab4",
"text": "IA. Cause and Effect . . . . . . . . . . . . . . 465 1.2. Prerequisites of Selforganization . . . . . . . 467 1.2.3. Evolut ion Must S ta r t f rom R andom Even ts 467 1.2.2. Ins t ruc t ion Requires In format ion . . . . 467 1.2.3. In format ion Originates or Gains Value by S e l e c t i o n . . . . . . . . . . . . . . . 469 1.2.4. Selection Occurs wi th Special Substances under Special Conditions . . . . . . . . 470",
"title": ""
},
{
"docid": "6c584b512e51b3dd4f16a9c753ac2fc5",
"text": "Cloud computing and virtualization technologies play important roles in modern service-oriented computing paradigm. More conventional services are being migrated to virtualized computing environments to achieve flexible deployment and high availability. We introduce a schedule algorithm based on fuzzy inference system (FIS), for global container resource allocation by evaluating nodes' statuses using FIS. We present the approaches to build containerized test environment and validates the effectiveness of the resource allocation policies by running sample use cases. Experiment results show that the presented infrastructure and schema derive optimal resource configurations and significantly improves the performance of the cluster.",
"title": ""
},
{
"docid": "d36021ff647a2f2c74dd35a847847a09",
"text": "An ontology is a crucial factor for the success of the Semantic Web and other knowledge-based systems in terms of share and reuse of domain knowledge. However, there are a few concrete ontologies within actual knowledge domains including learning domains. In this paper, we develop an ontology which is an explicit formal specification of concepts and semantic relations among them in philosophy. We call it a philosophy ontology. Our philosophy is a formal specification of philosophical knowledge including knowledge of contents of classical texts of philosophy. We propose a methodology, which consists of detailed guidelines and templates, for constructing text-based ontology. Our methodology consists of 3 major steps and 14 minor steps. To implement the philosophy ontology, we develop an ontology management system based on Topic Maps. Our system includes a semi-automatic translator for creating Topic Map documents from the output of conceptualization steps and other tools to construct, store, retrieve ontologies based on Topic Maps. Our methodology and tools can be applied to other learning domain ontologies, such as history, literature, arts, and music. © 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "29cceb730e663c08e20107b6d34ced8b",
"text": "Cumulative citation recommendation refers to the task of filtering a time-ordered corpus for documents that are highly relevant to a predefined set of entities. This task has been introduced at the TREC Knowledge Base Acceleration track in 2012, where two main families of approaches emerged: classification and ranking. In this paper we perform an experimental comparison of these two strategies using supervised learning with a rich feature set. Our main finding is that ranking outperforms classification on all evaluation settings and metrics. Our analysis also reveals that a ranking-based approach has more potential for future improvements.",
"title": ""
},
{
"docid": "a30d9dbac3f0d988fd15884cda3ecf93",
"text": "In this review article, the authors have summarized the published literature supporting the value of video game use on the following topics: improvement of cognitive functioning in older individuals, potential reasons for the positive effects of video game use in older age, and psychological factors related to using video games in older age. It is important for geriatric researchers and practitioners to identify approaches and interventions that minimize the negative effects of the various changes that occur within the aging body. Generally speaking, biological aging results in a decline of both physical and cognitive functioning.1–3 However, a growing body of literature indicates that taking part in physically and/or mentally stimulating activities may contribute to the maintenance of cognitive abilities and even lead to acquiring cognitive gains.4 It is important to identify ways to induce cognitive improvements in older age, especially considering that the population of the United States (U.S.) is aging rapidly, with the number of people age 65 and older expected to increase to almost 84 million by 2050.5 This suggests that there will likely be a rapid escalation in the number of older individuals living with age-related cognitive impairment. It is currently estimated that there are 5.5 million people in the U.S. who have been diagnosed with Alzheimer’s disease,6 which is one of the most common forms of dementia.7 Thus, research aimed at helping older adults maintain good cognitive functioning is highly needed. Due to space limitations, this article is not meant to include all of the available research in this area; it contains mainly supporting evidence on the effects of video game use among older adults. Some opposing evidence is briefly mentioned when covering whether the skills acquired during video game training transfer to non-practiced tasks (which is a particularly controversial topic with ample mixed evidence).",
"title": ""
},
{
"docid": "d49bdbd1d97d663ac1b9db9cb2c28fff",
"text": "BACKGROUND\nPlantar fasciitis (PF) is reported in different sports mainly in running and soccer athletes. Purpose of this study is to conduct a systematic review of published literature concerning the diagnosis and treatment of PF in both recreational and élite athletes. The review was conducted and reported in accordance with the PRISMA statement.\n\n\nMETHODS\nThe following electronic databases were searched: PubMed, Cochrane Library and Scopus. As far as PF diagnosis, we investigated the electronic databases from January 2006 to June 2016, whereas in considering treatments all data in literature were investigated.\n\n\nRESULTS\nFor both diagnosis and treatment, 17 studies matched inclusion criteria. The results have highlighted that the most frequently used diagnostic techniques were Ultrasonography and Magnetic Resonance Imaging. Conventional, complementary, and alternative treatment approaches were assessed.\n\n\nCONCLUSIONS\nIn reviewing literature, we were unable to find any specific diagnostic algorithm for PF in athletes, due to the fact that no different diagnostic strategies were used for athletes and non-athletes. As for treatment, a few literature data are available and it makes difficult to suggest practice guidelines. Specific studies are necessary to define the best treatment algorithm for both recreational and élite athletes.\n\n\nLEVEL OF EVIDENCE\nIb.",
"title": ""
},
{
"docid": "447bfee37117b77534abe2cf6cfd8a17",
"text": "Detailed characterization of the cell types in the human brain requires scalable experimental approaches to examine multiple aspects of the molecular state of individual cells, as well as computational integration of the data to produce unified cell-state annotations. Here we report improved high-throughput methods for single-nucleus droplet-based sequencing (snDrop-seq) and single-cell transposome hypersensitive site sequencing (scTHS-seq). We used each method to acquire nuclear transcriptomic and DNA accessibility maps for >60,000 single cells from human adult visual cortex, frontal cortex, and cerebellum. Integration of these data revealed regulatory elements and transcription factors that underlie cell-type distinctions, providing a basis for the study of complex processes in the brain, such as genetic programs that coordinate adult remyelination. We also mapped disease-associated risk variants to specific cellular populations, which provided insights into normal and pathogenic cellular processes in the human brain. This integrative multi-omics approach permits more detailed single-cell interrogation of complex organs and tissues.",
"title": ""
}
] |
scidocsrr
|
c13629addc21879522abca1b5dd0214d
|
Bit-width reduction and customized register for low cost convolutional neural network accelerator
|
[
{
"docid": "5c8c391a10f32069849d743abc5e8210",
"text": "We present a massively parallel coprocessor for accelerating Convolutional Neural Networks (CNNs), a class of important machine learning algorithms. The coprocessor functional units, consisting of parallel 2D convolution primitives and programmable units performing sub-sampling and non-linear functions specific to CNNs, implement a “meta-operator” to which a CNN may be compiled to. The coprocessor is serviced by distributed off-chip memory banks with large data bandwidth. As a key feature, we use low precision data and further increase the effective memory bandwidth by packing multiple words in every memory operation, and leverage the algorithm’s simple data access patterns to use off-chip memory as a scratchpad for intermediate data, critical for CNNs. A CNN is mapped to the coprocessor hardware primitives with instructions to transfer data between the memory and coprocessor. We have implemented a prototype of the CNN coprocessor on an off-the-shelf PCI FPGA card with a single Xilinx Virtex5 LX330T FPGA and 4 DDR2 memory banks totaling 1GB. The coprocessor prototype can process at the rate of 3.4 billion multiply accumulates per second (GMACs) for CNN forward propagation, a speed that is 31x faster than a software implementation on a 2.2 GHz AMD Opteron processor. For a complete face recognition application with the CNN on the coprocessor and the rest of the image processing tasks on the host, the prototype is 6-10x faster, depending on the host-coprocessor bandwidth.",
"title": ""
},
{
"docid": "d716725f2a5d28667a0746b31669bbb7",
"text": "This work observes that a large fraction of the computations performed by Deep Neural Networks (DNNs) are intrinsically ineffectual as they involve a multiplication where one of the inputs is zero. This observation motivates Cnvlutin (CNV), a value-based approach to hardware acceleration that eliminates most of these ineffectual operations, improving performance and energy over a state-of-the-art accelerator with no accuracy loss. CNV uses hierarchical data-parallel units, allowing groups of lanes to proceed mostly independently enabling them to skip over the ineffectual computations. A co-designed data storage format encodes the computation elimination decisions taking them off the critical path while avoiding control divergence in the data parallel units. Combined, the units and the data storage format result in a data-parallel architecture that maintains wide, aligned accesses to its memory hierarchy and that keeps its data lanes busy. By loosening the ineffectual computation identification criterion, CNV enables further performance and energy efficiency improvements, and more so if a loss in accuracy is acceptable. Experimental measurements over a set of state-of-the-art DNNs for image classification show that CNV improves performance over a state-of-the-art accelerator from 1.24× to 1.55× and by 1.37× on average without any loss in accuracy by removing zero-valued operand multiplications alone. While CNV incurs an area overhead of 4.49%, it improves overall EDP (Energy Delay Product) and ED2P (Energy Delay Squared Product) on average by 1.47× and 2.01×, respectively. The average performance improvements increase to 1.52× without any loss in accuracy with a broader ineffectual identification policy. Further improvements are demonstrated with a loss in accuracy.",
"title": ""
},
{
"docid": "9d60842315ad481ac55755160a581d74",
"text": "This paper presents an efficient DNN design with stochastic computing. Observing that directly adopting stochastic computing to DNN has some challenges including random error fluctuation, range limitation, and overhead in accumulation, we address these problems by removing near-zero weights, applying weight-scaling, and integrating the activation function with the accumulator. The approach allows an easy implementation of early decision termination with a fixed hardware design by exploiting the progressive precision characteristics of stochastic computing, which was not easy with existing approaches. Experimental results show that our approach outperforms the conventional binary logic in terms of gate area, latency, and power consumption.",
"title": ""
}
] |
[
{
"docid": "db4362b293ccf3b950814aa65f5639a3",
"text": "Lean and agile principles are two most buzzing words around the corporate in the past few decades. The industrial sectors throughout the world are upgrading to these principles to enhance their performance lowering their operating costs. Though, these principles have been proven to be efficient in handling supply chains, a more robust strategy is developed inheriting the salient features of both lean and agile principles. The synergy of the two principles, leagile, takes advantage of leanness by eliminating non-value added time and agility by additional reduction of value-added time via production technology breakthroughs. Though, all the three are directly focused on core competency, the suitability of each of them must be evaluated before implementation. The objective of this study is to perform a comparative analysis of the agile, lean and leagile supply chain which specifies the competitive features of the three strategies and their competency. In addition to this, different frameworks of leagile supply chain are presented suitable for different manufacturing industry. Key wordssupply chain; lean; agile; le-agile; decoupling",
"title": ""
},
{
"docid": "46cd71806e85374c36bc77ea28293ecb",
"text": "In this paper we introduce a novel collapsed Gibbs sampling method for the widely used latent Dirichlet allocation (LDA) model. Our new method results in significant speedups on real world text corpora. Conventional Gibbs sampling schemes for LDA require O(K) operations per sample where K is the number of topics in the model. Our proposed method draws equivalent samples but requires on average significantly less then K operations per sample. On real-word corpora FastLDA can be as much as 8 times faster than the standard collapsed Gibbs sampler for LDA. No approximations are necessary, and we show that our fast sampling scheme produces exactly the same results as the standard (but slower) sampling scheme. Experiments on four real world data sets demonstrate speedups for a wide range of collection sizes. For the PubMed collection of over 8 million documents with a required computation time of 6 CPU months for LDA, our speedup of 5.7 can save 5 CPU months of computation.",
"title": ""
},
{
"docid": "a241ca85048e30c48acd532bce1bf2ca",
"text": "This paper addresses the challenge of establlishing a bridge between deep convolutional neural networks and conventional object detection frameworks for accurate and efficient generic object detection. We introduce Dense Neural Patterns, short for DNPs, which are dense local features derived from discriminatively trained deep convolutional neural networks. DNPs can be easily plugged into conventional detection frameworks in the same way as other dense local features(like HOG or LBP). The effectiveness of the proposed approach is demonstrated with Regionlets object detection framework. It achieved 46.1% mean average precision on the PASCAL VOC 2007 dataset, and 44.1% on the PASCAL VOC 2010 dataset, which dramatically improves the originalRegionlets approach without DNPs.",
"title": ""
},
{
"docid": "9421ae8a5d90707dce15fb3940abf9f4",
"text": "PURPOSE\nNuclear magnetic resonance (NMR) spectroscopy has been used to quantify lipid wax, cholesterol ester terpenoid and glyceride composition, saturation, oxidation, and CH₂ and CH₃ moiety distribution. This tool was used to measure changes in human meibum composition with meibomian gland dysfunction (MGD).\n\n\nMETHODS\n(1)H-NMR spectra of meibum from 39 donors with meibomian gland dysfunction (Md) were compared to meibum from 33 normal donors (Mn).\n\n\nRESULTS\nPrincipal component analysis (PCA) was applied to the CH₂/CH₃ regions of a set of training NMR spectra of human meibum. PCA discriminated between Mn and Md with an accuracy of 86%. There was a bias toward more accurately predicting normal samples (92%) compared with predicting MGD samples (78%). When the NMR spectra of Md were compared with those of Mn, three statistically significant decreases were observed in the relative amounts of CH₃ moieties at 1.26 ppm, the products of lipid oxidation above 7 ppm, and the =CH moieties at 5.2 ppm associated with terpenoids.\n\n\nCONCLUSIONS\nLoss of the terpenoids could be deleterious to meibum since they exhibit a plethora of mostly positive biological functions and could account for the lower level of cholesterol esters observed in Md compared with Mn. All three changes could account for the higher degree of lipid order of Md compared with age-matched Mn. In addition to the power of NMR spectroscopy to detect differences in the composition of meibum, it is promising that NMR can be used as a diagnostic tool.",
"title": ""
},
{
"docid": "4702fceea318c326856cc2a7ae553e1f",
"text": "The Institute of Medicine identified “timeliness” as one of six key “aims for improvement” in its most recent report on quality. Yet patient delays remain prevalent, resulting in dissatisfaction, adverse clinical consequences, and often, higher costs. This tutorial describes several areas in which patients routinely experience significant and potentially dangerous delays and presents operations research (OR) models that have been developed to help reduce these delays, often at little or no cost. I also describe the difficulties in developing and implementing models as well as the factors that increase the likelihood of success. Finally, I discuss the opportunities, large and small, for using OR methodologies to significantly impact practices and policies that will affect timely access to healthcare.",
"title": ""
},
{
"docid": "b7d20190bdb3ef25110b58d87d7e5bf8",
"text": "Field of soft robotics has been widely researched. Modularization of soft robots is one of the effort to expand the field. In this paper, we introduce a magnet connection for modularized soft units which were introduced in our previous research. The magnet connector was designed with off the shelf magnets. Thanks to the magnet connection, it was simpler and more intuitive than the connection method that we used in previous research. Connecting strength of the magnet connection and bending performance of a soft bending actuator assembled with the units were tested. Connecting strength and air leakage prevention of the connector was affordable in a range of actuating pneumatic pressure. We hope that this magnet connector enables modularized soft units being used as a daily item in the future.",
"title": ""
},
{
"docid": "5108dd1dba48ce0369568e30dd20ca21",
"text": "In this paper, we analyze neural network-based dialogue systems trained in an end-to-end manner using an updated version of the recent Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words1. This dataset is interesting because of its size, long context lengths, and technical nature; thus, it can be used to train large models directly from data with minimal feature engineering. We provide baselines in two different environments: one where models are trained to select the correct next response from a list of candidate responses, and one where models are trained to maximize the loglikelihood of a generated utterance conditioned on the context of the conversation. These are both evaluated on a recall task that we call next utterance classification (NUC), and using vector-based metrics that capture the topicality of the responses. We observe that current end-to-end models are 1. This work is an extension of a paper appearing in SIGDIAL (Lowe et al., 2015). This paper further includes results on generative dialogue models, more extensive evaluation of the retrieval models using vector-based generative metrics, and a qualitative examination of responses from the generative models and classification errors made by the Dual Encoder model. Experiments are performed on a new version of the corpus, the Ubuntu Dialogue Corpus v2, which is publicly available: https://github.com/rkadlec/ubuntu-ranking-dataset-creator. The early dataset has been updated to add features and fix bugs, which are detailed in Section 3. c ©2017 Ryan Lowe, Nissan Pow, Iulian Vlad Serban, Laurent Charlinn, Chia-Wei Liu and Joelle Pineau This is an open-access article distributed under the terms of a Creative Commons Attribution License (http ://creativecommons.org/licenses/by/3.0/). LOWE, POW, SERBAN, CHARLINN, LIU AND PINEAU unable to completely solve these tasks; thus, we provide a qualitative error analysis to determine the primary causes of error for end-to-end models evaluated on NUC, and examine sample utterances from the generative models. As a result of this analysis, we suggest some promising directions for future research on the Ubuntu Dialogue Corpus, which can also be applied to end-to-end dialogue systems in general.",
"title": ""
},
{
"docid": "070dea42ba7e8bd1201c8365423351f5",
"text": "This paper presents a comparative evaluation of metrics for the quantification of speech rhythm, comparing pairwise variability indices (nPVI-V and rPVI-C) and interval measures (DV, DC, %V), together with rate-normalised interval measures (VarcoV and VarcoC). First, we examined how well these metrics discriminated ‘‘stress-timed’’ English and Dutch and ‘‘syllable-timed’’ Spanish and French. Metrics of interval standard deviation such as DV and DC were strongly influenced by speech rate, but rate-normalised metrics of vocalic interval variation, VarcoV and nPVI-V, were shown to discriminate between hypothesised ‘‘rhythm classes’’, as did %V, an index of the relative duration of vocalic and consonantal intervals. Second, we applied these metrics to quantifying the influence of first language on second language rhythm, with the expectation that speakers switching ‘‘rhythm classes’’ should show rhythm scores different from both their native and target languages. VarcoV offered the most discriminative analysis in this part of the study, with %V also suggesting insights into the process of accommodation to second language rhythm. r 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e4a3065209c9dde50267358cbe6829b7",
"text": "OBJECTIVES\nWith the exponential increase in the number of articles published every year in the biomedical domain, there is a need to build automated systems to extract unknown information from the articles published. Text mining techniques enable the extraction of unknown knowledge from unstructured documents.\n\n\nMETHODS\nThis paper reviews text mining processes in detail and the software tools available to carry out text mining. It also reviews the roles and applications of text mining in the biomedical domain.\n\n\nRESULTS\nText mining processes, such as search and retrieval of documents, pre-processing of documents, natural language processing, methods for text clustering, and methods for text classification are described in detail.\n\n\nCONCLUSIONS\nText mining techniques can facilitate the mining of vast amounts of knowledge on a given topic from published biomedical research articles and draw meaningful conclusions that are not possible otherwise.",
"title": ""
},
{
"docid": "a20302dfa51ad50db7d67526f9390743",
"text": "Stochastic Gradient Descent (SGD) is a popular optimization method which has been applied to many important machine learning tasks such as Support Vector Machines and Deep Neural Networks. In order to parallelize SGD, minibatch training is often employed. The standard approach is to uniformly sample a minibatch at each step, which often leads to high variance. In this paper we propose a stratified sampling strategy, which divides the whole dataset into clusters with low within-cluster variance; we then take examples from these clusters using a stratified sampling technique. It is shown that the convergence rate can be significantly improved by the algorithm. Encouraging experimental results confirm the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "61e2d463abf710085ad3e26c8cd3d0a2",
"text": "Today, the Internet of Things (IoT) comprises vertically oriented platforms for things. Developers who want to use them need to negotiate access individually and adapt to the platform-specific API and information models. Having to perform these actions for each platform often outweighs the possible gains from adapting applications to multiple platforms. This fragmentation of the IoT and the missing interoperability result in high entry barriers for developers and prevent the emergence of broadly accepted IoT ecosystems. The BIG IoT (Bridging the Interoperability Gap of the IoT) project aims to ignite an IoT ecosystem as part of the European Platforms Initiative. As part of the project, researchers have devised an IoT ecosystem architecture. It employs five interoperability patterns that enable cross-platform interoperability and can help establish successful IoT ecosystems.",
"title": ""
},
{
"docid": "75b654084c7205b209d41a33b9bc03b9",
"text": "The aims of the study were to evaluate the per- and post-operative complications and outcomes after cystocele repair with transobturator mesh. A retrospective continuous series study was conducted over a period of 3 years. Clinical evaluation was up to 1 year with additional telephonic interview performed after 34 months on average. When stress urinary incontinence (SUI) was associated with the cystocele, it was treated with the same mesh. One hundred twenty-three patients were treated for cystocele. Per-operative complications occurred in six patients. After 1 year, erosion rate was 6.5%, and only three cystoceles recurred. After treatment of SUI with the same mesh, 87.7% restored continence. Overall patient’s satisfaction rate was 93.5%. Treatment of cystocele using transobturator four arms mesh appears to reduce the risk of recurrence at 1 year, along with high rate of patient’s satisfaction. The transobturator path of the prosthesis arms seems devoid of serious per- and post-operative risks and allows restoring continence when SUI is present.",
"title": ""
},
{
"docid": "1056326e07199296b63d1ea677e2f295",
"text": "BACKGROUND\nDepression is common and frequently undiagnosed among college students. Social networking sites are popular among college students and can include displayed depression references. The purpose of this study was to evaluate college students' Facebook disclosures that met DSM criteria for a depression symptom or a major depressive episode (MDE).\n\n\nMETHODS\nWe selected public Facebook profiles from sophomore and junior undergraduates and evaluated personally written text: \"status updates.\" We applied DSM criteria to 1-year status updates from each profile to determine prevalence of displayed depression symptoms and MDE criteria. Negative binomial regression analysis was used to model the association between depression disclosures and demographics or Facebook use characteristics.\n\n\nRESULTS\nTwo hundred profiles were evaluated, and profile owners were 43.5% female with a mean age of 20 years. Overall, 25% of profiles displayed depressive symptoms and 2.5% met criteria for MDE. Profile owners were more likely to reference depression, if they averaged at least one online response from their friends to a status update disclosing depressive symptoms (exp(B) = 2.1, P <.001), or if they used Facebook more frequently (P <.001).\n\n\nCONCLUSION\nCollege students commonly display symptoms consistent with depression on Facebook. Our findings suggest that those who receive online reinforcement from their friends are more likely to discuss their depressive symptoms publicly on Facebook. Given the frequency of depression symptom displays on public profiles, social networking sites could be an innovative avenue for combating stigma surrounding mental health conditions or for identifying students at risk for depression.",
"title": ""
},
{
"docid": "ea041a1df42906b0d5a3644ae8ba933b",
"text": "In recent years, program verifiers and interactive theorem provers have become more powerful and more suitable for verifying large programs or proofs. This has demonstrated the need for improving the user experience of these tools to increase productivity and to make them more accessible to nonexperts. This paper presents an integrated development environment for Dafny—a programming language, verifier, and proof assistant—that addresses issues present in most state-of-the-art verifiers: low responsiveness and lack of support for understanding non-obvious verification failures. The paper demonstrates several new features that move the state-of-the-art closer towards a verification environment that can provide verification feedback as the user types and can present more helpful information about the program or failed verifications in a demand-driven and unobtrusive way.",
"title": ""
},
{
"docid": "280688093cc5d39afc93d92e90351819",
"text": "Recent neural network sequence models with softmax classifiers have achieved their best language modeling performance only with very large hidden states and large vocabularies. Even then they struggle to predict rare or unseen words even if the context makes the prediction unambiguous. We introduce the pointer sentinel mixture architecture for neural sequence models which has the ability to either reproduce a word from the recent context or produce a word from a standard softmax classifier. Our pointer sentinelLSTM model achieves state of the art language modeling performance on the Penn Treebank (70.9 perplexity) while using far fewer parameters than a standard softmax LSTM. In order to evaluate how well language models can exploit longer contexts and deal with more realistic vocabularies and larger corpora we also introduce the freely available WikiText corpus.1",
"title": ""
},
{
"docid": "607cd26b9c51b5b52d15087d0e6662cb",
"text": "Pseudo-NMOS level-shifters consume large static current making them unsuitable for portable devices implemented with HV CMOS. Dynamic level-shifters help reduce power consumption. To reduce on-current to a minimum (sub-nanoamp), modifications are proposed to existing pseudo-NMOS and dynamic level-shifter circuits. A low power three transistor static level-shifter design with a resistive load is also presented.",
"title": ""
},
{
"docid": "5d88f5a18d3e4961eee6e9ed6db62817",
"text": "“Style transfer” among images has recently emerged as a very active research topic, fuelled by the power of convolution neural networks (CNNs), and has become fast a very popular technology in social media. This paper investigates the analogous problem in the audio domain: How to transfer the style of a reference audio signal to a target audio content? We propose a flexible framework for the task, which uses a sound texture model to extract statistics characterizing the reference audio style, followed by an optimization-based audio texture synthesis to modify the target content. In contrast to mainstream optimization-based visual transfer method, the proposed process is initialized by the target content instead of random noise and the optimized loss is only about texture, not structure. These differences proved key for audio style transfer in our experiments. In order to extract features of interest, we investigate different architectures, whether pre-trained on other tasks, as done in image style transfer, or engineered based on the human auditory system. Experimental results on different types of audio signal confirm the potential of the proposed approach.",
"title": ""
},
{
"docid": "0e6fd08318cf94ea683892d737ae645a",
"text": "We present simulations and demonstrate experimentally a new concept in winding a planar induction heater. The winding results in minimal ac magnetic field below the plane of the heater, while concentrating the flux above. Ferrites and other types of magnetic shielding are typically not required. The concept of a one-sided ac field can generalized to other geometries as well.",
"title": ""
},
{
"docid": "29df932ae4fad0b70b909c5c8f72dad3",
"text": "Recently, non-fixed camera-based free viewpoint sports video synthesis has become very popular. Camera calibration is an indispensable step in free viewpoint video synthesis, and the calibration has to be done frame by frame for a non-fixed camera. Thus, calibration speed is of great significance in real-time application. In this paper, a fast self-calibration method for a non-fixed camera is proposed to estimate the homography matrix between a camera image and a soccer field model. As far as we know, it is the first time to propose constructing feature vectors by analyzing crossing points of field lines in both camera image and field model. Therefore, different from previous methods that evaluate all the possible homography matrices and select the best one, our proposed method only evaluates a small number of homography matrices based on the matching result of the constructed feature vectors. Experimental results show that the proposed method is much faster than other methods with only a slight loss of calibration accuracy that is negligible in final synthesized videos.",
"title": ""
},
{
"docid": "f9692d0410cb97fd9c2ecf6f7b043b9f",
"text": "This paper develops and analyzes four energy scenarios for California that are both exploratory and quantitative. The businessas-usual scenario represents a pathway guided by outcomes and expectations emerging from California’s energy crisis. Three alternative scenarios represent contexts where clean energy plays a greater role in California’s energy system: Split Public is driven by local and individual activities; Golden State gives importance to integrated state planning; Patriotic Energy represents a national drive to increase energy independence. Future energy consumption, composition of electricity generation, energy diversity, and greenhouse gas emissions are analyzed for each scenario through 2035. Energy savings, renewable energy, and transportation activities are identified as promising opportunities for achieving alternative energy pathways in California. A combined approach that brings together individual and community activities with state and national policies leads to the largest energy savings, increases in energy diversity, and reductions in greenhouse gas emissions. Critical challenges in California’s energy pathway over the next decades identified by the scenario analysis include dominance of the transportation sector, dependence on fossil fuels, emissions of greenhouse gases, accounting for electricity imports, and diversity of the electricity sector. The paper concludes with a set of policy lessons revealed from the California energy scenarios. r 2003 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
c7068cc9b5cef491a342fd7731dd8793
|
Scale-invariant learning and convolutional networks
|
[
{
"docid": "bcb756857adef42264eab0f1361f8be7",
"text": "The problem of multi-class boosting is considered. A new fra mework, based on multi-dimensional codewords and predictors is introduced . The optimal set of codewords is derived, and a margin enforcing loss proposed. The resulting risk is minimized by gradient descent on a multidimensional functi onal space. Two algorithms are proposed: 1) CD-MCBoost, based on coordinate des cent, updates one predictor component at a time, 2) GD-MCBoost, based on gradi ent descent, updates all components jointly. The algorithms differ in the w ak learners that they support but are both shown to be 1) Bayes consistent, 2) margi n enforcing, and 3) convergent to the global minimum of the risk. They also red uce to AdaBoost when there are only two classes. Experiments show that both m et ods outperform previous multiclass boosting approaches on a number of data sets.",
"title": ""
}
] |
[
{
"docid": "69cbe1970732eeb5546decc250941179",
"text": "There is confusion and misunderstanding about the concepts of knowledge translation, knowledge transfer, knowledge exchange, research utilization, implementation, diffusion, and dissemination. We review the terms and definitions used to describe the concept of moving knowledge into action. We also offer a conceptual framework for thinking about the process and integrate the roles of knowledge creation and knowledge application. The implications of knowledge translation for continuing education in the health professions include the need to base continuing education on the best available knowledge, the use of educational and other transfer strategies that are known to be effective, and the value of learning about planned-action theories to be better able to understand and influence change in practice settings.",
"title": ""
},
{
"docid": "0df26f2f40e052cde72048b7538548c3",
"text": "Keshif is an open-source, web-based data exploration environment that enables data analytics novices to create effective visual and interactive dashboards and explore relations with minimal learning time, and data analytics experts to explore tabular data in multiple perspectives rapidly with minimal setup time. In this paper, we present a high-level overview of the exploratory features and design characteristics of Keshif, as well as its API and a selection of its implementation specifics. We conclude with a discussion of its use as an open-source project.",
"title": ""
},
{
"docid": "77a42190d5acf347920c11d3a3186f4f",
"text": "Changes in retinal vessel diameter are an important sign of diseases such as hypertension, arteriosclerosis and diabetes mellitus. Obtaining precise measurements of vascular widths is a critical and demanding process in automated retinal image analysis as the typical vessel is only a few pixels wide. This paper presents an algorithm to measure the vessel diameter to subpixel accuracy. The diameter measurement is based on a two-dimensional difference of Gaussian model, which is optimized to fit a two-dimensional intensity vessel segment. The performance of the method is evaluated against Brinchmann-Hansen's half height, Gregson's rectangular profile and Zhou's Gaussian model. Results from 100 sample profiles show that the presented algorithm is over 30% more precise than the compared techniques and is accurate to a third of a pixel.",
"title": ""
},
{
"docid": "2a600bc7d6e35335e1514597aa4c7a79",
"text": "Since the 2000s, Business Process Management (BPM) has evolved into a comprehensively studied discipline that goes beyond the boundaries of particular business processes. By also affecting enterprise-wide capabilities (such as an organisational culture and structure that support a processoriented way of working), BPM can now correctly be called Business Process Orientation (BPO). Meanwhile, various maturity models have been developed to help organisations adopt a processoriented way of working based on step-by-step best practices. The present article reports on a case study in which the process portfolio of an organisation is assessed by different maturity models that each cover a different set of process-oriented capabilities. The purpose is to reflect on how business process maturity is currently measured, and to explore relevant considerations for practitioners, scholars and maturity model designers. Therefore, we investigate a possible difference in maturity scores that are obtained based on model-related characteristics (e.g. capabilities, scale and calculation technique) and respondent-related characteristics (e.g. organisational function). For instance, based on an experimental design, the original maturity scores are recalculated for different maturity scales and different calculation techniques. Follow-up research can broaden our experiment from multiple maturity models in a single case to multiple maturity models in multiple cases.",
"title": ""
},
{
"docid": "eda884b2f55f49bb6bfbe2c8bbc35be5",
"text": "Subspace clustering aims at detecting clusters in any subspace projection of a high dimensional space. As the number of possible subspace projections is exponential in the number of dimensions, the result is often tremendously large. Recent approaches fail to reduce results to relevant subspace clusters. Their results are typically highly redundant, i.e. many clusters are detected multiple times in several projections. In this work, we propose a novel model for relevant subspace clustering (RESCU). We present a global optimization which detects the most interesting non-redundant subspace clusters. We prove that computation of this model is NP-hard. For RESCU, we propose an approximative solution that shows high accuracy with respect to our relevance model. Thorough experiments on synthetic and real world data show that RESCU successfully reduces the result to manageable sizes. It reliably achieves top clustering quality while competing approaches show greatly varying performance.",
"title": ""
},
{
"docid": "328adffce36e79e5cdcdd3db75e2b35c",
"text": "Traditional link prediction techniques primarily focus on the effect of potential linkages on the local network neighborhood or the paths between nodes. In this paper, we study the problem of link prediction in networks where instances can simultaneously belong to multiple communities, engendering different types of collaborations. Links in these networks arise from heterogeneous causes, limiting the performance of predictors that treat all links homogeneously. To solve this problem, we introduce a new link prediction framework, Link Prediction using Social Features (LPSF), which weights the network using a similarity function based on features extracted from patterns of prominent interactions across the network.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "4e6ff17d33aceaa63ec156fc90aed2ce",
"text": "Objective:\nThe aim of the present study was to translate and cross-culturally adapt the Functional Status Score for the intensive care unit (FSS-ICU) into Brazilian Portuguese.\n\n\nMethods:\nThis study consisted of the following steps: translation (performed by two independent translators), synthesis of the initial translation, back-translation (by two independent translators who were unaware of the original FSS-ICU), and testing to evaluate the target audience's understanding. An Expert Committee supervised all steps and was responsible for the modifications made throughout the process and the final translated version.\n\n\nResults:\nThe testing phase included two experienced physiotherapists who assessed a total of 30 critical care patients (mean FSS-ICU score = 25 ± 6). As the physiotherapists did not report any uncertainties or problems with interpretation affecting their performance, no additional adjustments were made to the Brazilian Portuguese version after the testing phase. Good interobserver reliability between the two assessors was obtained for each of the 5 FSS-ICU tasks and for the total FSS-ICU score (intraclass correlation coefficients ranged from 0.88 to 0.91).\n\n\nConclusion:\nThe adapted version of the FSS-ICU in Brazilian Portuguese was easy to understand and apply in an intensive care unit environment.",
"title": ""
},
{
"docid": "7b4567b9f32795b267f2fb2d39bbee51",
"text": "BACKGROUND\nWearable and mobile devices that capture multimodal data have the potential to identify risk factors for high stress and poor mental health and to provide information to improve health and well-being.\n\n\nOBJECTIVE\nWe developed new tools that provide objective physiological and behavioral measures using wearable sensors and mobile phones, together with methods that improve their data integrity. The aim of this study was to examine, using machine learning, how accurately these measures could identify conditions of self-reported high stress and poor mental health and which of the underlying modalities and measures were most accurate in identifying those conditions.\n\n\nMETHODS\nWe designed and conducted the 1-month SNAPSHOT study that investigated how daily behaviors and social networks influence self-reported stress, mood, and other health or well-being-related factors. We collected over 145,000 hours of data from 201 college students (age: 18-25 years, male:female=1.8:1) at one university, all recruited within self-identified social groups. Each student filled out standardized pre- and postquestionnaires on stress and mental health; during the month, each student completed twice-daily electronic diaries (e-diaries), wore two wrist-based sensors that recorded continuous physical activity and autonomic physiology, and installed an app on their mobile phone that recorded phone usage and geolocation patterns. We developed tools to make data collection more efficient, including data-check systems for sensor and mobile phone data and an e-diary administrative module for study investigators to locate possible errors in the e-diaries and communicate with participants to correct their entries promptly, which reduced the time taken to clean e-diary data by 69%. We constructed features and applied machine learning to the multimodal data to identify factors associated with self-reported poststudy stress and mental health, including behaviors that can be possibly modified by the individual to improve these measures.\n\n\nRESULTS\nWe identified the physiological sensor, phone, mobility, and modifiable behavior features that were best predictors for stress and mental health classification. In general, wearable sensor features showed better classification performance than mobile phone or modifiable behavior features. Wearable sensor features, including skin conductance and temperature, reached 78.3% (148/189) accuracy for classifying students into high or low stress groups and 87% (41/47) accuracy for classifying high or low mental health groups. Modifiable behavior features, including number of naps, studying duration, calls, mobility patterns, and phone-screen-on time, reached 73.5% (139/189) accuracy for stress classification and 79% (37/47) accuracy for mental health classification.\n\n\nCONCLUSIONS\nNew semiautomated tools improved the efficiency of long-term ambulatory data collection from wearable and mobile devices. Applying machine learning to the resulting data revealed a set of both objective features and modifiable behavioral features that could classify self-reported high or low stress and mental health groups in a college student population better than previous studies and showed new insights into digital phenotyping.",
"title": ""
},
{
"docid": "6fd84345b0399a0d59d80fb40829eee2",
"text": "This paper describes a method based on a sequenceto-sequence learning (Seq2Seq) with attention and context preservation mechanism for voice conversion (VC) tasks. Seq2Seq has been outstanding at numerous tasks involving sequence modeling such as speech synthesis and recognition, machine translation, and image captioning. In contrast to current VC techniques, our method 1) stabilizes and accelerates the training procedure by considering guided attention and proposed context preservation losses, 2) allows not only spectral envelopes but also fundamental frequency contours and durations of speech to be converted, 3) requires no context information such as phoneme labels, and 4) requires no time-aligned source and target speech data in advance. In our experiment, the proposed VC framework can be trained in only one day, using only one GPU of an NVIDIA Tesla K80, while the quality of the synthesized speech is higher than that of speech converted by Gaussian mixture model-based VC and is comparable to that of speech generated by recurrent neural network-based text-to-speech synthesis, which can be regarded as an upper limit on VC performance.",
"title": ""
},
{
"docid": "ef787cfc1b00c9d05ec9293ff802f172",
"text": "High Definition (HD) maps play an important role in modern traffic scenes. However, the development of HD maps coverage grows slowly because of the cost limitation. To efficiently model HD maps, we proposed a convolutional neural network with a novel prediction layer and a zoom module, called LineNet. It is designed for state-of-the-art lane detection in an unordered crowdsourced image dataset. And we introduced TTLane, a dataset for efficient lane detection in urban road modeling applications. Combining LineNet and TTLane, we proposed a pipeline to model HD maps with crowdsourced data for the first time. And the maps can be constructed precisely even with inaccurate crowdsourced data.",
"title": ""
},
{
"docid": "2f9a5a9b31830db2708f63daa1d182ea",
"text": "PURPOSE\nTo report a case of autoenucleation associated with contralateral field defect.\n\n\nDESIGN\nObservational case report.\n\n\nMETHODS\nA 36-year-old man was referred to the emergency ward with his right eye attached to a fork. His history revealed drug abuse with ecstasy.\n\n\nRESULTS\nVisual field examination revealed a temporal hemianopia on the left eye. There was no change in the visual field defect after intravenous steroid, two months after initial presentation.\n\n\nCONCLUSIONS\nContralateral visual field defect may be associated with autoenucleation. A visual field test is recommended in all cases with traumatic enucleation.",
"title": ""
},
{
"docid": "ad33994b26dad74e6983c860c0986504",
"text": "Accurate software effort estimation has been a challenge for many software practitioners and project managers. Underestimation leads to disruption in the project's estimated cost and delivery. On the other hand, overestimation causes outbidding and financial losses in business. Many software estimation models exist; however, none have been proven to be the best in all situations. In this paper, a decision tree forest (DTF) model is compared to a traditional decision tree (DT) model, as well as a multiple linear regression model (MLR). The evaluation was conducted using ISBSG and Desharnais industrial datasets. Results show that the DTF model is competitive and can be used as an alternative in software effort prediction.",
"title": ""
},
{
"docid": "d685e84f8ddc55f2391a9feffc88889f",
"text": "Little is known about how Agile developers and UX designers integrate their work on a day-to-day basis. While accounts in the literature attempt to integrate Agile development and UX design by combining their processes and tools, the contradicting claims found in the accounts complicate extracting advice from such accounts. This paper reports on three ethnographically-informed field studies of the day-today practice of developers and designers in organisational settings. Our results show that integration is achieved in practice through (1) mutual awareness, (2) expectations about acceptable behaviour, (3) negotiating progress and (4) engaging with each other. Successful integration relies on practices that support and maintain these four aspects in the day-to-day work of developers and designers.",
"title": ""
},
{
"docid": "66f1279585c6d1a0a388faa91bd25c62",
"text": "Our research project is to design a readout IC for an ultrasonic transducer consisting of a matrix of more than 2000 elements. The IC and the matrix transducer will be put into the tip of a transesophageal probe for 3D echocardiography. A key building block of the readout IC, a programmable analog delay line, is presented in this paper. It is based on the time-interleaved sample-and-hold (S/H) principle. Compared with conventional analog delay lines, this design is simple, accurate and flexible. A prototype has been fabricated in a standard 0.35µm CMOS technology. Measurement results showing its functionality are presented.",
"title": ""
},
{
"docid": "529a1f32def5f2793658ea4329dbe8d3",
"text": "OBJECTIVE\nTo compare the performance of a power wheelchair with stair-climbing capability (TopChair) and a conventional power wheelchair (Storm3).\n\n\nDESIGN\nA single-center, open-label study.\n\n\nSETTING\nA physical medicine and rehabilitation hospital.\n\n\nPARTICIPANTS\nPatients (N=25) who required power wheelchairs because of severe impairments affecting the upper and lower limbs.\n\n\nINTERVENTIONS\nIndoor and outdoor driving trials with both devices. Curb-clearing and stair-climbing with TopChair.\n\n\nMAIN OUTCOME MEASURES\nTrial duration and Quebec User Evaluation of Satisfaction with Assistive Technology (QUEST) tool; number of failures during driving trials and ability to climb curbs and stairs.\n\n\nRESULTS\nAll 25 participants successfully completed the outdoor and indoor trials with both wheelchairs. Although differences in times to trial completion were statistically significant, they were less than 10%. QUEST scores were significantly better with the Storm3 than the TopChair for weight (P=.001), dimension (P=.006), and effectiveness (P=.04). Of the 25 participants, 23 cleared a 20-cm curb without help, and 20 climbed up and down 6 steps. Most participants felt these specific capabilities of the TopChair--for example, curb clearing and stair climbing-were easy to use (22/25 for curb, 21/25 for stairs) and helpful (24/25 and 23/25). A few participants felt insecure (4/25 and 6/25, respectively).\n\n\nCONCLUSIONS\nThe TopChair is a promising mobility device that enables stair and curb climbing and warrants further study.",
"title": ""
},
{
"docid": "3b78988b74c2e42827c9e75e37d2223e",
"text": "This paper addresses how to construct a RBAC-compatible attribute-based encryption (ABE) for secure cloud storage, which provides a user-friendly and easy-to-manage security mechanism without user intervention. Similar to role hierarchy in RBAC, attribute lattice introduced into ABE is used to define a seniority relation among all values of an attribute, whereby a user holding the senior attribute values acquires permissions of their juniors. Based on these notations, we present a new ABE scheme called Attribute-Based Encryption with Attribute Lattice (ABE-AL) that provides an efficient approach to implement comparison operations between attribute values on a poset derived from attribute lattice. By using bilinear groups of composite order, we propose a practical construction of ABE-AL based on forward and backward derivation functions. Compared with prior solutions, our scheme offers a compact policy representation solution, which can significantly reduce the size of privatekeys and ciphertexts. Furthermore, our solution provides a richer expressive power of access policies to facilitate flexible access control for ABE scheme.",
"title": ""
},
{
"docid": "bf1597a417aee9b080f738c7ef2bdffe",
"text": "BACKGROUND\nThe increasing number of open-access ontologies and their key role in several applications such as decision-support systems highlight the importance of their validation. Human expertise is crucial for the validation of ontologies from a domain point-of-view. However, the growing number of ontologies and their fast evolution over time make manual validation challenging.\n\n\nMETHODS\nWe propose a novel semi-automatic approach based on the generation of natural language (NL) questions to support the validation of ontologies and their evolution. The proposed approach includes the automatic generation, factorization and ordering of NL questions from medical ontologies. The final validation and correction is performed by submitting these questions to domain experts and automatically analyzing their feedback. We also propose a second approach for the validation of mappings impacted by ontology changes. The method exploits the context of the changes to propose correction alternatives presented as Multiple Choice Questions.\n\n\nRESULTS\nThis research provides a question optimization strategy to maximize the validation of ontology entities with a reduced number of questions. We evaluate our approach for the validation of three medical ontologies. We also evaluate the feasibility and efficiency of our mappings validation approach in the context of ontology evolution. These experiments are performed with different versions of SNOMED-CT and ICD9.\n\n\nCONCLUSIONS\nThe obtained experimental results suggest the feasibility and adequacy of our approach to support the validation of interconnected and evolving ontologies. Results also suggest that taking into account RDFS and OWL entailment helps reducing the number of questions and validation time. The application of our approach to validate mapping evolution also shows the difficulty of adapting mapping evolution over time and highlights the importance of semi-automatic validation.",
"title": ""
},
{
"docid": "03277ef81159827a097c73cd24f8b5c0",
"text": "It is generally accepted that there is something special about reasoning by using mental images. The question of how it is special, however, has never been satisfactorily spelled out, despite more than thirty years of research in the post-behaviorist tradition. This article considers some of the general motivation for the assumption that entertaining mental images involves inspecting a picture-like object. It sets out a distinction between phenomena attributable to the nature of mind to what is called the cognitive architecture, and ones that are attributable to tacit knowledge used to simulate what would happen in a visual situation. With this distinction in mind, the paper then considers in detail the widely held assumption that in some important sense images are spatially displayed or are depictive, and that examining images uses the same mechanisms that are deployed in visual perception. I argue that the assumption of the spatial or depictive nature of images is only explanatory if taken literally, as a claim about how images are physically instantiated in the brain, and that the literal view fails for a number of empirical reasons--for example, because of the cognitive penetrability of the phenomena cited in its favor. Similarly, while it is arguably the case that imagery and vision involve some of the same mechanisms, this tells us very little about the nature of mental imagery and does not support claims about the pictorial nature of mental images. Finally, I consider whether recent neuroscience evidence clarifies the debate over the nature of mental images. I claim that when such questions as whether images are depictive or spatial are formulated more clearly, the evidence does not provide support for the picture-theory over a symbol-structure theory of mental imagery. Even if all the empirical claims were true, they do not warrant the conclusion that many people have drawn from them: that mental images are depictive or are displayed in some (possibly cortical) space. Such a conclusion is incompatible with what is known about how images function in thought. We are then left with the provisional counterintuitive conclusion that the available evidence does not support rejection of what I call the \"null hypothesis\"; namely, that reasoning with mental images involves the same form of representation and the same processes as that of reasoning in general, except that the content or subject matter of thoughts experienced as images includes information about how things would look.",
"title": ""
},
{
"docid": "2a45fb350731967591487e0b6c9a820c",
"text": "In this chapter, we report the first experimental explorations of reinforcement learning in Tourette syndrome, realized by our team in the last few years. This report will be preceded by an introduction aimed to provide the reader with the state of the art of the knowledge concerning the neural bases of reinforcement learning at the moment of these studies and the scientific rationale beyond them. In short, reinforcement learning is learning by trial and error to maximize rewards and minimize punishments. This decision-making and learning process implicates the dopaminergic system projecting to the frontal cortex-basal ganglia circuits. A large body of evidence suggests that the dysfunction of the same neural systems is implicated in the pathophysiology of Tourette syndrome. Our results show that Tourette condition, as well as the most common pharmacological treatments (dopamine antagonists), affects reinforcement learning performance in these patients. Specifically, the results suggest a deficit in negative reinforcement learning, possibly underpinned by a functional hyperdopaminergia, which could explain the persistence of tics, despite their evident inadaptive (negative) value. This idea, together with the implications of these results in Tourette therapy and the future perspectives, is discussed in Section 4 of this chapter.",
"title": ""
}
] |
scidocsrr
|
272287a6547a2a9834e3aea9b0b75009
|
Efficient Privacy-Preserving Ciphertext-Policy Attribute Based-Encryption and Broadcast Encryption
|
[
{
"docid": "4ed64bba175a8c1ff5a6c277c62fa9ac",
"text": "In a ciphertext policy attribute based encryption system, a user’s private key is associated with a set of attributes (describing the user) and an encrypted ciphertext will specify an access policy over attributes. A user will be able to decrypt if and only if his attributes satisfy the ciphertext’s policy. In this work, we present the first construction of a ciphertext-policy attribute based encryption scheme having a security proof based on a number theoretic assumption and supporting advanced access structures. Previous CP-ABE systems could either support only very limited access structures or had a proof of security only in the generic group model. Our construction can support access structures which can be represented by a bounded size access tree with threshold gates as its nodes. The bound on the size of the access trees is chosen at the time of the system setup. Our security proof is based on the standard Decisional Bilinear Diffie-Hellman assumption.",
"title": ""
}
] |
[
{
"docid": "a16ced3651034a33a926fe20b9093af8",
"text": "Most existing automated debugging techniques focus on reducing the amount of code to be inspected and tend to ignore an important component of software failures: the inputs that cause the failure to manifest. In this paper, we present a new technique based on dynamic tainting for automatically identifying subsets of a program's inputs that are relevant to a failure. The technique (1) marks program inputs when they enter the application, (2) tracks them as they propagate during execution, and (3) identifies, for an observed failure, the subset of inputs that are potentially relevant for debugging that failure. To investigate feasibility and usefulness of our technique, we created a prototype tool, PENUMBRA, and used it to evaluate our technique on several failures in real programs. Our results are promising, as they show that PENUMBRA can point developers to inputs that are actually relevant for investigating a failure and can be more practical than existing alternative approaches.",
"title": ""
},
{
"docid": "b163fb3faa31f6db35599d32d7946523",
"text": "Humans learn how to behave directly through environmental experience and indirectly through rules and instructions. Behavior analytic research has shown that instructions can control behavior, even when such behavior leads to sub-optimal outcomes (Hayes, S. (Ed.). 1989. Rule-governed behavior: cognition, contingencies, and instructional control. Plenum Press.). Here we examine the control of behavior through instructions in a reinforcement learning task known to depend on striatal dopaminergic function. Participants selected between probabilistically reinforced stimuli, and were (incorrectly) told that a specific stimulus had the highest (or lowest) reinforcement probability. Despite experience to the contrary, instructions drove choice behavior. We present neural network simulations that capture the interactions between instruction-driven and reinforcement-driven behavior via two potential neural circuits: one in which the striatum is inaccurately trained by instruction representations coming from prefrontal cortex/hippocampus (PFC/HC), and another in which the striatum learns the environmentally based reinforcement contingencies, but is \"overridden\" at decision output. Both models capture the core behavioral phenomena but, because they differ fundamentally on what is learned, make distinct predictions for subsequent behavioral and neuroimaging experiments. Finally, we attempt to distinguish between the proposed computational mechanisms governing instructed behavior by fitting a series of abstract \"Q-learning\" and Bayesian models to subject data. The best-fitting model supports one of the neural models, suggesting the existence of a \"confirmation bias\" in which the PFC/HC system trains the reinforcement system by amplifying outcomes that are consistent with instructions while diminishing inconsistent outcomes.",
"title": ""
},
{
"docid": "7931fa9541efa9a006a030655c59c5f4",
"text": "Natural language generation (NLG) is a critical component in a spoken dialogue system. This paper presents a Recurrent Neural Network based Encoder-Decoder architecture, in which an LSTM-based decoder is introduced to select, aggregate semantic elements produced by an attention mechanism over the input elements, and to produce the required utterances. The proposed generator can be jointly trained both sentence planning and surface realization to produce natural language sentences. The proposed model was extensively evaluated on four different NLG datasets. The experimental results showed that the proposed generators not only consistently outperform the previous methods across all the NLG domains but also show an ability to generalize from a new, unseen domain and learn from multi-domain datasets.",
"title": ""
},
{
"docid": "618496f6e0b1da51e1e2c81d72c4a6f1",
"text": "Paid employment within clinical setting, such as externships for undergraduate student, are used locally and globally to better prepare and retain new graduates for actual practice and facilitate their transition into becoming registered nurses. However, the influence of paid employment on the post-registration experience of such nurses remains unclear. Through the use of narrative inquiry, this study explores how the experience of pre-registration paid employment shapes the post-registration experience of newly graduated registered nurses. Repeated individual interviews were conducted with 18 new graduates, and focus group interviews were conducted with 11 preceptors and 10 stakeholders recruited from 8 public hospitals in Hong Kong. The data were subjected to narrative and paradigmatic analyses. Taken-for-granted assumptions about the knowledge and performance of graduates who worked in the same unit for their undergraduate paid work experience were uncovered. These assumptions affected the quantity and quality of support and time that other senior nurses provided to these graduates for their further development into competent nurses and patient advocates, which could have implications for patient safety. It is our hope that this narrative inquiry will heighten awareness of taken-for-granted assumptions, so as to help graduates transition to their new role and provide quality patient care.",
"title": ""
},
{
"docid": "9cf48e5fa2cee6350ac31f236696f717",
"text": "Komatiites are rare ultramafic lavas that were produced most commonly during the Archean and Early Proterozoic and less frequently in the Phanerozoic. These magmas provide a record of the thermal and chemical characteristics of the upper mantle through time. The most widely cited interpretation is that komatiites were produced in a plume environment and record high mantle temperatures and deep melting pressures. The decline in their abundance from the Archean to the Phanerozoic has been interpreted as primary evidence for secular cooling (up to 500‡C) of the mantle. In the last decade new evidence from petrology, geochemistry and field investigations has reopened the question of the conditions of mantle melting preserved by komatiites. An alternative proposal has been rekindled: that komatiites are produced by hydrous melting at shallow mantle depths in a subduction environment. This alternative interpretation predicts that the Archean mantle was only slightly (V100‡C) hotter than at present and implicates subduction as a process that operated in the Archean. Many thermal evolution and chemical differentiation models of the young Earth use the plume origin of komatiites as a central theme in their model. Therefore, this controversy over the mechanism of komatiite generation has the potential to modify widely accepted views of the Archean Earth and its subsequent evolution. This paper briefly reviews some of the pros and cons of the plume and subduction zone models and recounts other hypotheses that have been proposed for komatiites. We suggest critical tests that will improve our understanding of komatiites and allow us to better integrate the story recorded in komatiites into our view of early Earth evolution. 6 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "6cd317113158241a98517ad5a8247174",
"text": "Feature Oriented Programming (FOP) is an emerging paradigmfor application synthesis, analysis, and optimization. Atarget application is specified declaratively as a set of features,like many consumer products (e.g., personal computers,automobiles). FOP technology translates suchdeclarative specifications into efficient programs.",
"title": ""
},
{
"docid": "e984ca3539c2ea097885771e52bdc131",
"text": "This study proposes and tests a novel theoretical mechanism to explain increased selfdisclosure intimacy in text-based computer-mediated communication (CMC) versus face-to-face (FtF) interactions. On the basis of joint effects of perception intensification processes in CMC and the disclosure reciprocity norm, the authors predict a perceptionbehavior intensification effect, according to which people perceive partners’ initial disclosures as more intimate in CMC than FtF and, consequently, reciprocate with more intimate disclosures of their own. An experiment compares disclosure reciprocity in textbased CMC and FtF conversations, in which participants interacted with a confederate who made either intimate or nonintimate disclosures across the two communication media. The utterances generated by the participants are coded for disclosure frequency and intimacy. Consistent with the proposed perception-behavior intensification effect, CMC participants perceive the confederate’s disclosures as more intimate, and, importantly, reciprocate with more intimate disclosures than FtF participants do.",
"title": ""
},
{
"docid": "535934dc80c666e0d10651f024560d12",
"text": "The following individuals read and discussed the thesis submitted by student Mindy Elizabeth Bennett, and they also evaluated her presentation and response to questions during the final oral examination. They found that the student passed the final oral examination, and that the thesis was satisfactory for a master's degree and ready for any final modifications that they explicitly required. iii ACKNOWLEDGEMENTS During my time of study at Boise State University, I have received an enormous amount of academic support and guidance from a number of different individuals. I would like to take this opportunity to thank everyone who has been instrumental in the completion of this degree. Without the continued support and guidance of these individuals, this accomplishment would not have been possible. I would also like to thank the following individuals for generously giving their time to provide me with the help and support needed to complete this study. Without them, the completion of this study would not have been possible. Breast hypertrophy is a common medical condition whose morbidity has increased over recent decades. Symptoms of breast hypertrophy often include musculoskeletal pain in the neck, back and shoulders, and numerous psychosocial health burdens. To date, reduction mammaplasty (RM) is the only treatment shown to significantly reduce the severity of the symptoms associated with breast hypertrophy. However, due to a lack of scientific evidence in the medical literature justifying the medical necessity of RM, insurance companies often deny requests for coverage of this procedure. Therefore, the purpose of this study is to investigate biomechanical differences in the upper body of women with larger breast sizes in order to provide scientific evidence of the musculoskeletal burdens of breast hypertrophy to the medical community Twenty-two female subjects (average age 25.90, ± 5.47 years) who had never undergone or been approved for breast augmentation surgery, were recruited to participate in this study. Kinematic data of the head, thorax, pelvis and scapula was collected during static trials and during each of four different tasks of daily living. Surface electromyography (sEMG) data from the Midcervical (C-4) Paraspinal, Upper Trapezius, Lower Trapezius, Serratus Anterior, and Erector Spinae muscles were recorded in the same activities. Maximum voluntary contractions (MVC) were used to normalize the sEMG data, and %MVC during each task in the protocol was analyzed. Kinematic data from the tasks of daily living were normalized to average static posture data for each subject. Subjects were …",
"title": ""
},
{
"docid": "5398b76e55bce3c8e2c1cd89403b8bad",
"text": "To cite: He A, Kwatra SG, Kazi N, et al. BMJ Case Rep Published online: [please include Day Month Year] doi:10.1136/bcr-2016215335 DESCRIPTION A woman aged 45 years presented for evaluation of skin lesions. She reported an 8–9-year history of occasionally tender, waxing-and-waning skin nodules refractory to dapsone, prednisone and methotrexate. Examination revealed multiple indurated subcutaneous nodules distributed on the upper extremities, with scattered patches of lipoatrophy in areas of nodule regression (figure 1). Her medical history was unremarkable; CBC and CMP were within normal limits, with no history of radiotherapy or evidence of internal organ involvement. She had a positive ANA titre (1:160, speckled), but negative anti-dsDNA, anti-Smith, anti-Ro and anti-La antibodies. Differential diagnosis included erythema nodosum (EN), erythema induratum of Bazin (EIB), lupus profundus (LP) and cutaneous lymphoma. Initial wedge biopsy in 2008 disclosed a predominantly lobular panniculitic process with some septal involvement (figure 2A). Broad zones of necrosis were present (figure 2B). The infiltrate consisted of a pleomorphic population of lymphocytes with occasional larger atypical lymphocytes (figure 2C). There were foci of adipocyte rimming by the atypical lymphocytes (figure 2C). Immunophenotyping revealed predominance of CD3+ T cells with some CD20+ B-cell aggregates. The atypical cells stained CD4 and CD8 in approximately equal ratios. TIA-1 was positive in many of the atypical cells but not prominently enough to render a diagnosis of cytotoxic T-cell lymphoma. T-cell receptor PCR studies showed polyclonality. Subsequent biopsies performed annually after treatment with prednisone in 2008 and 2010, dapsone in 2009 and methotrexate in 2012 showed very similar pathological and molecular features. Adipocyte rimming and TCR polyclonality persisted. EN is characterised by subcutaneous nodules on the lower extremities in association with elevated erythrocyte sedimentation rate (ESR) and C reactive protein (CRP), influenza-like prodrome preceding nodule formation and self-limiting course. Histologically, EN shows a mostly septal panniculitis with radial granulomas. EN was ruled out on the basis of normal ESR (6) and CRP (<0.1), chronic relapsing course and predominantly lobular panniculitis process histologically. EIB typically presents with violaceous nodules located on the posterior lower extremities, with arms rarely affected, of patients with a history of tuberculosis (TB). Histologically, EIB shows granulomatous inflammation with focal necrosis, vasculitis and septal fibrosis. Our patient had no evidence or history of TB infection and presented with nodules of a different clinical morphology. Ultimately, this constellation of histological and immunophenotypic findings showed an atypical panniculitic T-lymphocytic infiltrate. Although the lesion showed a lobular panniculitis with features that could be seen in subcutaneous panniculitis-like T-cell lymphoma (SPTCL), the presence of plasma cells, absence of CD8 and TIA restriction and T-cell polyclonality did not definitively support that",
"title": ""
},
{
"docid": "617fa45a68d607a4cb169b1446aa94bd",
"text": "The Draganflyer is a radio-controlled helicopter. It is powered by 4 rotors and is capable of motion in air in 6 degrees of freedom and of stable hovering. For flying it requires a high degree of skill, with the operator continually making small adjustments. In this paper, we do a theoretical analysis of the dynamics of the Draganflyer in order to develop a model of it from which we can develop a computer control system for stable hovering and indoor flight.",
"title": ""
},
{
"docid": "7e682f98ee6323cd257fda07504cba20",
"text": "We present a method for automated segmentation of the vasculature in retinal images. The method produces segmentations by classifying each image pixel as vessel or nonvessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and two-dimensional Gabor wavelet transform responses taken at multiple scales. The Gabor wavelet is capable of tuning to specific frequencies, thus allowing noise filtering and vessel enhancement in a single step. We use a Bayesian classifier with class-conditional probability density functions (likelihoods) described as Gaussian mixtures, yielding a fast classification, while being able to model complex decision surfaces. The probability distributions are estimated based on a training set of labeled pixels obtained from manual segmentations. The method's performance is evaluated on publicly available DRIVE (Staal et al.,2004) and STARE (Hoover et al.,2000) databases of manually labeled images. On the DRIVE database, it achieves an area under the receiver operating characteristic curve of 0.9614, being slightly superior than that presented by state-of-the-art approaches. We are making our implementation available as open source MATLAB scripts for researchers interested in implementation details, evaluation, or development of methods",
"title": ""
},
{
"docid": "5809c27155986612b0e4a9ef48b3b930",
"text": "Using the same technologies for both work and private life is an intensifying phenomenon. Mostly driven by the availability of consumer IT in the marketplace, individuals—more often than not—are tempted to use privately-owned IT rather than enterprise IT in order to get their job done. However, this dual-use of technologies comes at a price. It intensifies the blurring of the boundaries between work and private life—a development in stark contrast to the widely spread desire of employees to segment more clearly between their two lives. If employees cannot follow their segmentation preference, it is proposed that this misfit will result in work-to-life conflict (WtLC). This paper investigates the relationship between organizational encouragement for dual use and WtLC. Via a quantitative survey, we find a significant relationship between the two concepts. In line with boundary theory, the effect is stronger for people that strive for work-life segmentation.",
"title": ""
},
{
"docid": "b72e6a63681b94a001460eb80dece2e5",
"text": "Fully integrated CMOS power amplifiers (PAs) with parallel power-combining transformer are presented. For the high power CMOS PA design, two types of transformers, series-combining and parallel-combining, are fully analyzed and compared in detail to show the parasitic resistance and the turn ratio as the limiting factor of power combining. Based on the analysis, two kinds of parallel-combining transformers, a two-primary with a 1:2 turn ratio and a three-primary with a 1:2 turn ratio, are incorporated into the design of fully-integrated CMOS PAs in a standard 0.18-mum CMOS process. The PA with a two-primary transformer delivers 31.2 dBm of output power with 41% of power-added efficiency (PAE), and the PA with a three-primary transformer achieves 32 dBm of output power with 30% of PAE at 1.8 GHz with a 3.3-V power supply.",
"title": ""
},
{
"docid": "45b082ddf4a813d6b95098ef5592bafc",
"text": "Estimating the frequencies of elements in a data stream is a fundamental task in data analysis and machine learning. The problem is typically addressed using streaming algorithms which can process very large data using limited storage. Today’s streaming algorithms, however, cannot exploit patterns in their input to improve performance. We propose a new class of algorithms that automatically learn relevant patterns in the input data and use them to improve its frequency estimates. The proposed algorithms combine the benefits of machine learning with the formal guarantees available through algorithm theory. We prove that our learning-based algorithms have lower estimation errors than their non-learning counterparts. We also evaluate our algorithms on two real-world datasets and demonstrate empirically their performance gains.",
"title": ""
},
{
"docid": "9c67acad433540afc720ed2cab5ed1a2",
"text": "Classical approaches to performance prediction rely on two, typically antithetic, techniques: Machine Learning (ML) and Analytical Modeling (AM). ML takes a black box approach, whose accuracy strongly depends on the representativeness of the dataset used during the initial training phase. Specifically, it can achieve very good accuracy in areas of the features' space that have been sufficiently explored during the training process. Conversely, AM techniques require no or minimal training, hence exhibiting the potential for supporting prompt instantiation of the performance model of the target system. However, in order to ensure their tractability, they typically rely on a set of simplifying assumptions. Consequently, AM's accuracy can be seriously challenged in scenarios (e.g., workload conditions) in which such assumptions are not matched.\n In this paper we explore several hybrid/gray box techniques that exploit AM and ML in synergy in order to get the best of the two worlds. We evaluate the proposed techniques in case studies targeting two complex and widely adopted middleware systems: a NoSQL distributed key-value store and a Total Order Broadcast (TOB) service.",
"title": ""
},
{
"docid": "2e09cce98d095904dd486a99b955cea0",
"text": "We construct a large scale of causal knowledge in term of Fabula elements by extracting causal links from existing common sense ontology ConceptNet5. We design a Constrained Monte Carlo Tree Search (cMCTS) algorithm that allows users to specify positive and negative concepts to appear in the generated stories. cMCTS can find a believable causal story plot. We show the merits by experiments and discuss the remedy strategies in cMCTS that may generate incoherent causal plots. keywords: Fabula elements, causal story plots, constrained Monte Carlo Tree Search, user preference, believable story generation",
"title": ""
},
{
"docid": "5028d250c60a70c0ed6954581ab6cfa7",
"text": "Social Commerce as a result of the advancement of Social Networking Sites and Web 2.0 is increasing as a new model of online shopping. With techniques to improve the website using AJAX, Adobe Flash, XML, and RSS, Social Media era has changed the internet user behavior to be more communicative and active in internet, they love to share information and recommendation among communities. Social commerce also changes the way people shopping through online. Social commerce will be the new way of online shopping nowadays. But the new challenge is business has to provide the interactive website yet interesting website for internet users, the website should give experience to satisfy their needs. This purpose of research is to analyze the website quality (System Quality, Information Quality, and System Quality) as well as interaction feature (communication feature) impact on social commerce website and customers purchase intention. Data from 134 customers of social commerce website were used to test the model. Multiple linear regression is used to calculate the statistic result while confirmatory factor analysis was also conducted to test the validity from each variable. The result shows that website quality and communication feature are important aspect for customer purchase intention while purchasing in social commerce website.",
"title": ""
},
{
"docid": "97ec541daef17eb4ff0772e34ee4de48",
"text": "Neural machine translation (NMT) models are usually trained with the word-level loss using the teacher forcing algorithm, which not only evaluates the translation improperly but also suffers from exposure bias. Sequence-level training under the reinforcement framework can mitigate the problems of the word-level loss, but its performance is unstable due to the high variance of the gradient estimation. On these grounds, we present a method with a differentiable sequence-level training objective based on probabilistic n-gram matching which can avoid the reinforcement framework. In addition, this method performs greedy search in the training which uses the predicted words as context just as at inference to alleviate the problem of exposure bias. Experiment results on the NIST Chinese-to-English translation tasks show that our method significantly outperforms the reinforcement-based algorithms and achieves an improvement of 1.5 BLEU points on average over a strong baseline system.",
"title": ""
},
{
"docid": "7d74b896764837904019a0abff967065",
"text": "Asymptotic behavior of a recurrent neural network changes qualitatively at certain points in the parameter space, which are known as \\bifurcation points\". At bifurcation points, the output of a network can change discontinuously with the change of parameters and therefore convergence of gradient descent algorithms is not guaranteed. Furthermore, learning equations used for error gradient estimation can be unstable. However, some kinds of bifurcations are inevitable in training a recurrent network as an automaton or an oscillator. Some of the factors underlying successful training of recurrent networks are investigated, such as choice of initial connections, choice of input patterns, teacher forcing, and truncated learning equations.",
"title": ""
}
] |
scidocsrr
|
3624179dc3b2b68cfcce38e420b33040
|
Identifying children with autism spectrum disorder based on their face processing abnormality: A machine learning framework.
|
[
{
"docid": "6ab433155baadb12c514650f57ccaad8",
"text": "We present a systematic comparison of machine learning methods applied to the problem of fully automatic recognition of facial expressions. We explored recognition of facial actions from the facial action coding system (FACS), as well as recognition of fall facial expressions. Each video-frame is first scanned in real-time to detect approximately upright frontal faces. The faces found are scaled into image patches of equal size, convolved with a bank of Gabor energy filters, and then passed to a recognition engine that codes facial expressions into 7 dimensions in real time: neutral, anger, disgust, fear, joy, sadness, surprise. We report results on a series of experiments comparing recognition engines, including AdaBoost, support vector machines, linear discriminant analysis, as well as feature selection techniques. Best results were obtained by selecting a subset of Gabor filters using AdaBoost and then training support vector machines on the outputs of the filters selected by AdaBoost. The generalization performance to new subjects for recognition of full facial expressions in a 7-way forced choice was 93% correct, the best performance reported so far on the Cohn-Kanade FACS-coded expression dataset. We also applied the system to fully automated facial action coding. The present system classifies 18 action units, whether they occur singly or in combination with other actions, with a mean agreement rate of 94.5% with human FACS codes in the Cohn-Kanade dataset. The outputs of the classifiers change smoothly as a function of time and thus can be used to measure facial expression dynamics.",
"title": ""
}
] |
[
{
"docid": "1b4a97df029e45e8d4cf8b8c548c420a",
"text": "Today, online social networks have become powerful tools for the spread of information. They facilitate the rapid and large-scale propagation of content and the consequences of an information -- whether it is favorable or not to someone, false or true -- can then take considerable proportions. Therefore it is essential to provide means to analyze the phenomenon of information dissemination in such networks. Many recent studies have addressed the modeling of the process of information diffusion, from a topological point of view and in a theoretical perspective, but we still know little about the factors involved in it. With the assumption that the dynamics of the spreading process at the macroscopic level is explained by interactions at microscopic level between pairs of users and the topology of their interconnections, we propose a practical solution which aims to predict the temporal dynamics of diffusion in social networks. Our approach is based on machine learning techniques and the inference of time-dependent diffusion probabilities from a multidimensional analysis of individual behaviors. Experimental results on a real dataset extracted from Twitter show the interest and effectiveness of the proposed approach as well as interesting recommendations for future investigation.",
"title": ""
},
{
"docid": "b105711c0aabde844b46c3912cf78363",
"text": "CONFLICT OF INTEREST\nnone declared.\n\n\nINTRODUCTION\nThe incidence of diabetes type 2 (diabetes mellitus type 2 - DM 2) is rapidly increasing worldwide. Physical inactivity and obesity are the major determinants of the disease. Primary prevention of DM 2 entails health monitoring of people at risk category. People with impaired glycemic control are at high risk for development of DM 2 and enter the intensive supervision program for primary and secondary prevention.\n\n\nOBJECTIVE OF THE RESEARCH\nTo evaluate the impact of metformin and lifestyle modification on glycemia and obesity in patients with prediabetes.\n\n\nPATIENTS AND METHODS\nThe study was conducted on three groups of 20 patients each (total of 60 patients) aged from 45 to 80, with an abnormal glycoregulation and prediabetes. The study did not include patients who already met the diagnostic criteria for the diagnosis of diabetes. During the study period of 6 months, one group was extensively educated on changing lifestyle (healthy nutrition and increased physical activity), the second group was treated with 500 mg metformin twice a day, while the control group was advised about diet and physical activities but different from the first two groups. At beginning of the study, all patients were measured initial levels of blood glucose, HbA1C, BMI (Body Mass Index), body weight and height and waist size. Also the same measurements were taken at the end of the conducted research, 6 months later. For the assessment of diabetes control was conducted fasting plasma glucose (FPG) test and 2 hours after a glucose load, and HbA1C.\n\n\nRESULTS\nAt the beginning of the study the average HbA1C (%) values in three different groups according to the type of intervention (lifestyle changes, metformin, control group) were as follows: (6.4 ± 0.5 mmol / l), (6.5 ± 1.2 mmol / l), (6.7 ± 0.5 mmol / l). At the end of the research, the average HbA1C values were: 6.2 ± 0.3 mmol / l, 6.33 ± 0.5 mmol / l and 6.7 ± 1.4 mmol / l. In the group of patients who received intensive training on changing lifestyle or group that was treated with metformin, the average reduction in blood glucose and HbA1C remained within the reference range and there were no criteria for the diagnosis of diabetes. Unlike the control group, a group that was well educated on changing habits decreased average body weight by 4.25 kg, BMI by 1.3 and waist size by 2.5 cm. Metformin therapy led to a reduction in the average weight of 3.83 kg, BMI of 1.33 and 3.27 for waist size. Changing lifestyle (healthy diet and increased physical activity) has led to a reduction in total body weight in 60% of patients, BMI in 65% of patients, whereas metformin therapy led to a reduction of the total body weight in 50%, BMI in 45% of patients. In the control group, the overall reduction in body weight was observed in 25%, and BMI in 15% of patients.\n\n\nCONCLUSION\nModification of lifestyle, such as diet and increased physical activity or use of metformin may improve glycemic regulation, reduce obesity and prevent or delay the onset of developing DM 2.",
"title": ""
},
{
"docid": "172a35c941407bb09c8d41953dfc6d37",
"text": "Multi-task learning (MTL) is a machine learning paradigm that improves the performance of each task by exploiting useful information contained in multiple related tasks. However, the relatedness of tasks can be exploited by attackers to launch data poisoning attacks, which has been demonstrated a big threat to single-task learning. In this paper, we provide the first study on the vulnerability of MTL. Specifically, we focus on multi-task relationship learning (MTRL) models, a popular subclass of MTL models where task relationships are quantized and are learned directly from training data. We formulate the problem of computing optimal poisoning attacks on MTRL as a bilevel program that is adaptive to arbitrary choice of target tasks and attacking tasks. We propose an efficient algorithm called PATOM for computing optimal attack strategies. PATOM leverages the optimality conditions of the subproblem of MTRL to compute the implicit gradients of the upper level objective function. Experimental results on realworld datasets show that MTRL models are very sensitive to poisoning attacks and the attacker can significantly degrade the performance of target tasks, by either directly poisoning the target tasks or indirectly poisoning the related tasks exploiting the task relatedness. We also found that the tasks being attacked are always strongly correlated, which provides a clue for defending against such attacks.",
"title": ""
},
{
"docid": "b55eb410f2a2c7eb6be1c70146cca203",
"text": "Permissioned blockchains are arising as a solution to federate companies prompting accountable interactions. A variety of consensus algorithms for such blockchains have been proposed, each of which has different benefits and drawbacks. Proof-of-Authority (PoA) is a new family of Byzantine fault-tolerant (BFT) consensus algorithms largely used in practice to ensure better performance than traditional Practical Byzantine Fault Tolerance (PBFT). However, the lack of adequate analysis of PoA hinders any cautious evaluation of their effectiveness in real-world permissioned blockchains deployed over the Internet, hence on an eventually synchronous network experimenting Byzantine nodes. In this paper, we analyse two of the main PoA algorithms, named Aura and Clique, both in terms of provided guarantees and performances. First, we derive their functioning including how messages are exchanged, then we weight, by relying on the CAP theorem, consistency, availability and partition tolerance guarantees. We also report a qualitative latency analysis based on message rounds. The analysis advocates that PoA for permissioned blockchains, deployed over the Internet with Byzantine nodes, do not provide adequate consistency guarantees for scenarios where data integrity is essential. We claim that PBFT can fit better such scenarios, despite a limited loss in terms of performance.",
"title": ""
},
{
"docid": "b0ae3875b79f8453a3752d1e684abeaa",
"text": "This study applied a functional approach to the assessment of self-mutilative behavior (SMB) among adolescent psychiatric inpatients. On the basis of past conceptualizations of different forms of self-injurious behavior, the authors hypothesized that SMB is performed because of the automatically reinforcing (i.e., reinforced by oneself; e.g., emotion regulation) and/or socially reinforcing (i.e., reinforced by others; e.g., attention, avoidance-escape) properties associated with such behaviors. Data were collected from 108 adolescent psychiatric inpatients referred for self-injurious thoughts or behaviors. Adolescents reported engaging in SMB frequently, using multiple methods, and having an early age of onset. Moreover, the results supported the structural validity and reliability of the hypothesized functional model of SMB. Most adolescents engaged in SMB for automatic reinforcement, although a sizable portion endorsed social reinforcement functions as well. These findings have direct implications for the understanding, assessment, and treatment of SMB.",
"title": ""
},
{
"docid": "394410f85e2911eb95678472e35bb9e1",
"text": "The purpose of this article was to build a license plates recognition system with high accuracy at night. The system, based on regular PC, catches video frames which include a visible car license plate and processes them. Once a license plate is detected, its digits are recognized, and then checked against a database. The focus is on the modified algorithms to identify the individual characters. In this article, we use the template-matching method and neural net method together, and make some progress on the study before. The result showed that the accuracy is higher at night.",
"title": ""
},
{
"docid": "be43b90cce9638b0af1c3143b6d65221",
"text": "Reasoning on provenance information and property propagation is of significant importance in e-science since it helps scientists manage derived metadata in order to understand the source of an object, reproduce results of processes and facilitate quality control of results and processes. In this paper we introduce a simple, yet powerful reasoning mechanism based on property propagation along the transitive part-of and derivation chains, in order to trace the provenance of an object and to carry useful inferences. We apply our reasoning in semantic repositories using the CIDOC-CRM conceptual schema and its extension CRMdig, which has been develop for representing the digital and empirical provenance of digi-",
"title": ""
},
{
"docid": "d83062e4022f6282d7d9b99b8d239715",
"text": "Annexin A1 (ANXA1) is an endogenous protein with potent anti-inflammatory properties in the brain. Although ANXA1 has been predominantly studied for its binding to formyl peptide receptors (FPRs) on plasma membranes, little is known regarding whether this protein has an anti-inflammatory effect in the cytosol. Here, we investigated the mechanism by which the ANXA1 peptide Ac2-26 decreases high TNF-α production and IKKβ activity, which was caused by oxygen glucose deprivation/reperfusion (OGD/R)-induced neuronal conditioned medium (NCM) in microglia. We found that exogenous Ac2-26 crosses into the cytoplasm of microglia and inhibits both gene expression and protein secretion of TNF-α. Ac2-26 also causes a decrease in IKKβ protein but not IKKβ mRNA, and this effect is inverted by lysosome inhibitor NH4CL. Furthermore, we demonstrate that Ac2-26 induces IKKβ accumulation in lysosomes and that lysosomal-associated membrane protein 2A (LAMP-2A), not LC-3, is enhanced in microglia exposed to Ac2-26. We hypothesize that Ac2-26 mediates IKKβ degradation in lysosomes through chaperone-mediated autophagy (CMA). Interestingly, ANXA1 in the cytoplasm does not interact with IKKβ but with HSPB1, and Ac2-26 promotes HSPB1 binding to IKKβ. Furthermore, both ANXA1 and HSPB1 can interact with Hsc70 and LAMP-2A, but IKKβ only associates with LAMP-2A. Downregulation of HSPB1 or LAMP-2A reverses the degradation of IKKβ induced by Ac2-26. Taken together, these findings define an essential role of exogenous Ac2-26 in microglia and demonstrate that Ac2-26 is associated with HSPB1 and promotes HSPB1 binding to IKKβ, which is degraded by CMA, thereby reducing TNF-α expression.",
"title": ""
},
{
"docid": "fbbf7c30f7ebcd2b9bbc9cc7877309b1",
"text": "People detection is essential in a lot of different systems. Many applications nowadays tend to require people detection to achieve certain tasks. These applications come under many disciplines, such as robotics, ergonomics, biomechanics, gaming and automotive industries. This wide range of applications makes human body detection an active area of research. With the release of depth sensors or RGB-D cameras such as Micosoft Kinect, this area of research became more active, specially with their affordable price. Human body detection requires the adaptation of many scenarios and situations. Various conditions such as occlusions, background cluttering and props attached to the human body require training on custom built datasets. In this paper we present an approach to prepare training datasets to detect and track human body with attached props. The proposed approach uses rigid body physics simulation to create and animate different props attached to the human body. Three scenarios are implemented. In the first scenario the prop is closely attached to the human body, such as a person carrying a backpack. In the second scenario, the prop is slightly attached to the human body, such as a person carrying a briefcase. In the third scenario the prop is not attached to the human body, such as a person dragging a trolley bag. Our approach gives results with accuracy of 93% in identifying both the human body parts and the attached prop in all the three scenarios.",
"title": ""
},
{
"docid": "2b61a16b47d865197c6c735cefc8e3ec",
"text": "The present study investigated the relationship between trauma symptoms and a history of child sexual abuse, adult sexual assault, and physical abuse by a partner as an adult. While there has been some research examining the correlation between individual victimization experiences and traumatic stress, the cumulative impact of multiple victimization experiences has not been addressed. Subjects were recruited from psychological clinics and community advocacy agencies. Additionally, a nonclinical undergraduate student sample was evaluated. The results of this study indicate not only that victimization and revictimization experiences are frequent, but also that the level of trauma specific symptoms are significantly related to the number of different types of reported victimization experiences. The research and clinical implications of these findings are discussed.",
"title": ""
},
{
"docid": "33eebe279e80452aec3e2e5bd28a708d",
"text": "Context aware recommender systems go beyond the traditional personalized recommendation models by incorporating a form of situational awareness. They provide recommendations that not only correspond to a user's preference profile, but that are also tailored to a given situation or context. We consider the setting in which contextual information is represented as a subset of an item feature space describing short-term interests or needs of a user in a given situation. This contextual information can be provided by the user in the form of an explicit query, or derived implicitly.\n We propose a unified probabilistic model that integrates user profiles, item representations, and contextual information. The resulting recommendation framework computes the conditional probability of each item given the user profile and the additional context. These probabilities are used as recommendation scores for ranking items. Our model is an extension of the Latent Dirichlet Allocation (LDA) model that provides the capability for joint modeling of users, items, and the meta-data associated with contexts. Each user profile is modeled as a mixture of the latent topics. The discovered latent topics enable our system to handle missing data in item features. We demonstrate the application of our framework for article and music recommendation. In the latter case, the set of popular tags from social tagging Web sites are used for context descriptions. Our evaluation results show that considering context can help improve the quality of recommendations.",
"title": ""
},
{
"docid": "790de0f792c81b9e26676f800e766759",
"text": "The ubiquity of online fashion shopping demands effective recommendation services for customers. In this paper, we study two types of fashion recommendation: (i) suggesting an item that matches existing components in a set to form a stylish outfit (a collection of fashion items), and (ii) generating an outfit with multimodal (images/text) specifications from a user. To this end, we propose to jointly learn a visual-semantic embedding and the compatibility relationships among fashion items in an end-to-end fashion. More specifically, we consider a fashion outfit to be a sequence (usually from top to bottom and then accessories) and each item in the outfit as a time step. Given the fashion items in an outfit, we train a bidirectional LSTM (Bi-LSTM) model to sequentially predict the next item conditioned on previous ones to learn their compatibility relationships. Further, we learn a visual-semantic space by regressing image features to their semantic representations aiming to inject attribute and category information as a regularization for training the LSTM. The trained network can not only perform the aforementioned recommendations effectively but also predict the compatibility of a given outfit. We conduct extensive experiments on our newly collected Polyvore dataset, and the results provide strong qualitative and quantitative evidence that our framework outperforms alternative methods.",
"title": ""
},
{
"docid": "b123916f2795ab6810a773ac69bdf00b",
"text": "The acceptance of open data practices by individuals and organizations lead to an enormous explosion in data production on the Internet. The access to a large number of these data is carried out through Web services, which provide a standard way to interact with data. This class of services is known as data services. In this context, users' queries often require the composition of multiple data services to be answered. On the other hand, the data returned by a data service is not always certain due to various raisons, e.g., the service accesses different data sources, privacy constraints, etc. In this paper, we study the basic activities of data services that are affected by the uncertainty of data, more specifically, modeling, invocation and composition. We propose a possibilistic approach that treats the uncertainty in all these activities.",
"title": ""
},
{
"docid": "7c3457a5ca761b501054e76965b41327",
"text": "Background learning is a pre-processing of motion detection which is a basis step of video analysis. For the static background, many previous works have already achieved good performance. However, the results on learning dynamic background are still much to be improved. To address this challenge, in this paper, a novel and practical method is proposed based on deep auto-encoder networks. Firstly, dynamic background images are extracted through a deep auto-encoder network (called Background Extraction Network) from video frames containing motion objects. Then, a dynamic background model is learned by another deep auto-encoder network (called Background Learning Network) using the extracted background images as the input. To be more flexible, our background model can be updated on-line to absorb more training samples. Our main contributions are 1) a cascade of two deep auto-encoder networks which can deal with the separation of dynamic background and foregrounds very efficiently; 2) a method of online learning is adopted to accelerate the training of Background Extraction Network. Compared with previous algorithms, our approach obtains the best performance over six benchmark data sets. Especially, the experiments show that our algorithm can handle large variation background very well.",
"title": ""
},
{
"docid": "0066d03bf551e64b9b4a1595f1494347",
"text": "Visual Text Analytics has been an active area of interdisciplinary research (http://textvis.lnu.se/). This interactive tutorial is designed to give attendees an introduction to the area of information visualization, with a focus on linguistic visualization. After an introduction to the basic principles of information visualization and visual analytics, this tutorial will give an overview of the broad spectrum of linguistic and text visualization techniques, as well as their application areas [3]. This will be followed by a hands-on session that will allow participants to design their own visualizations using tools (e.g., Tableau), libraries (e.g., d3.js), or applying sketching techniques [4]. Some sample datasets will be provided by the instructor. Besides general techniques, special access will be provided to use the VisArgue framework [1] for the analysis of selected datasets.",
"title": ""
},
{
"docid": "b06fd59d5acdf6dd0b896a62f5d8b123",
"text": "BACKGROUND\nHippocampal volume reduction has been reported inconsistently in people with major depression.\n\n\nAIMS\nTo evaluate the interrelationships between hippocampal volumes, memory and key clinical, vascular and genetic risk factors.\n\n\nMETHOD\nTotals of 66 people with depression and 20 control participants underwent magnetic resonance imaging and clinical assessment. Measures of depression severity, psychomotor retardation, verbal and visual memory and vascular and specific genetic risk factors were collected.\n\n\nRESULTS\nReduced hippocampal volumes occurred in older people with depression, those with both early-onset and late-onset disorders and those with the melancholic subtype. Reduced hippocampal volumes were associated with deficits in visual and verbal memory performance.\n\n\nCONCLUSIONS\nAlthough reduced hippocampal volumes are most pronounced in late-onset depression, older people with early-onset disorders also display volume changes and memory loss. No clear vascular or genetic risk factors explain these findings. Hippocampal volume changes may explain how depression emerges as a risk factor to dementia.",
"title": ""
},
{
"docid": "0cb944545afbd19d1441433c621a6d66",
"text": "In this paper, we propose a fine-grained image categorization system with easy deployment. We do not use any object/part annotation (weakly supervised) in the training or in the testing stage, but only class labels for training images. Fine-grained image categorization aims to classify objects with only subtle distinctions (e.g., two breeds of dogs that look alike). Most existing works heavily rely on object/part detectors to build the correspondence between object parts, which require accurate object or object part annotations at least for training images. The need for expensive object annotations prevents the wide usage of these methods. Instead, we propose to generate multi-scale part proposals from object proposals, select useful part proposals, and use them to compute a global image representation for categorization. This is specially designed for the weakly supervised fine-grained categorization task, because useful parts have been shown to play a critical role in existing annotation-dependent works, but accurate part detectors are hard to acquire. With the proposed image representation, we can further detect and visualize the key (most discriminative) parts in objects of different classes. In the experiments, the proposed weakly supervised method achieves comparable or better accuracy than the state-of-the-art weakly supervised methods and most existing annotation-dependent methods on three challenging datasets. Its success suggests that it is not always necessary to learn expensive object/part detectors in fine-grained image categorization.",
"title": ""
},
{
"docid": "3429145583d25ba1d603b5ade11f4312",
"text": "Sequential pattern mining is an important data mining problem with broad applications. It is challenging since one may need to examine a combinatorially explosive number of possible subsequence patterns. Most of the previously developed sequential pattern mining methods follow the methodology of which may substantially reduce the number of combinations to be examined. However, still encounters problems when a sequence database is large and/or when sequential patterns to be mined are numerous and/or long. In this paper, we propose a novel sequential pattern mining method, called PrefixSpan (i.e., Prefix-projected Sequential pattern mining), which explores prefixprojection in sequential pattern mining. PrefixSpan mines the complete set of patterns but greatly reduces the efforts of candidate subsequence generation. Moreover, prefix-projection substantially reduces the size of projected databases and leads to efficient processing. Our performance study shows that PrefixSpan outperforms both the -based GSP algorithm and another recently proposed method, FreeSpan, in mining large sequence",
"title": ""
},
{
"docid": "678a90f1dc8fa7926ce15717d48a2659",
"text": "The recent advances in full human body (HB) imaging technology illustrated by the 3D human body scanner (HBS), a device delivering full HB shape data, opened up large perspectives for the deployment of this technology in various fields such as the clothing industry, anthropology, and entertainment. However, these advances also brought challenges on how to process and interpret the data delivered by the HBS in order to bridge the gap between this technology and potential applications. This paper presents a literature survey of research work on HBS data segmentation and modeling aiming at overcoming these challenges, and discusses and evaluates different approaches with respect to several requirements.",
"title": ""
},
{
"docid": "ce020748bd9bc7529036aa41dcd59a92",
"text": "In this paper a new isolated SEPIC converter which is a proper choice for PV applications, is introduced and analyzed. The proposed converter has the advantage of high voltage gain while the switch voltage stress is same as a regular SEPIC converter. The converter operating modes are discussed and design considerations are presented. Also simulation results are illustrated which justifies the theoretical analysis. Finally the proposed converter is improved using active clamp technique.",
"title": ""
}
] |
scidocsrr
|
1988ed183f2ffb98927d4ad0aaff64a5
|
Paxos Quorum Leases: Fast Reads Without Sacrificing Writes
|
[
{
"docid": "1ac8e84ada32efd6f6c7c9fdfd969ec0",
"text": "Spanner is Google's scalable, multi-version, globally-distributed, and synchronously-replicated database. It provides strong transactional semantics, consistent replication, and high performance reads and writes for a variety of Google's applications. I'll discuss the design and implementation of Spanner, as well as some of the lessons we have learned along the way. I'll also discuss some open challenges that we still see in building scalable distributed storage systems.",
"title": ""
},
{
"docid": "f10660b168700e38e24110a575b5aafa",
"text": "While the use of MapReduce systems (such as Hadoop) for large scale data analysis has been widely recognized and studied, we have recently seen an explosion in the number of systems developed for cloud data serving. These newer systems address \"cloud OLTP\" applications, though they typically do not support ACID transactions. Examples of systems proposed for cloud serving use include BigTable, PNUTS, Cassandra, HBase, Azure, CouchDB, SimpleDB, Voldemort, and many others. Further, they are being applied to a diverse range of applications that differ considerably from traditional (e.g., TPC-C like) serving workloads. The number of emerging cloud serving systems and the wide range of proposed applications, coupled with a lack of apples-to-apples performance comparisons, makes it difficult to understand the tradeoffs between systems and the workloads for which they are suited. We present the \"Yahoo! Cloud Serving Benchmark\" (YCSB) framework, with the goal of facilitating performance comparisons of the new generation of cloud data serving systems. We define a core set of benchmarks and report results for four widely used systems: Cassandra, HBase, Yahoo!'s PNUTS, and a simple sharded MySQL implementation. We also hope to foster the development of additional cloud benchmark suites that represent other classes of applications by making our benchmark tool available via open source. In this regard, a key feature of the YCSB framework/tool is that it is extensible--it supports easy definition of new workloads, in addition to making it easy to benchmark new systems.",
"title": ""
}
] |
[
{
"docid": "b4554b814d889806df0a5ff50fb0e0f8",
"text": "Recent work on searching the Semantic Web has yielded a wide range of approaches with respect to the underlying search mechanisms, results management and presentation, and style of input. Each approach impacts upon the quality of the information retrieved and the user’s experience of the search process. However, despite the wealth of experience accumulated from evaluating Information Retrieval (IR) systems, the evaluation of Semantic Web search systems has largely been developed in isolation from mainstream IR evaluation with a far less unified approach to the design of evaluation activities. This has led to slow progress and low interest when compared to other established evaluation series, such as TREC for IR or OAEI for Ontology Matching. In this paper, we review existing approaches to IR evaluation and analyse evaluation activities for Semantic Web search systems. Through a discussion of these, we identify their weaknesses and highlight the future need for a more comprehensive evaluation framework that addresses current limitations.",
"title": ""
},
{
"docid": "3fe585dbb422a88f41f1100f9b2dd477",
"text": "Synchronous reluctance motor (SynRM) is a potential candidate for high starting torque requirements of traction drives. Any demagnetization risk is prevented since there is not any permanent magnet on the rotor or stator structure. On the other hand, the high rotor starting current problem, that is common in induction machines is ignored since there is not any winding on the rotor. Indeed, absence of permanent magnet in motor structure and its simplicity leads to lower finished cost in comparison with other competitors. Also high average torque and low ripple content is important in electrical drives employed in electric vehicle applications. High amount of torque ripple is one of the problems of SynRM, which is considered in many researches. In this paper, a new design of the SynRM is proposed in order to reduce the torque ripple while maintaining the average torque. For this purpose, auxiliary flux barriers in the rotor structure are employed that reduce the torque ripple significantly. Proposed design electromagnetic performance is simulated by finite element analysis. It is shown that the proposed design reduces torque ripple significantly without any reduction in average torque.",
"title": ""
},
{
"docid": "2ddc4919771402dabedd2020649d1938",
"text": "Increase in energy demand has made the renewable resources more attractive. Additionally, use of renewable energy sources reduces combustion of fossil fuels and the consequent CO2 emission which is the principal cause of global warming. The concept of photovoltaic-Wind hybrid system is well known and currently thousands of PV-Wind based power systems are being deployed worldwide, for providing power to small, remote, grid-independent applications. This paper shows the way to design the aspects of a hybrid power system that will target remote users. It emphasizes the renewable hybrid power system to obtain a reliable autonomous system with the optimization of the components size and the improvement of the cost. The system can provide electricity for a remote located village. The main power of the hybrid system comes from the photovoltaic panels and wind generators, while the batteries are used as backup units. The optimization software used for this paper is HOMER. HOMER is a design model that determines the optimal architecture and control strategy of the hybrid system. The simulation results indicate that the proposed hybrid system would be a feasible solution for distributed generation of electric power for stand-alone applications at remote locations",
"title": ""
},
{
"docid": "7548b99b332677e01ca6d74592f62ab1",
"text": "This paper presents the prototype of a new computer simulator for the humanoid robot iCub. The iCub is a new open-source humanoid robot developed as a result of the \"RobotCub\" project, a collaborative European project aiming at developing a new open-source cognitive robotics platform. The iCub simulator has been developed as part of a joint effort with the European project \"ITALK\" on the integration and transfer of action and language knowledge in cognitive robots. This is available open-source to all researchers interested in cognitive robotics experiments with the iCub humanoid platform.",
"title": ""
},
{
"docid": "2b38ac7d46a1b3555fef49a4e02cac39",
"text": "We study the problem of representation learning in heterogeneous networks. Its unique challenges come from the existence of multiple types of nodes and links, which limit the feasibility of the conventional network embedding techniques. We develop two scalable representation learning models, namely metapath2vec and metapath2vec++. The metapath2vec model formalizes meta-path-based random walks to construct the heterogeneous neighborhood of a node and then leverages a heterogeneous skip-gram model to perform node embeddings. The metapath2vec++ model further enables the simultaneous modeling of structural and semantic correlations in heterogeneous networks. Extensive experiments show that metapath2vec and metapath2vec++ are able to not only outperform state-of-the-art embedding models in various heterogeneous network mining tasks, such as node classification, clustering, and similarity search, but also discern the structural and semantic correlations between diverse network objects.",
"title": ""
},
{
"docid": "16a1f15e8e414b59a230fb4a28c53cc7",
"text": "In this study we examined whether the effects of mental fatigue on behaviour are due to reduced action monitoring as indexed by the error related negativity (Ne/ERN), N2 and contingent negative variation (CNV) event-related potential (ERP) components. Therefore, we had subjects perform a task, which required a high degree of action monitoring, continuously for 2h. In addition we tried to relate the observed behavioural and electrophysiological changes to motivational processes and individual differences. Changes in task performance due to fatigue were accompanied by a decrease in Ne/ERN and N2 amplitude, reflecting impaired action monitoring, as well as a decrease in CNV amplitude which reflects reduced response preparation with increasing fatigue. Increasing the motivational level of our subjects resulted in changes in behaviour and brain activity that were different for individual subjects. Subjects that increased their performance accuracy displayed an increase in Ne/ERN amplitude, while subjects that increased their response speed displayed an increase in CNV amplitude. We will discuss the effects prolonged task performance on the behavioural and physiological indices of action monitoring, as well as the relationship between fatigue, motivation and individual differences.",
"title": ""
},
{
"docid": "5aa39257fd9914cd27abd04d8279d10e",
"text": "Many real-world planning problems require generating plans that maximize the parallelism inherent in a problem. There are a number of partial-order planners that generate such plans; however, in most of these planners it is unclear under what conditions the resulting plans will be correct and whether the plaltner can even find a plan if one exists. This paper identifies the underlying assumptions about when a partial plan can be executed in parallel, defines the classes of parallel plans that can be generated by different partialorder planners, and describes the changes required to turn ucPoP into a parallel execution planner. In \"addition, we describe how this planner can be applied to the problem of query access planning, where parallel execution produces ubstantial reductions in overall execution time.",
"title": ""
},
{
"docid": "47bf54c0d51596f39929e8f3e572a051",
"text": "Parameterizations of triangulated surfaces are used in an increasing number of mesh processing applications for various purposes. Although demands vary, they are often required to preserve the surface metric and thus minimize angle, area and length deformation. However, most of the existing techniques primarily target at angle preservation while disregarding global area deformation. In this paper an energy functional is proposed, that quantifies angle and global area deformations simultaneously, while the relative importance between angle and area preservation can be controlled by the user through a parameter. We show how this parameter can be chosen to obtain parameterizations, that are optimized for an uniform sampling of the surface of a model. Maps obtained by minimizing this energy are well suited for applications that desire an uniform surface sampling, like re-meshing or mapping regularly patterned textures. Besides being invariant under rotation and translation of the domain, the energy is designed to prevent face flips during minimization and does not require a fixed boundary in the parameter domain. Although the energy is nonlinear, we show how it can be minimized efficiently using non-linear conjugate gradient methods in a hierarchical optimization framework and prove the convergence of the algorithm. The ability to control the tradeoff between the degree of angle and global area preservation is demonstrated for several models of varying complexity.",
"title": ""
},
{
"docid": "6a4815ee043e83994e4345b6f4352198",
"text": "Object detection – the computer vision task dealing with detecting instances of objects of a certain class (e.g ., ’car’, ’plane’, etc.) in images – attracted a lot of attention from the community during the last 5 years. This strong interest can be explained not only by the importance this task has for many applications but also by the phenomenal advances in this area since the arrival of deep convolutional neural networks (DCNN). This article reviews the recent literature on object detection with deep CNN, in a comprehensive way, and provides an in-depth view of these recent advances. The survey covers not only the typical architectures (SSD, YOLO, Faster-RCNN) but also discusses the challenges currently met by the community and goes on to show how the problem of object detection can be extended. This survey also reviews the public datasets and associated state-of-the-art algorithms.",
"title": ""
},
{
"docid": "ccd663355ff6070b3668580150545cea",
"text": "In this paper, the user effects on mobile terminal antennas at 28 GHz are statistically investigated with the parameters of body loss, coverage efficiency, and power in the shadow. The data are obtained from the measurements of 12 users in data and talk modes, with the antenna placed on the top and bottom of the chassis. In the measurements, the users hold the phone naturally. The radiation patterns and shadowing regions are also studied. It is found that a significant amount of power can propagate into the shadow of the user by creeping waves and diffractions. A new metric is defined to characterize this phenomenon. A mean body loss of 3.2–4 dB is expected in talk mode, which is also similar to the data mode with the bottom antenna. A body loss of 1 dB is expected in data mode with the top antenna location. The variation of the body loss between the users at 28 GHz is less than 2 dB, which is much smaller than that of the conventional cellular bands below 3 GHz. The coverage efficiency is significantly reduced in talk mode, but only slightly affected in data mode.",
"title": ""
},
{
"docid": "4d9cf5a29ebb1249772ebb6a393c5a4e",
"text": "This paper presents a novel strategy for high-fidelity image restoration by characterizing both local smoothness and nonlocal self-similarity of natural images in a unified statistical manner. The main contributions are three-fold. First, from the perspective of image statistics, a joint statistical modeling (JSM) in an adaptive hybrid space-transform domain is established, which offers a powerful mechanism of combining local smoothness and nonlocal self-similarity simultaneously to ensure a more reliable and robust estimation. Second, a new form of minimization functional for solving the image inverse problem is formulated using JSM under a regularization-based framework. Finally, in order to make JSM tractable and robust, a new Split Bregman-based algorithm is developed to efficiently solve the above severely underdetermined inverse problem associated with theoretical proof of convergence. Extensive experiments on image inpainting, image deblurring, and mixed Gaussian plus salt-and-pepper noise removal applications verify the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "afbd52acb39600e8a0804f2140ebf4fc",
"text": "This paper presents the case study of a non-intrusive porting of a monolithic C++ library for real-time 3D hand tracking, to the domain of edge-based computation. Towards a proof of concept, the case study considers a pair of workstations, a computationally powerful and a computationallyweak one. Bywrapping the C++ library in Java container and by capitalizing on a Java-based offloading infrastructure that supports both CPU and GPGPU computations, we are able to establish automatically the required serverclient workflow that best addresses the resource allocation problem in the effort to execute from the weak workstation. As a result, the weak workstation can perform well at the task, despite lacking the sufficient hardware to do the required computations locally. This is achieved by offloading computations which rely on GPGPU, to the powerful workstation, across the network that connects them. We show the edge-based computation challenges associated with the information flow of the ported algorithm, demonstrate how we cope with them, and identify what needs to be improved for achieving even better performance.",
"title": ""
},
{
"docid": "a473465e2e567f260089bb39806f79a6",
"text": "The objective of the study presented was to determine the prevalence of oral problems--eg, dental erosion, rough surfaces, pain--among young competitive swimmers in India, because no such studies are reported. Its design was a cross-sectional study with a questionnaire and clinical examination protocols. It was conducted in a community setting on those who were involved in regular swimming in pools. Questionnaires were distributed to swimmers at the 25th State Level Swimming Competition, held at Thane Municipal Corporation's Swimming Pool, India. Those who returned completed questionnaires were also clinically examined. Questionnaires were analyzed and clinical examinations focused on either the presence or absence of dental erosions and rough surfaces. Reported results were on 100 swimmers who met the inclusion criteria. They included 75 males with a mean age of 18.6 ± 6.3 years and 25 females with a mean age of 15.3 ± 7.02 years. Among them, 90% showed dental erosion, 94% exhibited rough surfaces, and 88% were found to be having tooth pain of varying severity. Erosion and rough surfaces were found to be directly proportional to the duration of swimming. The authors concluded that the prevalence of dental erosion, rough surfaces, and pain is found to be very common among competitive swimmers. They recommend that swimmers practice good preventive measures and clinicians evaluate them for possible swimmer's erosion.",
"title": ""
},
{
"docid": "38024169edcf1272efc7013b68d1c5cb",
"text": "Fractal dimension measures the geometrical complexity of images. Lacunarity being a measure of spatial heterogeneity can be used to differentiate between images that have similar fractal dimensions but different appearances. This paper presents a method to combine fractal dimension (FD) and lacunarity for better texture recognition. For the estimation of the fractal dimension an improved algorithm is presented. This algorithm uses new box-counting measure based on the statistical distribution of the gray levels of the ‘‘boxes’’. Also for the lacunarity estimation, new and faster gliding-box method is proposed, which utilizes summed area tables and Levenberg–Marquardt method. Methods are tested using Brodatz texture database (complete set), a subset of the Oulu rotation invariant texture database (Brodatz subset), and UIUC texture database (partial). Results from the tests showed that combining fractal dimension and lacunarity can improve recognition of textures. & 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "458633abcbb030b9e58e432d5b539950",
"text": "In many computer vision tasks, we expect a particular behavior of the output with respect to rotations of the input image. If this relationship is explicitly encoded, instead of treated as any other variation, the complexity of the problem is decreased, leading to a reduction in the size of the required model. In this paper, we propose the Rotation Equivariant Vector Field Networks (RotEqNet), a Convolutional Neural Network (CNN) architecture encoding rotation equivariance, invariance and covariance. Each convolutional filter is applied at multiple orientations and returns a vector field representing magnitude and angle of the highest scoring orientation at every spatial location. We develop a modified convolution operator relying on this representation to obtain deep architectures. We test RotEqNet on several problems requiring different responses with respect to the inputs’ rotation: image classification, biomedical image segmentation, orientation estimation and patch matching. In all cases, we show that RotEqNet offers extremely compact models in terms of number of parameters and provides results in line to those of networks orders of magnitude larger.",
"title": ""
},
{
"docid": "a488509590cd496669bdcc3ce8cc5fe5",
"text": "Ghrelin is an endogenous ligand for the growth hormone secretagogue receptor and a well-characterized food intake regulatory peptide. Hypothalamic ghrelin-, neuropeptide Y (NPY)-, and orexin-containing neurons form a feeding regulatory circuit. Orexins and NPY are also implicated in sleep-wake regulation. Sleep responses and motor activity after central administration of 0.2, 1, or 5 microg ghrelin in free-feeding rats as well as in feeding-restricted rats (1 microg dose) were determined. Food and water intake and behavioral responses after the light onset injection of saline or 1 microg ghrelin were also recorded. Light onset injection of ghrelin suppressed non-rapid-eye-movement sleep (NREMS) and rapid-eye-movement sleep (REMS) for 2 h. In the first hour, ghrelin induced increases in behavioral activity including feeding, exploring, and grooming and stimulated food and water intake. Ghrelin administration at dark onset also elicited NREMS and REMS suppression in hours 1 and 2, but the effect was not as marked as that, which occurred in the light period. In hours 3-12, a secondary NREMS increase was observed after some doses of ghrelin. In the feeding-restricted rats, ghrelin suppressed NREMS in hours 1 and 2 and REMS in hours 3-12. Data are consistent with the notion that ghrelin has a role in the integration of feeding, metabolism, and sleep regulation.",
"title": ""
},
{
"docid": "05049ac85552c32f2c98d7249a038522",
"text": "Remote sensing tools are increasingly being used to survey forest structure. Most current methods rely on GPS signals, which are available in above-canopy surveys or in below-canopy surveys of open forests, but may be absent in below-canopy environments of dense forests. We trialled a technology that facilitates mobile surveys in GPS-denied below-canopy forest environments. The platform consists of a battery-powered UAV mounted with a LiDAR. It lacks a GPS or any other localisation device. The vehicle is capable of an 8 min flight duration and autonomous operation but was remotely piloted in the present study. We flew the UAV around a 20 m × 20 m patch of roadside trees and developed postprocessing software to estimate the diameter-at-breast-height (DBH) of 12 trees that were detected by the LiDAR. The method detected 73% of trees greater than 200 mm DBH within 3 m of the flight path. Smaller and more distant trees could not be detected reliably. The UAV-based DBH estimates of detected trees were positively correlated with the humanbased estimates (R = 0.45, p = 0.017) with a median absolute error of 18.1%, a root-meansquare error of 25.1% and a bias of −1.2%. We summarise the main current limitations of this technology and outline potential solutions. The greatest gains in precision could be achieved through use of a localisation device. The long-term factor limiting the deployment of below-canopy UAV surveys is likely to be battery technology.",
"title": ""
},
{
"docid": "3baafb85e1b50d759f1a6033295dc9fd",
"text": "A 12-year-old girl with a history of alopecia areata and vitiligo presented with an asymptomatic brownish dirt-like lesion on the left postauricular skin of approximately 3 years of duration. The patient and her mother tried to clean the “dirt” with water and soap without success. There was no history of rapid weight gain. She had no history of an inflammatory dermatosis in the affected area. Physical examination revealed a dirt-like brownish plaque on the left postauricular skin (Figure 1). Rubbing of the lesion with a 70% isopropyl alcohol-soaked gauze pad and pressure resulted in complete disappearance of the lesion (Figure 2). A diagnosis of terra firma-forme dermatosis was, thus, confirmed. Terra firma-forme dermatosis is characterized by an asymptomatic brownish-black, dirt-like patch/plaque. Affected individuals often have normal hygiene habits. Characteristically, the lesion cannot be removed by conventional washing with soap and water but can be removed by wiping with isopropyl alcohol while applying some pressure. Terra firmaforme dermatosis is most frequently seen in prepubertal children and adolescents. It is believed that the condition results from delayed maturation of keratinocytes with incomplete development of keratin squames, and retention of keratinocytes and melanin within the epidermis. Sites of predilection include the neck, followed by the ankles and trunk. The main differential diagnoses are dermatosis neglecta and acanthosis nigricans. Dermatosis neglecta typically affects individuals of any age with neglected hygiene. The lesions can be removed with normal washing with soap and water as well as with alcohol swab or cotton ball. The lesion of acanthosis nigricans consists of dark, velvety thickening of the skin usually on the nape and sides of the neck. The condition is most commonly associated with obesity. The hyperpigmentation or “dirt” cannot be removed either by normal washing with soap and water or alcohol swab or cotton ball. ■",
"title": ""
},
{
"docid": "5764bcf220280c4c3be28375cdcbce26",
"text": "This paper introduces a data-driven process for designing and fabricating materials with desired deformation behavior. Our process starts with measuring deformation properties of base materials. For each base material we acquire a set of example deformations, and we represent the material as a non-linear stress-strain relationship in a finite-element model. We have validated our material measurement process by comparing simulations of arbitrary stacks of base materials with measured deformations of fabricated material stacks. After material measurement, our process continues with designing stacked layers of base materials. We introduce an optimization process that finds the best combination of stacked layers that meets a user's criteria specified by example deformations. Our algorithm employs a number of strategies to prune poor solutions from the combinatorial search space. We demonstrate the complete process by designing and fabricating objects with complex heterogeneous materials using modern multi-material 3D printers.",
"title": ""
},
{
"docid": "9bcc2b61333bd0490857edac99e797c7",
"text": "The performance of value and policy iteration can be dramatically improved by eliminating redundant or useless backups, and by backing up states in the right order. We study several methods designed to accelerate these iterative solvers, including prioritization, partitioning, and variable reordering. We generate a family of algorithms by combining several of the methods discussed, and present extensive empirical evidence demonstrating that performance can improve by several orders of magnitude for many problems, while preserving accuracy and convergence guarantees.",
"title": ""
}
] |
scidocsrr
|